Participants are invited to bring their own AI training scripts to the workshop, where they will receive personalized support to adapt and run them on LUMI's advanced GPU system. Whether you aim to leverage a single GPU or scale up to multiple GPUs, our workshop will provide valuable insights and practical skills to enhance your AI projects with LUMI's powerful computing infrastructure.
Online Attendance Option
For those unable to attend in person, we are pleased to offer the option to join the lectures online. While the interactive hands-on exercises and personalized support for implementing your own workflows will be exclusive to in-person attendees, remote participants will still benefit from the comprehensive lectures streamed live from the workshop.
Requirements
Participants are expected to have basic experience with:
- Working on a Linux command line
- Using Python and at least one of the Python AI frameworks PyTorch, TensorFlow, or JAX
- Training an AI model on at least a single GPU, e.g. using a laptop, workstation, or cloud service
- Managing Python environments, e.g. using the Conda and/or pip package managers
Participants are expected to bring a laptop and charger to the workshop, as well as, if needed, a travel power adapter compatible with the Type K outlets used in Denmark.
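To check that your own setup meets the single-GPU training requirement above, a quick PyTorch sanity check can help (illustrative only; note that PyTorch's ROCm builds, as used on LUMI, also expose AMD GPUs through the `torch.cuda` namespace):

```python
import torch

# Report whether a GPU backend is available and how many devices are visible.
# On ROCm builds of PyTorch, AMD GPUs are also reported via torch.cuda.
print("GPU available:", torch.cuda.is_available())
print("Device count:", torch.cuda.device_count())
```

If this prints `GPU available: True`, your local environment can run the kind of single-GPU training assumed as a prerequisite.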
Learning outcomes
By attending the workshop, you will gain an understanding of the LUMI-G architecture from an AI training perspective, including an introduction to SLURM, ROCm, the Lustre and LUMI-O storage systems, and the Slingshot 11 interconnect. Specifically, you will:
- Learn to utilize existing AI containers on LUMI and build your own using the container build tool, cotainr
- Learn to distribute AI workloads across multiple GPUs within a single LUMI-G node
- Explore strategies for scaling AI workloads across numerous GPUs distributed over several LUMI-G nodes
- Gain insight into advanced topics for optimizing AI training processes on the LUMI supercomputer
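As a small taste of the multi-GPU distribution topics listed above, the sketch below wraps a toy model in PyTorch's `DistributedDataParallel`. It is a minimal single-process demo using the CPU `gloo` backend; on LUMI-G you would typically launch one process per GPU and use a ROCm-enabled backend, with launch details covered in the workshop:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process demo on CPU with the "gloo" backend. In a real multi-GPU
# job, rank/world_size and the rendezvous address come from the launcher
# (e.g. one process per GPU started by the scheduler).
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = torch.nn.Linear(8, 1)           # toy model
ddp_model = DDP(model)                   # gradients are all-reduced across ranks
optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.1)

# One training step on random data.
x, y = torch.randn(4, 8), torch.randn(4, 1)
loss = torch.nn.functional.mse_loss(ddp_model(x), y)
loss.backward()
optimizer.step()
print("step done, loss:", loss.item())

dist.destroy_process_group()
```

With more than one process, `DistributedDataParallel` averages gradients across all ranks during `backward()`, so each replica takes the same optimizer step.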
Agenda
The workshop consists of a mix of short lectures and hands-on exercises that cover the following key topics:
- LUMI-G architecture overview and its applications in AI
- Introduction to the LUMI web interface for development and monitoring
- Using the AI framework PyTorch on LUMI
- Building and deploying custom AI containers on LUMI
- Strategies for scaling AI workloads across multiple GPUs
- Personalized support for adapting and running your own AI training script on LUMI
Each day will run from 9:00 to 16:30 CEST, with breaks scheduled throughout.
Registration
Deadline: May 17, 2024, at 16:00 CEST