You are invited to a hybrid workshop on Data Parallelism: How to Train Deep Learning Models on Multiple GPUs, organized by the National Competence Centers in HPC of the Czech Republic on 4 October 2023.
At the workshop, you will learn how to achieve maximum throughput during data-parallel deep learning training with multiple GPUs. You will gain an understanding of the algorithmic considerations specific to achieving high performance and accuracy in multi-GPU training. Practical distribution of training across multiple GPUs using PyTorch Distributed Data Parallel will also be covered (a brief illustrative sketch follows below).
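To give a flavor of the hands-on part, the following is a minimal sketch of a PyTorch DistributedDataParallel training loop. It is not the workshop's actual material: the model, dataset, and hyperparameters are illustrative placeholders, and it assumes a torchrun launch with one process per GPU.

```python
# Minimal DistributedDataParallel sketch (illustrative only).
# Launch with: torchrun --nproc_per_node=<num_gpus> ddp_sketch.py
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    # torchrun sets LOCAL_RANK, RANK, and WORLD_SIZE for each process
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    # Toy model and synthetic data; a real network and dataset would go here
    model = nn.Linear(32, 10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    dataset = TensorDataset(torch.randn(1024, 32), torch.randint(0, 10, (1024,)))
    sampler = DistributedSampler(dataset)  # shards the data across processes
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle the per-process shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()   # gradients are all-reduced across GPUs here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```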
Before the workshop, please create an NVIDIA developer account using the same email address you used for workshop registration.