Tutorial: High Performance Distributed Deep Learning: A Beginner's Guide
Event Type
Tutorial
Registration Categories
TUT
Tags
Deep Learning
Introductory
Time
Sunday, 17 November 2019, 8:30am - 12pm
Location
201
Description
The recent advances in Deep Learning (DL) have led to many exciting challenges and opportunities for Computer Science and AI researchers alike. Modern DL frameworks like TensorFlow, PyTorch, Cognitive Toolkit, Caffe2, and several others have emerged, offering the ease of use and flexibility needed to train and deploy various types of Deep Neural Networks (DNNs).

In this tutorial, we will provide an overview of interesting trends in DNN design and of how cutting-edge hardware architectures are playing a key role in moving the field forward, along with a survey of different DNN architectures and DL frameworks. Most DL frameworks started with a single-node/single-GPU design; however, approaches to parallelize DNN training are being actively explored, and the DL community has developed several distributed training designs that exploit communication runtimes like gRPC, MPI, and NCCL. We will highlight new challenges and opportunities for communication runtimes to efficiently support distributed DNN training, as well as some of our co-design efforts to utilize CUDA-aware MPI for large-scale DNN training on modern GPU clusters. Finally, hands-on exercises will give attendees first-hand experience running distributed DNN training experiments on a modern GPU cluster.
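
For readers who want a concrete picture of what such an exercise can look like, below is a minimal, hypothetical sketch of data-parallel DNN training using PyTorch's DistributedDataParallel with the NCCL backend. It is an illustration only, not taken from the tutorial's materials, which may use different frameworks, communication runtimes (e.g., CUDA-aware MPI), or job launchers.

    # Minimal data-parallel training sketch (illustrative, not the tutorial's code).
    # Launch with, e.g.: torchrun --nproc_per_node=<num_gpus> ddp_example.py
    import os
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets RANK, LOCAL_RANK, WORLD_SIZE, etc. in the environment.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # Toy model: each rank holds a full replica; DDP averages gradients
        # across ranks via NCCL allreduce during backward().
        model = nn.Linear(1024, 10).cuda(local_rank)
        model = DDP(model, device_ids=[local_rank])
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_fn = nn.CrossEntropyLoss()

        for step in range(100):
            # Synthetic per-rank batch; a real job would shard a dataset
            # across ranks with a DistributedSampler.
            x = torch.randn(32, 1024, device=local_rank)
            y = torch.randint(0, 10, (32,), device=local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()   # gradients are allreduced here
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

The same data-parallel pattern underlies MPI-based designs such as Horovod or CUDA-aware MPI allreduce; only the launcher and communication runtime differ.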