SC19 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Abstract: Data parallelism has become the de facto standard for training Deep Neural Networks on multiple processing units. In this work we propose DC-S3GD, a decentralized (without Parameter Server) stale-synchronous version of the Delay-Compensated Asynchronous Stochastic Gradient Descent (DC-ASGD) algorithm. In our approach, we allow computation and communication to overlap by averaging in parameter space and compensating the inherent error with a first-order correction of the locally computed gradients. We demonstrate the effectiveness of our approach by training Convolutional Neural Networks with large batches and achieving state-of-the-art results.
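The compensation idea can be illustrated with a short sketch. Below is a minimal NumPy example, assuming the DC-ASGD-style first-order correction g + lam * g * g * (w_current - w_stale) together with a simple peer-averaging update; the function names, the lam value, and the overall step structure are illustrative assumptions, not the authors' implementation.

import numpy as np

def delay_compensated_gradient(grad, w_current, w_stale, lam=0.04):
    # First-order delay compensation in the style of DC-ASGD: the Hessian is
    # approximated by the element-wise product grad * grad, giving
    #     g_dc = g + lam * g * g * (w_current - w_stale).
    # lam is a tunable compensation strength (the default here is an assumption).
    return grad + lam * grad * grad * (w_current - w_stale)

def stale_synchronous_step(w_local, grad_local, peer_weights, lr=0.1, lam=0.04):
    # Hypothetical decentralized update: average the (possibly stale) parameters
    # received from peers with the local ones, then apply the locally computed
    # gradient corrected for the resulting shift in parameter space.
    w_avg = np.mean(np.stack([w_local] + list(peer_weights)), axis=0)
    g_dc = delay_compensated_gradient(grad_local, w_avg, w_local, lam)
    return w_avg - lr * g_dc

# Toy usage with random vectors standing in for one layer's weights.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=8)
    g = rng.normal(size=8)
    peers = [w + rng.normal(scale=0.01, size=8) for _ in range(3)]
    print(stale_synchronous_step(w, g, peers))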
