SC19 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Channel and Filter Parallelism for Large-Scale CNN Training

Authors: Nikoli Dryden (University of Illinois, Lawrence Livermore National Laboratory), Naoya Maruyama (Lawrence Livermore National Laboratory), Tim Moon (Lawrence Livermore National Laboratory), Tom Benson (Lawrence Livermore National Laboratory), Marc Snir (University of Illinois), Brian Van Essen (Lawrence Livermore National Laboratory)

Abstract: Accelerating large-scale CNN training is needed to keep training times reasonable as datasets grow larger and models become more complex. Existing frameworks primarily scale using data parallelism, but this is limited by the mini-batch size, which cannot grow arbitrarily. We introduce three algorithms that partition channel or filter data to exploit parallelism beyond the sample dimension. Further, they partition the parameters of convolutional layers, replacing global allreduces with segmented allreduces: smaller, concurrent allreduces among disjoint processor sets. These algorithms enable strong scaling, reduced communication overhead, and reduced memory pressure, enabling training of very wide CNNs.
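The core communication idea above, replacing one global allreduce over all ranks with concurrent allreduces of disjoint parameter segments among disjoint processor sets, can be illustrated with a toy NumPy simulation. This is a sketch only, not the authors' implementation: the functions, the even segment partitioning, and the group layout are all assumptions made for illustration (a real version would use MPI communicators, e.g. one per processor group).

```python
import numpy as np

def global_allreduce(grads):
    """Simulate a global sum-allreduce: every rank contributes its full
    gradient vector and receives the total (hypothetical helper)."""
    total = np.sum(grads, axis=0)
    return [total.copy() for _ in grads]

def segmented_allreduce(grads, groups):
    """Simulate segmented allreduces (illustrative sketch).

    grads:  list of per-rank gradient vectors, all the same length.
    groups: disjoint lists of rank indices; group s is assumed to own
            the s-th equal-length segment of the parameter vector.
    Each group sums only its own segment, among only its own ranks,
    so the reductions are smaller and can proceed concurrently.
    """
    nseg = len(groups)
    seg_len = grads[0].size // nseg  # assume even division for simplicity
    out = [g.copy() for g in grads]
    for s, ranks in enumerate(groups):
        lo, hi = s * seg_len, (s + 1) * seg_len
        seg_sum = sum(grads[r][lo:hi] for r in ranks)  # reduce within group
        for r in ranks:
            out[r][lo:hi] = seg_sum  # only group members get this segment
    return out
```

In this toy model, each rank only communicates a fraction of the parameters with a fraction of the processors, which is the source of the reduced allreduce overhead the abstract reports; the actual algorithms partition channels or filters of convolutional layers rather than a flat vector.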

We demonstrate improved strong and weak scaling, including up to 4.1x reductions in training time for residual networks and 4x reductions in allreduce overhead. We also show that wider models provide improved accuracy on ImageNet. We study the current limitations of our algorithms and provide a direction for future optimizations of large-scale deep learning frameworks.

