SC19 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Poster 135: High-Performance Deep Learning via a Single Building Block

Authors: Evangelos Georganas (Intel Corporation), Kunal Banerjee (Intel Corporation), Dhiraj Kalamkar (Intel Corporation), Sasikanth Avancha (Intel Corporation), Anand Venkat (Intel Corporation), Michael Anderson (Intel Corporation), Greg Henry (Intel Corporation), Hans Pabst (Intel Corporation), Alexander Heinecke (Intel Corporation)

Abstract: Deep learning (DL) is one of the most prominent branches of machine learning. Due to the immense computational cost of DL workloads, industry and academia have developed DL libraries with highly specialized kernels for each workload/architecture, leading to numerous, complex code bases that strive for performance yet are hard to maintain and do not generalize. In this work, we introduce the batch-reduce GEMM kernel and show how the most popular DL algorithms can be formulated with this kernel as their basic building block. Consequently, developing a DL library reduces to mere (potentially automatic) tuning of the loops around this single optimized kernel. By exploiting our kernel, we implement training and inference primitives for Recurrent Neural Networks, Convolutional Neural Networks, and Multilayer Perceptrons in just 3K lines of high-level code. Our primitives outperform vendor-optimized libraries on multi-node CPU clusters. We also provide CNN kernels targeting GPUs. Finally, we demonstrate that the batch-reduce GEMM kernel, used within a tensor compiler, yields high-performance CNN primitives.
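The semantics of the batch-reduce GEMM operation can be sketched as follows: a batch of matrix products is accumulated into a single output block, C += Σᵢ Aᵢ · Bᵢ. This is a minimal NumPy illustration of that contract only; the actual kernel described in the poster is a JIT-generated, architecture-specific routine, and the function name here is hypothetical.

```python
import numpy as np

def batch_reduce_gemm(C, A_blocks, B_blocks):
    """Accumulate a batch of matrix products into one output block:
    C += sum_i A_blocks[i] @ B_blocks[i].
    Illustrative reference semantics only, not the optimized kernel."""
    for A, B in zip(A_blocks, B_blocks):
        C += A @ B
    return C

# Example: reduce three 4x8 * 8x4 products into a single 4x4 block,
# as a convolution or RNN primitive would do over its inner tensor blocks.
rng = np.random.default_rng(0)
A_blocks = [rng.standard_normal((4, 8)) for _ in range(3)]
B_blocks = [rng.standard_normal((8, 4)) for _ in range(3)]
C = np.zeros((4, 4))
batch_reduce_gemm(C, A_blocks, B_blocks)
```

Because the reduction happens inside the kernel, the output block C stays resident in registers/cache across the whole batch, which is what lets the surrounding DL primitives degenerate to simple loop tuning around this one routine.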

Best Poster Finalist (BP): no

