SC19 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Poster 142: Training Deep Neural Networks Directly on Hundred-Million-Pixel Histopathology Images on a Large-Scale GPU Cluster

Authors: Chi-Chung Chen (AetherAI, Taiwan), Wen-Yu Chuang (Chang-Gung Memorial Hospital, Taiwan), Wei-Hsiang Yu (AetherAI, Taiwan), Hsi-Ching Lin (National Center for High-Performance Computing (NCHC), Taiwan), Shuen-Tai Wang (National Center for High-Performance Computing (NCHC), Taiwan), Fang-An Kuo (National Center for High-Performance Computing (NCHC), Taiwan), Chao-Chun Chuang (National Center for High-Performance Computing (NCHC), Taiwan), Chao-Yuan Yeh (AetherAI, Taiwan)

Abstract: Deep learning for digital pathology is challenging because the resolution of whole-slide images (WSIs) is extremely high, often in the billions of pixels. The most common approach is the patch-based method, in which WSIs are divided into small patches to train convolutional neural networks (CNNs). This approach has significant drawbacks: to obtain ground truth for individual patches, detailed annotations by pathologists are required, and this laborious process has become the major impediment to the development of digital pathology AI. End-to-end WSI training, however, faces the difficulty of fitting the task into limited GPU memory. In this work, we improved the efficiency of using system memory for GPU compute by 411% through memory optimization and deployed the training pipeline on an 8-node, 32-GPU distributed system, achieving a 147.28x speedup. We demonstrated that a CNN is capable of learning features without detailed annotations. The trained CNN correctly classifies cancerous specimens, with performance closely matching that of patch-based methods.
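The patch-based method the abstract contrasts against can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the `extract_patches` helper and the 256-pixel patch size are assumptions for demonstration, and production pipelines typically read tiles lazily (e.g. via OpenSlide) rather than loading a full WSI into memory.

```python
import numpy as np

def extract_patches(wsi, patch_size):
    """Divide a whole-slide image array into non-overlapping square patches.

    Hypothetical helper illustrating the patch-based approach: each patch
    would then need its own pathologist-provided label for CNN training,
    which is the annotation burden the poster's end-to-end method avoids.
    """
    h, w = wsi.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(wsi[y:y + patch_size, x:x + patch_size])
    return patches

# Toy example: a 512x512 "slide" split into four 256x256 patches.
# A real WSI at ~100,000 x 100,000 pixels would yield ~150,000 such
# patches, each requiring a ground-truth label under this scheme.
slide = np.zeros((512, 512, 3), dtype=np.uint8)
patches = extract_patches(slide, 256)
```

At gigapixel scale the number of patches, and therefore the number of labels needed, grows quadratically with slide dimensions, which motivates training directly on the whole slide.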

Best Poster Finalist (BP): no

