SC19 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

MLPerf: A Benchmark for Machine Learning


Authors: Tom St. John (Tesla Inc), Peter Mattson (Google Brain)

Abstract: Machine learning applications are rapidly expanding into scientific domains and challenging assumptions underlying traditional high performance computing workloads. This BoF presents MLPerf, a community-driven system performance benchmark that spans a range of machine learning tasks. The speakers at this BoF are experts in high performance computing, machine learning, and computer architecture, representing academia, government research organizations, and private industry. The goal of this session is to introduce MLPerf to the broader HPC community and solicit input from interested parties to drive further adoption of the benchmark.

Long Description: Deep learning is transforming the field of machine learning (ML) from theory to practice. Following the widespread adoption of machine learning over the past several years, ML workloads now stand alongside traditional scientific computing workloads in the high performance computing application space. These workloads have sparked a renaissance in computer system design. Both academia and industry are scrambling to integrate ML-centric designs into their products, and numerous research efforts are focused on scaling ML problems up to extreme-scale systems.

Despite the breakneck pace of innovation, there is a crucial issue affecting the research and industry communities at large: how to enable fair and useful benchmarking of ML software frameworks, ML hardware accelerators, and ML systems. The ML field requires systematic benchmarking that is both representative of real-world use cases and useful for making fair comparisons across different software and hardware platforms.

MLPerf answers the call. MLPerf is a machine learning benchmark standard driven by industry (40+ companies) and a broad community of engineers and researchers (1000+). The benchmark suite comprises a set of key machine learning training and inference workloads representative of important production use cases, ranging from image classification and object detection to recommendation. MLPerf has already completed two rounds of training results and has recently released an inference benchmark suite as well.
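For training, MLPerf's headline metric is time-to-train: the wall-clock time required to train a model to a fixed quality target. The Python sketch below illustrates the idea only; it is not MLPerf's actual harness, and train_one_epoch, evaluate, and the accuracy target are hypothetical stand-ins.

    import time

    TARGET_QUALITY = 0.759  # illustrative accuracy target; real MLPerf targets vary per benchmark

    def time_to_train(model, train_one_epoch, evaluate, max_epochs=100):
        # train_one_epoch and evaluate are hypothetical stand-ins for a real
        # training pass and validation pass on the benchmark dataset.
        start = time.perf_counter()
        for epoch in range(1, max_epochs + 1):
            train_one_epoch(model)
            if evaluate(model) >= TARGET_QUALITY:
                # Report elapsed wall-clock time and epochs to reach target.
                return time.perf_counter() - start, epoch
        raise RuntimeError("target quality not reached within max_epochs")

Scoring by time to a fixed quality target, rather than by raw throughput alone, prevents submissions from trading model accuracy for speed.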

In this session, we will focus on the following topics of discussion:

- A senior member of the MLPerf committee will present the structure of the benchmark suite and the design choices made during its creation.

- Key stakeholders will present their perspectives on MLPerf and explain how it provides value to their organizations.

- One or more representatives from national HPC research centers will discuss their unique needs when quantifying the performance of machine learning workloads on large-scale systems, and how these requirements should be considered in the evolution of the MLPerf suite.

- We will host an interactive community session where interested members of the audience can ask the speakers questions, driving discussion on how best to address the needs of the ML-oriented HPC community.


URL: https://mlperf.org/

