SC19 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

AIBench: Toward a Comprehensive AI Benchmark Suite for HPC, Datacenter, Edge, and IoT


Authors: Jianfeng Zhan (Institute of Computing Technology, Chinese Academy of Sciences), Xiaoyi Lu (Ohio State University), Wanling Gao (Institute of Computing Technology, Chinese Academy of Sciences)

Abstract: As diverse communities pay great attention to innovative AI and machine learning algorithms, architectures, and systems, the pressure to benchmark them rises. However, the complexity, diversity, frequent churn, and rapid evolution of AI workloads and systems pose great challenges for AI benchmarking. The aim of this BoF is to discuss how to build a comprehensive AI benchmark suite across different communities, with an emphasis on the data and workload distributions among HPC, datacenter, Edge, and IoT.

Long Description: As the architecture, system, data management, and machine learning communities pay great attention to innovative AI and machine learning algorithms, architectures, and systems, the pressure to benchmark them rises. However, the complexity, diversity, frequent churn, and rapid evolution of AI systems pose great challenges for benchmarking. First, for the sake of conciseness, benchmarking scalability, portability cost, reproducibility, and better interpretation of performance data, we need to understand the most time-consuming classes of units of computation among big data and AI workloads. Second, for the sake of fairness, the benchmarks must cover a diversity of data and workloads. Third, for co-design of software and hardware, we need simple but elegant abstractions that achieve both efficiency and general-purpose applicability. The aim of this BoF is to discuss how to build a comprehensive AI benchmark suite for HPC, datacenter, Edge, and IoT, with an emphasis on the data and workload distributions among them. Now is the perfect time to start this project. Together with participants from different communities, we will discuss the following issues:

(1) What is a scalable benchmarking methodology, other than creating a new benchmark or proxy for every possible AI workload?
(2) What are the essentials of real-world AI workloads and applications? How can the common requirements of AI workloads be specified purely algorithmically, in a paper-and-pencil approach?
(3) How can data sets be contributed when most of their owners consider the data confidential?
(4) How can end-to-end application benchmarks be built without losing the flexibility of micro and component benchmarks? (A minimal micro-benchmark sketch follows this list.)
(5) What are the metrics for evaluating different AI systems?
(6) What is the impact of the data and workload distributions among HPC, datacenter, edge, and IoT on system performance behaviors and workload characterization?
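As a minimal sketch of what such a micro-benchmark might look like, the following Python code times dense matrix multiplication, one class of unit of computation that dominates many AI workloads. The helper time_kernel, the matrix size, and the use of NumPy are illustrative choices, not AIBench code.

    # Minimal sketch of a micro-benchmark for one unit of computation:
    # dense matrix multiplication. time_kernel, the matrix size, and the
    # use of NumPy are illustrative assumptions, not AIBench code.
    import time
    import numpy as np

    def time_kernel(kernel, *args, warmup=3, repeats=10):
        """Return the median wall-clock time of kernel(*args), in seconds."""
        for _ in range(warmup):               # warm up caches and library state
            kernel(*args)
        samples = []
        for _ in range(repeats):
            start = time.perf_counter()
            kernel(*args)
            samples.append(time.perf_counter() - start)
        return sorted(samples)[len(samples) // 2]

    if __name__ == "__main__":
        n = 1024
        a = np.random.rand(n, n).astype(np.float32)
        b = np.random.rand(n, n).astype(np.float32)
        t = time_kernel(np.matmul, a, b)
        gflops = 2 * n ** 3 / t / 1e9         # ~2*n^3 floating-point ops per matmul
        print(f"matmul {n}x{n}: {t * 1e3:.2f} ms, {gflops:.1f} GFLOP/s")

Reporting the median rather than the mean makes the measurement robust to one-off interference, which matters when the same micro-benchmark must be reproduced across HPC, datacenter, edge, and IoT devices.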

All session leaders and invited speakers have strong experience in AI. We will select five speakers from among the session leaders and leading experts in academia, government, and industry to present. Audience participation is a key success metric of this BoF, and 50% of the time has been allocated for discussion.


URL: http://www.benchcouncil.org/BoF/SC19.html

