Description

High-performance computing is seeing an upsurge in workloads that require data analysis. Machine learning and deep learning models are used in several science domains, such as cosmology, particle physics, and biology, with simulation data at unprecedented scale. These applications include tasks such as image detection, segmentation, synthetic data generation, and in-situ data analysis. Emerging HPC systems feature diverse hardware, including many-core, multi-core, and heterogeneous accelerators. It is therefore critical to understand the performance of machine learning and deep learning models on HPC systems at scale. Benchmarking helps characterize model-system interactions and informs the co-design of future HPC systems for ML workloads.