SC19 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

End-to-End I/O Portfolio for the Summit Supercomputing Ecosystem

Authors: Sarp Oral (Oak Ridge National Laboratory, OpenSFS Inc), Sudharshan S. Vazhkudai (Oak Ridge National Laboratory), Feiyi Wang (Oak Ridge National Laboratory), Christopher Zimmer (Oak Ridge National Laboratory), Christopher Brumgard (Oak Ridge National Laboratory), Jesse Hanley (Oak Ridge National Laboratory), George Markomanolis (Oak Ridge National Laboratory), Ross Miller (Oak Ridge National Laboratory), Dustin Leverman (Oak Ridge National Laboratory), Scott Atchley (Oak Ridge National Laboratory), Verónica G. Melesse Vergara (Oak Ridge National Laboratory)

Abstract: The I/O subsystem for the Summit supercomputer, No. 1 on the Top500 list, and its ecosystem of analysis platforms is composed of two distinct layers: the in-system layer and the center-wide parallel file system (PFS) layer, Spider 3. The in-system layer uses node-local SSDs and provides 26.7 TB/s for reads, 9.7 TB/s for writes, and 4.6 billion IOPS to Summit. The Spider 3 PFS layer uses IBM's Spectrum Scale™ and provides 2.5 TB/s and 2.6 million IOPS to Summit and other systems. While deploying them as two distinct layers was operationally efficient, it also presented usability challenges in terms of multiple mount points and a lack of transparency in data movement. To address these challenges, we have developed novel end-to-end I/O solutions for the concerted use of the two storage layers. We present the I/O subsystem architecture, the end-to-end I/O solution space, their design considerations, and our deployment experience.

