How to Submit
- Preparing Your Submission
- Where to Submit
- Reproducibility Initiative
- Review Criteria
- Review, Response, Revision
How to Submit
The SC Papers program is the leading venue for presenting high-quality original research, groundbreaking ideas, and compelling insights on future trends in high performance computing, networking, storage, and analysis. Technical papers are peer-reviewed and an Artifact Description is now mandatory for all papers submitted to SC19.
Submissions will be considered on any topic related to high performance computing within the ten tracks below. Small-scale studies, including single-node studies, are welcome as long as the paper clearly conveys the work’s contribution to high performance computing.
Algorithms
The development, evaluation, and optimization of scalable, general-purpose, high performance algorithms.
- Algorithmic techniques to improve energy and power efficiency
- Algorithmic techniques to improve load balance
- Data-intensive parallel algorithms
- Discrete and combinatorial problems
- Fault-tolerant algorithms
- Graph and network algorithms
- Hybrid/heterogeneous/accelerated algorithms
- Numerical methods and algebraic systems
- Scheduling algorithms
- Uncertainty quantification
- Other high performance algorithms
Applications
The development and enhancement of algorithms, parallel implementations, models, software, and problem-solving environments for specific applications that require high performance resources.
- Bioinformatics and computational biology
- Computational earth and atmospheric sciences
- Computational materials science and engineering
- Computational astrophysics/astronomy, chemistry, and physics
- Computational fluid dynamics and mechanics
- Computation and data enabled social science
- Computational design optimization for aerospace, energy, manufacturing, and industrial applications
- Computational medicine and bioengineering
- Other high performance applications
- Use of uncertainty quantification, statistical, and machine-learning techniques to improve a specific HPC application
- Improved models, algorithms, performance or scalability of specific applications and respective software
Architecture and Networks
All aspects of high performance hardware including the optimization and evaluation of processors and networks.
- Memory systems: caches, memory technology, non-volatile memory, memory system architecture (to include address translation for cores and accelerators)
- I/O architecture/hardware and emerging storage technologies
- Network protocols, quality of service, congestion control, collective communication
- Scalable and composable coherence (for cores and accelerators)
- Multi-processor architecture and micro-architecture (e.g. reconfigurable, vector, stream, dataflow, GPUs, and custom/novel architecture)
- Interconnect technologies, topology, switch architecture, optical networks, software-defined networks
- Architectures to support extremely heterogeneous composable systems (e.g., chiplets)
- Secure architectures, side-channel attacks, and mitigation
- Power-efficient design and power-management strategies
- Resilience, error correction, high availability architectures
- Software/hardware co-design, domain specific language support
- Evaluation and measurement on testbed or production hardware systems
- Hardware acceleration of containerization and virtualization mechanisms for HPC
Clouds and Distributed Computing
All software aspects of clouds and distributed computing that are related to HPC systems, including software architecture, configuration, optimization and evaluation.
- Compute and storage cloud architectures including many-core computing and accelerators in the cloud
- HPC and cloud convergence at infrastructure and software level
- Innovative methods for using cloud-based systems for HPC applications
- Support and tuning of Big Data cloud data ecosystems on HPC infrastructures
- Parallel programming models and tools at the intersection of cloud and HPC
- Virtualization and containerization for HPC, virtualized high performance I/O network interconnects, parallel and distributed file systems in virtual environments
- Cloud workflow, data, and resource management including dynamic resource provisioning
- Methods, systems, and architectures for scalable data stream processing
- Scheduling, load balancing, resource provisioning, energy efficiency, fault tolerance, and reliability for cloud computing
- Self-configuration, management, information services, and monitoring
- Service-oriented architectures and tools for integration of clouds, clusters, and distributed computing
- Cloud security and identity management
- Science case studies on cloud infrastructure
- Machine learning for science in the cloud
Data Analytics, Visualization, and Storage
All aspects of data analytics, visualization, storage, and storage I/O related to HPC systems. Submissions on work done at scale are highly favored.
- Databases and scalable structured storage for HPC
- Data mining, analysis, and visualization for modeling and simulation
- Data analytics and frameworks supporting data analytics
- Ensemble analysis and visualization
- I/O performance tuning, benchmarking, and middleware
- Scalable storage systems
- Next-generation storage systems and media
- Parallel file, object, key-value, campaign, and archival systems
- Provenance, metadata, and data management
- Reliability and fault tolerance in HPC storage
- Scalable storage, metadata, namespaces, and data management
- Storage tiering, both entirely on-premises tiering and tiering between on-premises and cloud storage
- Storage innovations using machine learning, such as predictive tiering and failure prediction
- Storage networks
- Cloud-based storage
- Storage systems for data-intensive computing
- Data science
- Cloud-based analytics at scale
- Innovations for visualizing data at scale (big data)
- Innovations in image processing of data at scale (big data)
Machine Learning and HPC (New for SC19)
The development and enhancement of algorithms, systems, and software for scalable machine learning utilizing high-performance and cloud computing platforms.
- Machine learning and optimization models for extreme scale systems
- Enhancing applicability of machine learning in HPC (e.g. usability)
- Learning large models and optimizing hyperparameters (e.g. deep learning, representation learning)
- Facilitating very large ensembles in extreme scale systems
- Training machine learning models on large datasets and scientific data
- Overcoming the machine learning problems inherent to large datasets (e.g. noisy labels, missing data, scalable ingest)
- Large scale machine learning applications utilizing HPC
- Future research challenges for machine learning at large scale
- Hybrid machine learning algorithms for hybrid HPC compute architectures
- Systems, compilers, and languages for machine learning at scale
Performance Measurement, Modeling, and Tools
Novel methods and tools for measuring, evaluating, and/or analyzing performance for large scale systems.
- Analysis, modeling, prediction, or simulation methods
- Empirical measurement techniques on HPC systems
- Scalable tools and instrumentation infrastructure for measurement, monitoring, and/or visualization of performance
- Novel and broadly applicable performance optimization techniques
- Methodologies, metrics, and formalisms for performance analysis and tools
- Workload characterization and benchmarking techniques
- Performance studies of HPC subsystems such as processor, network, memory, accelerators, and storage
- System-design tradeoffs between different measures of performance (e.g., performance and resilience, performance and security)
Programming Systems
Technologies that support parallel programming for large-scale systems, as well as smaller-scale components that will plausibly serve as building blocks for next-generation HPC architectures.
- Parallel programming languages, libraries, models, and notations
- Programming language and compilation techniques for reducing energy and data movement (e.g., precision allocation, use of approximations, tiling)
- Solutions for parallel-programming challenges (e.g., interoperability, memory consistency, determinism, race detection, work stealing, or load balancing)
- Parallel application frameworks
- Tools for parallel program development (e.g., debuggers and integrated development environments)
- Program analysis, synthesis, and verification to enhance cross-platform portability, maintainability, result reproducibility, resilience (e.g., combined static and dynamic analysis methods, testing, formal methods)
- Compiler analysis and optimization; program transformation
- Runtime systems as they interact with programming systems
State of the Practice
All R&D aspects of the pragmatic practices of HPC, including operational IT infrastructure, services, facilities, large-scale application executions and benchmarks.
- Bridging of cloud data centers and supercomputing centers
- Comparative system benchmarking over a wide spectrum of workloads
- Deployment experiences of large-scale infrastructures and facilities
- Facilitation of “big data” associated with supercomputing
- Long-term infrastructural management experiences
- Pragmatic resource management strategies and experiences
- Procurement, technology investment and acquisition best practices
- Quantitative results of education, training and dissemination activities
- User support experiences with large-scale and novel machines
- Infrastructural policy issues, especially international experiences
- Software engineering best practices for HPC
System Software
Operating system (OS), runtime system, and other low-level software research and development that enables allocation and management of hardware resources for HPC applications and services.
- Alternative and specialized parallel operating systems and runtime systems
- Approaches for enabling adaptive and introspective system software
- Communication optimization
- Software distributed shared memory systems
- System-software support for global address spaces
- OS and runtime system enhancements for attached and integrated accelerators
- Interactions among the OS, runtime, compiler, middleware, and tools
- Parallel/networked file system integration with the OS and runtime
- Resource management
- Runtime and OS management of complex memory hierarchies
- System software strategies for controlling energy and temperature
- Support for fault tolerance and resilience
- Virtualization and virtual machines
Preparing Your Submission
A paper submission has three components: the paper itself, an Artifact Description Appendix (AD), and an Artifact Evaluation Appendix (AE). AD/AE Appendices will now be auto-generated from author responses to a standard form, embedded in the SC online submission system. The Artifact Description Appendix, or indication that there is no Artifact, is mandatory. The Artifact Evaluation is optional.
Papers that have not previously been published in peer-reviewed venues are eligible for submission to SC. For example, papers pre-posted to arXiv, institutional repositories, or personal websites (none of which are peer-reviewed venues) remain eligible for SC submission. Papers that were published in a workshop are eligible if they have been substantially enhanced (i.e., 30% new material).
Submissions are limited to 10 pages, excluding the bibliography, using the ACM SIG proceedings template, with line numbering enabled to help with review. In LaTeX, this implies a document class of:
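The document-class line itself is not shown above; for the ACM SIG proceedings template it is typically the following (a sketch assuming the standard acmart class, whose review option enables the line numbering requested; confirm against the current ACM template instructions):

```latex
% ACM SIG proceedings template; the "review" option turns on line numbering
\documentclass[sigconf,review]{acmart}
```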
AD and AE appendices are automatically generated and do not count against the 10 pages.
Authors must indicate a primary track from the ten choices on the submissions form and are strongly encouraged to indicate a secondary track.
Where to Submit
Reproducibility Initiative
We believe that reproducible science is essential and that SC is a leader in this effort. As described above, AD/AE Appendices are auto-generated from author responses to a standard form embedded in the SC online submission system; the Artifact Description Appendix (or an indication that there is no artifact) is mandatory, while Artifact Evaluation is optional.
Learn more about the Reproducibility Initiative.
Review Criteria
Papers are peer-reviewed by a committee of experts. Each paper will have at least three reviewers.
The peer review is a double-blind process. Reviewers do not have access to the names of authors. While Papers Committee members are named on the SC19 Planning Committee page, the names of the individuals reviewing each paper are not made available to the paper authors.
Review, Response, Revision
From an author’s perspective, the following are the key steps:
- Authors submit a title, abstract, and other metadata.
- Authors submit their full paper and complete an AD/AE form describing their computational artifacts (or lack of computational artifacts) and, optionally, text discussing how they evaluated their computational results.
- Authors receive an initial set of reviews of their paper.
- Authors have an opportunity to revise their paper and prepare an accompanying response to the reviewers.
- Author revisions and accompanying response will be available to the reviewers at least a week before the Papers Committee meeting.
- Authors are notified of their paper’s status: Accept, Reject, or Major Revisions Required.
- In the case of Major Revisions Required, authors prepare a major revision for re-review.
- After this review, the paper will be either accepted or rejected.
- Authors of accepted papers prepare the final version of their paper.
Conflict of Interest
Please review the SC Conference Conflict of Interest guidelines before submitting your paper.
Plagiarism
Please see the ACM guidelines on identifying plagiarism. Authors should submit new, original work that represents a significant advance from even their own prior publications.
Registration
If your Paper is selected, at least one author must register for the Technical Program in order to attend the SC Conference and present the paper.
Finalizing Accepted Papers
Upon acceptance, all Papers (including those that go through major revisions) will be listed in the online SC Schedule. We expect this to happen at the end of August 2019.
Papers are archived in the ACM Digital Library and IEEE Xplore; members of SIGHPC or subscribers to the archives may access the full papers without charge. This publication contains the full text of all Papers presented at the SC Conference.
Schedule and Location
Paper presentations will be held Tuesday–Thursday, November 19–21, 2019. Paper presentations are 30 minutes. Day, time, and location for each paper session will be published in the online SC Schedule by August.
Papers are assigned either a classroom or a theater room equipped with standard AV facilities:
- Microphone and podium
- Wireless lapel microphone or wireless handheld microphone
- Projection screen
Best Paper (BP) and Best Student Paper (BSP) nominations are made during the review process and are highlighted in the online SC schedule. BP and BSP winners are selected at the conference by a committee that attends the corresponding paper presentations, and winners are announced at the Thursday Awards ceremony.
Questions about Paper Submissions?