Workshop on Exascale MPI (ExaMPI)
Event Type: Workshop
Registration Categories: Parallel Programming Languages, Libraries, and Models
Time: Sunday, 17 November 2019, 9am - 5:30pm
Description: The aim of the workshop is to bring together researchers and developers to present and discuss innovative algorithms and concepts in the Message Passing programming model, and to create a forum for open and potentially controversial discussions on the future of MPI in the exascale era. Possible workshop topics include innovative algorithms for collective operations, extensions to MPI (including data-centric models), scheduling/routing to avoid network congestion, fault-tolerant communication, interoperability of MPI and PGAS models, integration of task-parallel models in MPI, and the use of MPI in large-scale simulations.
9:00am - 9:05am: Workshop on Exascale MPI (ExaMPI)
9:05am - 10:00am: ExaMPI Keynote
10:00am - 10:30am: ExaMPI Morning Break
10:30am - 11:00am: Multirate: A Flexible MPI Benchmark for Fast Assessment of Multithreaded Communication Performance
11:00am - 11:30am: Impacts of Multi-GPU MPI Collective Communications on Large FFT Computation
11:30am - 12:00pm: Node-Aware Improvements to Allreduce
12:00pm - 12:30pm: Accelerating the Global Arrays ComEx Runtime Using Multiple Progress Ranks
12:30pm - 2:00pm: ExaMPI Lunch Break
2:00pm - 2:30pm: RDMA-Based Library for Collective Operations in MPI
2:30pm - 3:00pm: Using MPI-3 RMA for Active Messages
3:00pm - 3:30pm: ExaMPI Afternoon Break
3:30pm - 3:55pm: ExaMPI Invited Talk #1 - Evaluating MPI Message Size Summary Statistics
3:55pm - 4:20pm: ExaMPI Invited Talk #2 - The Case for Modular Generalizable Proxy Applications for Systems Software Research
4:20pm - 5:30pm: ExaMPI Vendor Panel - MPI for Exascale: Challenges and Opportunities