HPC Is Right Now: Q&A with the Esteemed Amanda Randles on HPC and Biomedical Research

Dr. Amanda Randles, lead of Randles Lab at Duke University.

When a person who has been lauded as an “outstanding young computer professional of the year” is just as adept in biomedical engineering, math, and applied physics, you have the makings of a unique achiever with transformational potential.

Actually, you have Amanda Randles.

Randles, currently the Alfred Winborne and Victoria Stover Mordecai Assistant Professor of Biomedical Sciences and an assistant professor of biomedical engineering, computer science, and mathematics at Duke University, as well as a Duke Cancer Institute member, has racked up multiple major awards for her revolutionary research involving HPC and biomedical simulation. Her accolades include the 2017 IEEE-CS Technical Consortium on High Performance Computing (TCHPC) Award for Excellence for Early Career Researchers in HPC and the 2017 ACM Grace Murray Hopper Award. Randles also has made quite a mark at the SC conference, earning ACM–IEEE-CS George Michael HPC Fellowships in 2010 and 2012 as a student and being named a Gordon Bell Prize finalist in 2010 and 2015.

Among her best-known efforts, Randles has been the principal developer of HARVEY, a massively parallel computational fluid dynamics (CFD) code that can accurately model red blood cell movement in the human arterial system. Notably, HARVEY was cited as the driver for both her Gordon Bell finalist nod in 2015 and her Grace Hopper Award recognition in 2017. As lead of the Randles Lab at Duke University, she and her team and research colleagues have extended the code’s application areas and are scaling it on increasingly large heterogeneous computing systems. Initially, HARVEY was “test driven” on the full Sequoia supercomputer (1.6 PB compute memory; ~20 PFLOPS) at Lawrence Livermore National Laboratory. More recently, Randles co-authored a study with colleagues from Duke’s Department of Biomedical Engineering and Oak Ridge National Laboratory assessing HARVEY’s performance portability across heterogeneous architectures built with GPUs or field-programmable gate arrays, using hybrid Message Passing Interface (MPI)-based programming models.1

Randles continues to expand the possibilities for how large-scale parallel computing applications and simulations can impact the ways physicians and experimentalists view disease development and progression, including cancer metastasis and vascular diseases. Ultimately, these insights may translate to improved patient outcomes. Now, she offers perspectives about what drew her to employing HPC for complex biomedical issues, her views on HPC’s future, and which of her many awards makes her the most proud.


Q: You are highly decorated for your achievements in designing large-scale parallel applications aimed at resolving biomedical questions. What set you on this path, especially as your advanced degrees are in physics (doctorate) and computer science (master’s)?


Randles: I’ve always been interested in the application of computational methods to biomedical problems. In my undergraduate training, I worked for several years in both a molecular genetics and microbiology lab and a biophotonics lab. That experience was my first exposure to applying physics principles and computational methods to bio-related problems. After graduating, I was lucky to get a job at IBM working on the Blue Gene supercomputer. As part of my role, I helped port and optimize biology and chemistry applications for the system. This work got me more excited about what we could do with that scale of HPC resources, so I went back to graduate school to learn more about the application side. I joined Prof. Kaxiras’ lab [the Kaxiras Research Group at Harvard University], where he was coming from physics fundamentals to study biological phenomena.


Q: How does your lab work with physicians or domain scientists to build such specific biomedical computational tools? Do doctors bring you problems seeking computing solutions, or have you found those niches, such as using CFD to model red blood cell flow in arteries, along the way?


Randles: It has been a bit of a mix. I have several longstanding collaborations with physicians from Brigham and Women’s Hospital and UCSD [University of California, San Diego] whom I initially reached out to. We held a workshop at Lawrence Livermore National Laboratory back in 2015 where physicians from different disciplines were invited alongside experts in medical imaging, high-performance computing, and visualization. That workshop served as a great launching point for many of our ongoing projects. Since coming to Duke, I have been lucky to have the hospital within short walking distance of my office. At Duke, it has been a mix of my reaching out and the doctors reaching out to me. I’ve even been asked to give several talks at Grand Rounds for different disciplines to help seek out collaboration opportunities. There is a lot of support and opportunity for collaboration between the Engineering School and Medical School here. We’ve also had a medical student complete a two-year research fellowship in our lab. We’ve been fortunate to be able to collaborate so easily with the medical school.


Q: What innovations in HPC do you expect will have the most impact, either positive or negative, on the biomedical engineering field? Is there an aspect of HPC that you really want to see evolve and why?


Randles: I think we’re starting to see a wider array of options for accessing HPC resources, making it easier for biomedical researchers to use them. If we want some of this research to move into clinical practice, moving away from the requirement that a hospital buy a physical cluster can have a large impact. Improvements in cloud security, the ability to use cloud resources for clinical data, and the increasing availability of tightly coupled systems will all help increase adoption.


Q: With HARVEY, you have built highly useful software that requires sustainability. How are HPC system advances, especially as they scale toward exascale and beyond, impacting its upkeep?


Randles: We are constantly thinking about the next step and how to set the code up to be ready for future systems. From the beginning, we have been designing HARVEY from the ground up to run on leadership-class systems. We work closely with collaborators at several national laboratories to ensure we’re taking the right steps to prepare for next-generation architectures. Right now, we’re part of the Aurora Early Science Program, which has been extremely helpful in walking through what we should think about and how to adapt the code. Even though we’re in an academic setting, preparing for sustainability has required serious efforts in software engineering, with systematic, multi-tiered version control and automated continuous integration.


Q: Of the many “hats” you wear for your research—professor, investigator, computer scientist, physicist, mathematician, engineer—which one is the most comfortable, i.e., where do you glean the most satisfaction or find the greatest challenge? Why?


Randles: I think it’s a combination. I am very passionate about our research and getting to use the biggest supercomputers in the world to address biomedical questions. We’re getting to target questions that have never been tractable before and use cutting-edge technology to get there. I get excited both about pushing the limits of our code and about interacting with clinicians to answer questions relevant to improving healthcare. In the last few years, it’s been incredibly rewarding to see my students getting excited to work on new systems and even starting to direct their own research agendas.


Q: Again, you have had much success in your career with the awards to prove it. Among them, which one (or two) stands out in your mind as a true hallmark moment, where you realized your peers understood and valued your work?


Randles: The ACM Grace Hopper Award meant a lot because it came from the broader computer science community and highlighted an appreciation for applied computational science.


Q: The SC19 theme is “HPC Is Now.” What does that concept mean to you?


Randles: While we’re always looking forward to the next systems, there are incredible HPC resources available today enabling real discoveries. In our lab, we have a mix of research projects looking to push forward new HPC approaches and algorithms, but, importantly, they also use the resources available now to answer questions that can improve diagnosis and treatment of disease today, as well as provide insight into the underlying mechanisms driving disease progression. We aren’t waiting for new hardware or innovation in architecture to be able to run simulations that can have a real clinical impact.


1 Lee S, Gounley J, Randles A, and Vetter JS (2019). Performance portability study for massively parallel computational fluid dynamics application on scalable heterogeneous architectures. Journal of Parallel and Distributed Computing 129:1–13. DOI: 10.1016/j.jpdc.2019.02.005.


Charity Plata, SC19 Communications Team Writer (Brookhaven National Laboratory)


Charity Plata provides comprehensive editorial oversight to Brookhaven National Laboratory’s Computational Science Initiative. Her writing and editing career spans diverse industries, including publishing, architecture, civil engineering, and professional sports. Prior to joining Brookhaven Lab in 2018, she worked at Pacific Northwest National Laboratory primarily within the Advanced Computing, Mathematics and Data Division.
