Abstract: The natural sciences are increasingly reliant on large volumes of experimental data obtained from highly automated scientific equipment to drive scientific discovery. While the instrumentation available to the scientific community enables individual researchers to acquire experimental data at spectacular rates, the decision-making and experimental-design process is still largely driven by the individual researcher operating the instrument.
In recent years, some scientific areas, such as X-ray scattering and synchrotron infrared microscopy, have started to shift towards autonomously executed experimentation, using tools from Artificial Intelligence and Machine Learning. Gaussian Process Regression (GPR) is a popular technique for constructing the surrogate model required to drive an experiment in an autonomous fashion. GPR-driven autonomous experiments have been successfully implemented at synchrotron radiation beamlines at the Advanced Light Source (ALS) and the National Synchrotron Light Source II (NSLS-II).
Given that a single measurement on state-of-the-art synchrotron equipment can be acquired in a fraction of a second, being able to obtain feedback on a similar time scale is essential. Traditionally, specific tasks in GPR, such as hyperparameter tuning and covariance estimation, are compute-intensive, limiting the utility of GPR when near real-time feedback is an absolute necessity. In this paper we present and review computational strategies that enable the numerical acceleration of Gaussian Process analyses within a framework of autonomous sequential experiments. The results show that significant time savings can be achieved by taking advantage of a number of well-established mathematical and computational approaches.