Description
Autonomous tuning of particle accelerators is an active and challenging research field that aims to enable advanced accelerator technologies and cutting-edge high-impact applications, such as physics discovery, cancer research and materials science. A key challenge in autonomous accelerator tuning remains that the most capable algorithms require experts in optimisation and machine learning to implement them for every new tuning task. Here, we propose the use of large language models (LLMs) to tune particle accelerators. In a proof-of-principle example, we demonstrate the ability of LLMs to tune an accelerator subsystem based on nothing more than a natural language prompt from the operator, and we compare their performance to state-of-the-art optimisation algorithms, such as Bayesian optimisation and reinforcement learning-trained optimisation. As part of our study, we investigate how to prompt LLMs effectively, evaluating prompts that phrase the task both as an accelerator tuning problem and as an application-agnostic optimisation problem. In doing so, we also show that LLMs can perform numerical optimisation of a non-linear real-world objective. Given the high computational costs incurred by LLMs, we further evaluate the environmental and monetary impact that using them for accelerator tuning would have. Ultimately, this work demonstrates yet another complex task that LLMs can solve and promises to help accelerate the deployment of autonomous tuning algorithms to day-to-day particle accelerator operations.
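To make the idea concrete, the sketch below shows one plausible shape of such an LLM-in-the-loop tuning step: the measurement history is serialised into a natural language prompt, the model proposes the next settings, and the reply is parsed, clamped to bounds and evaluated. This is a minimal sketch of the pattern only, not the study's implementation; query_llm is a hypothetical stand-in for any chat-completion API, and the objective is a toy non-linear function standing in for a real accelerator measurement.

```python
# Minimal sketch of LLM-driven numerical optimisation (hypothetical helpers).
import json
import random


def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion API.
    Here it returns a fake JSON reply so the sketch runs end to end."""
    return json.dumps([random.uniform(-1.0, 1.0) for _ in range(2)])


def objective(x: list[float]) -> float:
    # Toy non-linear objective, e.g. a proxy for measured beam size.
    return (x[0] - 0.3) ** 2 + (x[1] + 0.5) ** 2


history: list[tuple[list[float], float]] = []  # (settings, objective) pairs
best = None
for step in range(20):
    prompt = (
        "You are tuning two magnet settings in [-1, 1] to minimise the "
        "measured objective. Previous samples (settings -> objective):\n"
        + "\n".join(f"{p} -> {v:.4f}" for p, v in history)
        + "\nReply with only a JSON list of the next two settings to try."
    )
    try:
        proposal = json.loads(query_llm(prompt))
    except json.JSONDecodeError:
        continue  # LLM replies are not guaranteed to parse; skip bad ones
    x = [max(-1.0, min(1.0, float(v))) for v in proposal[:2]]  # enforce bounds
    y = objective(x)
    history.append((x, y))
    if best is None or y < best[1]:
        best = (x, y)

print("best settings found:", best)
```

The same loop structure accommodates the prompt variants discussed above: the task description can be phrased in accelerator terms, as here, or as a generic black-box optimisation problem, with only the prompt text changing.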