Description
Particle accelerators, such as the CERN Linear Electron Accelerator for Research (CLEAR), play a critical role in various scientific fields.
Ensuring that their operation is automatic, stable, and reproducible is vital for the scalability of future large-scale accelerator projects.
This paper presents an initial step toward autonomous control of the CLEAR beamline, beginning with a basic beam steering challenge and progressing to more complex issues, such as absolute alignment within quadrupoles, which are critical for CLEAR’s operational stability.
The proposed solution leverages Reinforcement Learning (RL) agents that learn in real time from beam-screen measurements.
This strategy was chosen to optimize sampling efficiency, given the highly invasive and expensive nature of data collection in particle accelerator environments.
The goal is to achieve single-shot optimization that can be directly applied in real operational scenarios, potentially eliminating the need for further manual adjustments.
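To illustrate the kind of feedback loop such an agent must learn, the sketch below trains a linear one-shot steering policy on a hypothetical one-corrector, one-screen toy model using a simple perturbation-based policy search. The environment, names, and algorithm are illustrative assumptions only and do not represent the actual CLEAR beamline or the RL method used in the paper.

```python
# Minimal sketch (assumptions, not the authors' setup): a toy linear beamline
# where one corrector kick moves the beam spot on one screen, and a linear
# policy is trained to cancel an unknown misalignment in a single step.
import numpy as np

rng = np.random.default_rng(0)

class ToySteeringEnv:
    """One corrector, one screen: position x = R * theta + offset + noise."""
    def __init__(self, response=2.5, noise=0.02):
        self.R = response        # corrector-to-screen response (unknown to the agent)
        self.noise = noise       # screen measurement noise
        self.offset = 0.0
        self.theta = 0.0

    def reset(self, offset):
        self.offset = offset     # misalignment the agent has to cancel
        self.theta = 0.0
        return self._observe()

    def _observe(self):
        return self.R * self.theta + self.offset + rng.normal(0.0, self.noise)

    def step(self, delta_theta):
        self.theta += float(delta_theta)
        x = self._observe()
        return x, -x**2          # reward: negative squared distance from screen centre

def rollout(env, w, b, offset):
    """Observe the screen once, apply a single corrective kick, return the reward."""
    x0 = env.reset(offset)
    _, reward = env.step(w * x0 + b)
    return reward

env = ToySteeringEnv()
w, b = 0.0, 0.0                  # linear policy: delta_theta = w * x + b
lr, sigma = 0.02, 0.1

for episode in range(1000):
    offset = rng.uniform(-1.0, 1.0)          # new misalignment each episode
    dw, db = rng.normal(0.0, sigma, size=2)  # antithetic policy perturbation
    r_plus = rollout(env, w + dw, b + db, offset)
    r_minus = rollout(env, w - dw, b - db, offset)
    g = (r_plus - r_minus) / (2.0 * sigma**2)
    w += lr * g * dw
    b += lr * g * db

print(f"learned gain w = {w:.2f} (ideal -1/R = {-1.0 / env.R:.2f}), bias b = {b:.2f}")
```

In this toy setting the policy converges toward the inverse of the (unknown) response, so a single observed screen position is enough to choose the corrective kick, which is the single-shot behaviour the abstract aims for; the real task involves many correctors, quadrupole alignment, and far noisier diagnostics.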
The results are highly promising, demonstrating that, with only a few hours of training, it is possible to achieve single-step corrections in the experimental section of the CLEAR beamline.
This success has motivated the operational team to further explore and develop this approach.