Description
Beam tuning in particle accelerators is a complex task, especially when physical modeling is impractical because complete beam diagnostics are unavailable. Traditional approaches rely on iterative manual tuning by operators, which is slow and labor-intensive. Reinforcement learning (RL) algorithms offer a promising alternative for automating this process. In this work, we demonstrate the successful application of RL-based policies to beam tuning for the High-Intensity Proton Injector (HIPI), where physical modeling was not feasible.
The policy is trained on a surrogate model built from data collected online. Our results show that the surrogate model significantly improves training efficiency, reducing the time the RL agent needs to learn an effective control policy. Moreover, the trained policy performed robustly in real-world testing, achieving approximately 90% beam transmission within minutes. This approach offers a practical solution for environments where physical models are unavailable, showcasing the potential of RL for optimizing accelerator operations.
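The surrogate-based training loop described above can be sketched minimally. The snippet below is an illustrative toy, not the HIPI implementation: the surrogate is a stand-in analytic function (in practice it would be a model fitted to data collected online), the four "corrector settings", the optimum, and the cross-entropy search are all assumptions chosen for brevity. The key point it shows is that policy search runs entirely against the surrogate, so no beam time is spent during training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate: maps 4 corrector-magnet settings to beam
# transmission in [0, 1]. In the real system this would be a model
# fitted to measurements collected online; here it is a toy function.
OPTIMUM = np.array([0.3, -0.5, 0.1, 0.8])

def surrogate_transmission(settings):
    # Transmission peaks at OPTIMUM and falls off smoothly around it.
    return float(np.exp(-np.sum((settings - OPTIMUM) ** 2)))

# Simple cross-entropy-method search (a stand-in for the RL training
# loop): sample candidate settings, score them on the surrogate, and
# refit the sampling distribution to the best candidates.
mean, std = np.zeros(4), np.ones(4)
for _ in range(50):
    samples = rng.normal(mean, std, size=(64, 4))
    scores = np.array([surrogate_transmission(s) for s in samples])
    elite = samples[np.argsort(scores)[-8:]]      # keep the top 8
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3

print(round(surrogate_transmission(mean), 3))     # close to 1.0
```

Only the final, trained settings (or policy) are then deployed and validated on the machine, which is why an effective configuration can be reached within minutes of beam time.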