Towards Lattice-Agnostic Reinforcement Learning Agents for Transverse Beam Tuning

Not scheduled
20m
80/1-001 - Globe of Science and Innovation - 1st Floor (CERN)

Poster | Optimisation and Control | Poster session

Speakers

Chenran Xu, Jan Kaiser

Description

Reinforcement learning (RL) has been successfully applied to various online tuning tasks, often outperforming traditional optimization methods. However, model-free RL algorithms typically require large numbers of samples, with training processes often involving millions of interactions. As this time-consuming process needs to be repeated to train RL-based controllers for each new task, it poses a significant barrier to their broader adoption for online tuning. In this work, we address this challenge by extending domain randomization to train general lattice-agnostic policies. We focus on a common task in linear accelerators: tuning the transverse positions and sizes of electron bunches by controlling the strengths of quadrupole and corrector magnets. During training, the agent interacts with environments in which the magnet positions are randomized, enhancing the robustness of the trained policy. Preliminary results demonstrate that this approach enables policies to generalize and solve the task on different lattice sections without additional training, indicating the potential for developing transferable RL agents. This study represents an initial step toward rapid RL deployment and the creation of lattice-agnostic RL controllers for accelerator systems.
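The randomization described above can be pictured as an environment wrapper that re-samples the lattice geometry at every episode reset. The following Python snippet is a minimal sketch, not the authors' implementation: the wrapper class, the position_bounds argument, and the magnet_positions reset option are illustrative assumptions about how a base tuning environment might accept a randomized lattice.

import numpy as np
import gymnasium as gym


class RandomizedLatticeWrapper(gym.Wrapper):
    """Domain-randomization wrapper: at every episode reset, the longitudinal
    positions of the quadrupole and corrector magnets are re-sampled within
    given bounds, so the policy never trains on a single fixed lattice."""

    def __init__(self, env, position_bounds, seed=None):
        super().__init__(env)
        # position_bounds: dict mapping magnet name -> (min_s, max_s) in metres
        self.position_bounds = position_bounds
        self.rng = np.random.default_rng(seed)

    def reset(self, *, seed=None, options=None):
        # Draw a fresh set of magnet positions for this episode.
        sampled_positions = {
            name: float(self.rng.uniform(low, high))
            for name, (low, high) in self.position_bounds.items()
        }
        # Hand the randomized lattice to the underlying tuning environment
        # through the reset options (a hypothetical interface of that env).
        options = dict(options or {}, magnet_positions=sampled_positions)
        return self.env.reset(seed=seed, options=options)

Any standard model-free RL algorithm could then be trained on the wrapped environment, so the learned policy only ever observes randomly placed magnets rather than one specific lattice section.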

Authors

Presentation materials

There are no materials yet.