15–18 Oct 2024
Purdue University
America/Indiana/Indianapolis timezone

Reinforcement learning for charged particle tracking

Not scheduled
20m
Steward Center 306 (Third floor), Purdue University

128 Memorial Mall Dr, West Lafayette, IN 47907
Poster

Speaker

Liv Helen Vage (Princeton University (US))

Description

Upgrades to the CMS experiment will see the average pileup go from 50 to 140 and eventually 200. With current algorithms, this would mean that almost 50% of the High Level Trigger time budget would be spent on particle track reconstruction. Many ML methods have been explored to address the challenge of slow particle tracking at high pileup. Reinforcement learning is presented as a novel method that can act as a one-shot track reconstructor with large potential for parallelisation. It is shown that a small neural net could slot into a Kalman filter algorithm to reduce the combinatorics of the tracking problem. When a Kalman filter encounters several compatible hit candidates, the track splits into multiple track candidates. These candidates must all be propagated, and fake tracks have to be removed later. The RL agent instead learns to pick the best hit at each step, leading to potentially faster processing and fewer fake tracks. Using the TrackML dataset, it is shown that the RL algorithm can choose between three hits at each step for tracks above 2 GeV with at least 80% accuracy. Similar performance is shown with Phase 2 CMS Monte Carlo data. While reinforcement learning is not yet competitive with other ML tracking methods, it could be a rapid and easily implemented addition to tracking algorithms.
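To illustrate the idea described above, the following is a minimal sketch (not the authors' code) of how a small policy network could slot into a Kalman-filter step: instead of branching on every compatible hit, the agent scores the candidates and greedily keeps one. The class name HitSelectorPolicy, the residual-based features, the network size, and the greedy argmax selection are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

class HitSelectorPolicy:
    """Tiny MLP that scores candidate hits given the current track state."""

    def __init__(self, n_features, n_hidden=16):
        self.w1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(scale=0.1, size=(n_hidden, 1))
        self.b2 = np.zeros(1)

    def score(self, features):
        # features: (n_candidates, n_features) -> one score per candidate hit
        h = np.tanh(features @ self.w1 + self.b1)
        return (h @ self.w2 + self.b2).ravel()

    def select(self, predicted_position, candidate_hits):
        # Assumed feature choice: residuals between the Kalman-filter
        # prediction and each candidate hit, plus the hit coordinates.
        residuals = candidate_hits - predicted_position[None, :]
        feats = np.concatenate([residuals, candidate_hits], axis=1)
        scores = self.score(feats)
        return int(np.argmax(scores))  # greedy: keep one hit, no branching

# Usage: at a detector layer with three compatible hits, pick one and
# continue, instead of spawning three track candidates.
policy = HitSelectorPolicy(n_features=6)
predicted_position = np.array([1.2, -0.4, 10.0])   # from the KF prediction
candidates = np.array([[1.25, -0.38, 10.1],
                       [1.10, -0.55, 10.0],
                       [1.40, -0.20, 10.2]])
print("chosen hit index:", policy.select(predicted_position, candidates))

In this sketch the policy is applied greedily at inference time; how the network is trained (reward definition, exploration strategy) is not specified in the abstract and is left out here.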

Author

Liv Helen Vage (Princeton University (US))
