Description
The baseline track finding algorithms adopted in the LHC experiments are based on combinatorial track following techniques, where the number of seeds scales non-linearly with the number of hits. The corresponding increase in CPU time, close to cubic, creates a huge and ever-increasing demand for computing power. This is particularly problematic for the silicon tracking detectors, where the hit occupancy is largest. This drawback of the current methods motivates the investigation of novel approaches to track finding, in particular those based on machine learning (ML) techniques. With future detector upgrades and increased luminosity, it is essential that computing resource use is reduced whilst maintaining the ability to reconstruct tracks with minimal loss in efficiency.
We discuss the work that has been done to optimize the High Level Trigger Inner Detector (HLT ID) track seeding software for ATLAS Run-3 and beyond, in order to reduce the number of fake seeds. An ML-based algorithm has been developed to predict whether a pair of hits belongs to the same track, given input hit features such as silicon cluster width and track inclination angle. The implementation of the trained predictor in the form of Look-Up Tables is presented, and the resulting full-scan ID tracking efficiency, as well as the speed-up factor obtained using simulated data, are discussed, as illustrated by the sketch below.
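The following C++ sketch illustrates one way a trained pair classifier could be discretised into a Look-Up Table binned in cluster width and inclination angle, so that the seeding code only performs a cheap table lookup per hit pair. The bin counts, feature ranges, and names (`PairSeedLUT`, `kWidthBins`, `kAngleBins`) are illustrative assumptions, not the actual ATLAS implementation.

```cpp
#include <array>
#include <algorithm>
#include <cmath>
#include <cstddef>

// Assumed binning of the two input features; the real table granularity
// and feature ranges would come from the trained predictor.
constexpr std::size_t kWidthBins = 16;   // silicon cluster width bins
constexpr std::size_t kAngleBins = 32;   // track inclination angle bins

struct PairSeedLUT {
    // accept[iw][ia] == true means the trained predictor accepts a hit pair
    // whose features fall into width bin iw and angle bin ia.
    std::array<std::array<bool, kAngleBins>, kWidthBins> accept{};

    bool passes(double clusterWidth, double inclinationAngle) const {
        // Map continuous features onto table bins; the ranges below are
        // placeholder assumptions for the sketch.
        constexpr double maxWidth = 8.0;          // widths above this saturate the last bin
        constexpr double maxAngle = M_PI / 2.0;   // use |angle| in [0, pi/2]
        const double w = std::min(clusterWidth / maxWidth, 1.0);
        const double a = std::min(std::fabs(inclinationAngle) / maxAngle, 1.0);
        const auto iw = static_cast<std::size_t>(w * (kWidthBins - 1));
        const auto ia = static_cast<std::size_t>(a * (kAngleBins - 1));
        return accept[iw][ia];
    }
};
```

In a seeding loop, each candidate hit pair would then be kept or rejected with a single `lut.passes(width, angle)` call, replacing the per-pair evaluation of the ML model and thereby limiting the combinatorial cost.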
The results discussed here show the benefit of applying an ML-based algorithm, as well as of utilizing the results of the pattern recognition training for a faster implementation in track seeding, hence reducing the effects of combinatorics. Such an approach could lead to vast savings in CPU needs over the next 20 years of LHC operation, and could also benefit other future collider experiments.
Speaker time zone: No preference