The High Luminosity LHC (HL-LHC) will operate at an instantaneous luminosity of up to 7.5×10^34 cm^-2 s^-1, approximately five times larger than the peak reached during the present LHC run. For the CMS experiment, this corresponds to an average pile-up of up to 200 events per bunch crossing in the interaction region of the detector, and an integrated luminosity of up to 4000 fb^-1 is expected to be delivered over 10 years of data taking. Such machine performance will allow searches for new physics to be extended and stringent tests of the Standard Model to be performed, such as precision measurements of the Higgs boson couplings.
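The quoted pile-up figure can be checked from the instantaneous luminosity via PU ≈ L·σ_inel / (n_b·f_rev). A minimal back-of-the-envelope sketch, assuming an inelastic pp cross-section of about 80 mb at 14 TeV, roughly 2760 colliding bunch pairs, and the LHC revolution frequency of 11245 Hz (none of these inputs are stated in the text):

```python
# Back-of-the-envelope check that the quoted HL-LHC luminosity implies
# an average pile-up of order 200 events per bunch crossing.
# Assumed inputs (not from the text): sigma_inel ~ 80 mb at 14 TeV,
# ~2760 colliding bunch pairs, LHC revolution frequency 11245 Hz.
lumi = 7.5e34           # instantaneous luminosity [cm^-2 s^-1]
sigma_inel = 80e-27     # inelastic pp cross-section: 80 mb = 80e-27 cm^2
n_bunches = 2760        # colliding bunch pairs
f_rev = 11245.0         # LHC revolution frequency [Hz]

crossing_rate = n_bunches * f_rev           # effective crossing rate [Hz]
pile_up = lumi * sigma_inel / crossing_rate # mean interactions per crossing
print(f"average pile-up per crossing: {pile_up:.0f}")
```

With these assumed inputs the estimate lands near 190-200 interactions per crossing, consistent with the figure quoted above.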
The CMS detector and its trigger system will need to undergo a substantial upgrade, called the “Phase-2 Upgrade”, affecting all subdetectors: the tracker, the electromagnetic and hadronic calorimeters, the muon detectors, and the trigger and readout systems. The overall Software and Computing systems will also need to be completely revisited: given the higher complexity of the event reconstruction, estimates based on the current CMS software and on simulations of the upgrade conditions indicate that the computing challenge will be 65-200 times larger than in the current run (Run 2).
The complexity and time span of this challenge, together with the recent ramp-up in the evolution of selected advanced computing techniques such as machine learning and deep learning (ML/DL), invite exploring some of these approaches and implementing actual prototypes that test and verify their feasibility and possible adoption. The work done so far towards ML/DL-based muon trigger algorithms for the Phase-2 upgrade of the CMS detector is presented and discussed.