Speaker
Description
Detailed event simulation at the LHC consumes a large fraction of the computing budget. CMS has developed an end-to-end ML-based simulation framework, called FlashSim, that can speed up the production of analysis samples by several orders of magnitude with a limited loss of accuracy. We show how this approach achieves a high degree of accuracy, not just on basic kinematics but also on the complex and highly correlated physical and tagging variables included in the CMS common analysis-level format, the NANOAOD. We show that this approach can generalize to processes not seen during training. Furthermore, we discuss and propose solutions to address the simulation of objects coming from multiple physical sources or originating from pileup. Finally, we present a comparison with full simulation samples for some simplified analysis benchmarks, as well as how the CMS Remote Analysis Builder (CRAB) can be used to submit the simulation of large samples to the LHC Computing Grid. The simulation takes as input relevant generator-level information, e.g. from PYTHIA, while the outputs are produced directly in the NANOAOD format. The underlying models are state-of-the-art continuous flows, trained through Flow Matching.
With this work, we aim to demonstrate that this end-to-end approach to simulation is capable of meeting experimental demands, both in the short term and in view of the HL-LHC, and to update the LHC community on recent developments.
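For orientation, the sketch below illustrates the kind of conditional flow-matching training step that underlies such continuous-flow models: a velocity field is regressed onto linear noise-to-data paths, conditioned on generator-level inputs. All names (velocity_net, reco_target, gen_cond) are illustrative placeholders under stated assumptions, not the actual FlashSim implementation.

```python
# Minimal sketch of a conditional flow-matching training objective (PyTorch).
# Names and shapes are illustrative, not the FlashSim code itself.
import torch

def flow_matching_loss(velocity_net, reco_target, gen_cond):
    """Conditional flow-matching loss for one batch.

    reco_target : (B, D) analysis-level (NANOAOD-like) target features
    gen_cond    : (B, C) generator-level conditioning features
    velocity_net: model predicting a velocity field v(x_t, t, cond)
    """
    noise = torch.randn_like(reco_target)                      # x_0 ~ N(0, I)
    t = torch.rand(reco_target.size(0), 1,
                   device=reco_target.device)                  # t ~ U(0, 1)

    # Linear interpolation path between noise and data.
    x_t = (1.0 - t) * noise + t * reco_target
    # Target velocity along this path.
    u_t = reco_target - noise

    v_pred = velocity_net(x_t, t, gen_cond)
    return torch.mean((v_pred - u_t) ** 2)
```

At generation time, the learned velocity field would be integrated with an ODE solver from noise to the analysis-level features, conditioned on the generator-level inputs.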
Would you like to be considered for an oral presentation? | Yes