The high-luminosity upgrade of the LHC accelerator will allow CERN's general-purpose detectors, ATLAS and CMS, to take far more data than they do currently, with an instantaneous luminosity of up to $7.5\times10^{34}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}$ and an average pile-up of 200 events. In total, the HL-LHC targets $3\,\mathrm{ab}^{-1}$ of data. To best exploit this physics potential, trigger rates will rise by up to an order of magnitude compared to LHC Runs 2 and 3.
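For scale, a back-of-the-envelope estimate (assuming, purely illustratively, that a typical year delivers about $10^{7}\,\mathrm{s}$ of physics running; this figure is an assumption, not taken from the abstract) connects the peak and integrated luminosities:

$$\int\mathcal{L}\,\mathrm{d}t \;\approx\; 7.5\times10^{34}\,\mathrm{cm}^{-2}\mathrm{s}^{-1} \times 10^{7}\,\mathrm{s} \;=\; 7.5\times10^{41}\,\mathrm{cm}^{-2} \;\approx\; 750\,\mathrm{fb}^{-1}$$

per year at peak, so the full $3\,\mathrm{ab}^{-1} = 3000\,\mathrm{fb}^{-1}$ corresponds to roughly four years at peak luminosity, i.e. of order a decade of real running once machine efficiency and luminosity levelling are taken into account.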
To support this ten-fold increase in the HL-LHC data rate to offline, it will be necessary to generate many more simulated events to match the higher trigger rates. All of this additional computing must fit within a flat budget envelope, implying that detector simulation for the HL-LHC must become much faster than it is today. In this paper we outline a three-pronged strategy for achieving the requisite performance. First, code modernisation and simplification inside Geant4, the main simulation workhorse for the LHC experiments, can improve throughput on modern CPUs by avoiding constant churn in the data and instruction caches; here the lessons from the GeantV R&D project are extremely valuable and will be discussed (a data-layout sketch is given below). Second, fast simulation techniques, which replace traditional particle transport with parametric detector responses, will need to be used more widely. Research into which techniques are generally applicable across detector types (particularly calorimeters) is very active, as is work on how best to use machine learning approaches and to integrate them into Geant4 (see the fast-simulation sketch below). Finally, non-CPU devices such as GPUs could offer new ways to approach detector simulation by taking advantage of very different hardware, and could provide a route to exploiting next-generation systems that offer different computing opportunities.
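To make the first prong concrete, the following sketch (illustrative only: the types, names, and the trivial "physics" are hypothetical, not actual GeantV or Geant4 code) shows the structure-of-arrays "basket" idea explored in GeantV, in which many similar tracks are transported together so that a stepping kernel sweeps contiguous data:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical structure-of-arrays "basket" of tracks, in the spirit of the
// GeantV R&D: similar tracks are grouped so one kernel sweeps contiguous
// arrays instead of chasing per-particle objects through memory.
struct TrackBasket {
  std::vector<double> x, y, z;     // positions
  std::vector<double> px, py, pz;  // momentum components
  std::vector<double> ekin;        // kinetic energies
  std::size_t size() const { return ekin.size(); }
};

// One transport step over the whole basket. The loop touches each array
// sequentially, keeping the data cache warm and letting the compiler
// auto-vectorise, in contrast to a per-track virtual-dispatch stepping loop.
void StepBasket(TrackBasket& b, double stepLength, double dEdx) {
  for (std::size_t i = 0; i < b.size(); ++i) {
    const double p =
        std::sqrt(b.px[i] * b.px[i] + b.py[i] * b.py[i] + b.pz[i] * b.pz[i]);
    if (p <= 0.0) continue;  // nothing to transport
    // Advance along the direction of flight...
    b.x[i] += stepLength * b.px[i] / p;
    b.y[i] += stepLength * b.py[i] / p;
    b.z[i] += stepLength * b.pz[i] / p;
    // ...and apply a crude continuous energy loss (placeholder physics).
    b.ekin[i] = std::max(0.0, b.ekin[i] - dEdx * stepLength);
  }
}
```

The same layout is part of what makes the third prong attractive: a basket of similar tracks maps naturally onto a GPU thread block, with one thread per track.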
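For the second prong, Geant4 already provides the hook points for this kind of replacement: a model attached to a detector region can claim tracks and short-circuit full transport. The interface below (G4VFastSimulationModel with its IsApplicable/ModelTrigger/DoIt methods) is the real Geant4 one, but the model itself, its name, its $100\,\mathrm{MeV}$ threshold, and its single-deposit response are hypothetical placeholders for a genuine parametric or ML-based calorimeter response:

```cpp
#include "G4Electron.hh"
#include "G4FastStep.hh"
#include "G4FastTrack.hh"
#include "G4Positron.hh"
#include "G4SystemOfUnits.hh"
#include "G4VFastSimulationModel.hh"

// Hypothetical parametric e+/e- shower model attached to a calorimeter
// envelope region: when it triggers, Geant4 skips full transport and lets
// the model produce the detector response instead.
class ParametricEMShowerModel : public G4VFastSimulationModel {
 public:
  explicit ParametricEMShowerModel(G4Region* envelope)
      : G4VFastSimulationModel("ParametricEMShowerModel", envelope) {}

  // Claim electrons and positrons only.
  G4bool IsApplicable(const G4ParticleDefinition& particle) override {
    return &particle == G4Electron::Definition() ||
           &particle == G4Positron::Definition();
  }

  // Take over only above an illustrative energy threshold.
  G4bool ModelTrigger(const G4FastTrack& fastTrack) override {
    return fastTrack.GetPrimaryTrack()->GetKineticEnergy() > 100. * MeV;
  }

  // Kill the track and deposit its energy in one go. A real model (or an
  // ML surrogate) would distribute hits longitudinally and laterally.
  void DoIt(const G4FastTrack& fastTrack, G4FastStep& fastStep) override {
    const G4double energy = fastTrack.GetPrimaryTrack()->GetKineticEnergy();
    fastStep.KillPrimaryTrack();
    fastStep.ProposeTotalEnergyDeposited(energy);
  }
};
```

In practice the envelope G4Region would be the calorimeter volume, and the machine-learning work mentioned above amounts, in this picture, to replacing the body of DoIt with a trained generative model of the shower.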
We will present preliminary results from all three of these areas and discuss why all of them will likely be necessary to meet the challenge of HL-LHC detector simulation.