Speaker
Description
High-precision calculations are an indispensable ingredient of the success of the LHC physics programme, yet their poor computing efficiency has been a growing cause for concern, threatening to become a paralysing bottleneck in the coming years. We present solutions to this problem, focussing on two major components of general-purpose Monte Carlo event generators: the evaluation of parton-distribution functions and the generation of perturbative matrix elements. We show that for the cost-driving event samples employed by the ATLAS experiment to model omnipresent irreducible Standard Model backgrounds, such as weak-boson+jets and top-quark-pair production, these components account for up to 80% of the overall run time. We demonstrate that the computing footprint of LHAPDF and SHERPA can be reduced by factors of around 50 for multi-leg NLO event generation, thereby surpassing one of the major milestones set by the HSF event generator working group whilst paving the way towards affordable state-of-the-art event simulation in the HL-LHC era.
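To make the cost structure concrete, here is a minimal Python sketch (illustrative only, not the code used in this work) of how an event generator queries LHAPDF grids, and of how caching repeated lookups, one simple example of the kind of optimisation pursued here, can cut the cost. The set name "CT18NNLO" and the toy event loop are assumptions for illustration.

```python
# Illustrative sketch only: shows where PDF-evaluation cost arises and
# how caching repeated lookups can reduce it. The set name and the toy
# loop are placeholders, not the setups profiled in this work.
from functools import lru_cache

import lhapdf  # LHAPDF Python bindings

pdf = lhapdf.mkPDF("CT18NNLO", 0)  # central member of the set

@lru_cache(maxsize=100_000)
def xfx(pid: int, x: float, q2: float) -> float:
    """Memoised wrapper around the grid-interpolation call xfxQ2."""
    return pdf.xfxQ2(pid, x, q2)

# Toy "event loop": multi-leg NLO events query the grids many times per
# event, often at recurring kinematic points (different subprocesses,
# scale variations), which is exactly where caching pays off.
for _ in range(1_000_000):
    weight = xfx(21, 1.0e-3, 1.0e4) * xfx(2, 1.0e-2, 1.0e4)
```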
References
in preparation
Significance
This presentation covers a new targeted effort enabled by the SWIFT-HEP project, bringing together experimentalists and MC developers to greatly improve the computational efficiency of multi-leg NLO calculations, guided by dedicated CPU profiling of these setups, which are typically the most expensive ones produced by the LHC experiments. The resulting improvements achieve a significant milestone set by the HSF generators working group and will help the experiments stay within their projected computing budgets in the coming years by making high-precision calculations more affordable as we head into the high-luminosity phase of the LHC.
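As a hedged illustration of the kind of CPU profiling referred to above (the actual study profiled full SHERPA+LHAPDF production setups, not this toy), one could use Python's built-in cProfile to locate the hot spots of a PDF-evaluation loop:

```python
# Toy profiling sketch: demonstrates only the mechanics of finding hot
# spots; the PDF set name and the loop are illustrative placeholders.
import cProfile
import pstats

import lhapdf  # LHAPDF Python bindings

pdf = lhapdf.mkPDF("CT18NNLO", 0)

def toy_event_loop(n: int) -> float:
    """Repeatedly evaluate the gluon PDF over a sweep of x values."""
    total = 0.0
    for i in range(n):
        x = 10.0 ** (-4.0 + 3.0 * (i % 100) / 100.0)  # x in [1e-4, 1e-1)
        total += pdf.xfxQ2(21, x, 1.0e4)
    return total

cProfile.run("toy_event_loop(200_000)", "pdf.prof")
stats = pstats.Stats("pdf.prof")
stats.sort_stats("cumulative").print_stats(10)  # show the top-10 entries
```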
Experiment context, if any
relevant for the LHC experiments, mainly ATLAS and CMS (abstract does not require involvement of the experiments' publication boards)