Di-Higgs production at the LHC in association with missing transverse energy is explored in the context of simplified models that generically parameterize a large class of models with heavy scalars and dark matter candidates. Our aim is to assess the improvement that machine-learning tools can deliver over traditional cut-based analyses. In particular, boosted decision trees and neural networks are implemented to determine the parameter space that can be probed at the LHC by requiring four b-jets and large missing energy in the final state. We compare the performance of the two machine-learning algorithms, based on the maximum significance reached, by feeding them different sets of kinematic features corresponding to the LHC at a center-of-mass energy of 14 TeV. Both algorithms perform very similarly and substantially improve on traditional analyses, being sensitive to most of the parameter space considered for a total integrated luminosity of 1/ab, with significances at the evidence level, and even at the discovery level, depending on the masses of the new heavy scalars. A more conservative approach with a 30% systematic uncertainty on the background has also been considered, again yielding very promising significances.
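
For concreteness, the following is a minimal sketch of the kind of workflow described above: a boosted decision tree is trained on kinematic features, and its output score is scanned for the cut that maximizes the Asimov discovery significance, optionally including a 30% relative systematic uncertainty on the background yield. All features, yields, and hyperparameters below are illustrative placeholders, not the values used in the study.

```python
# Hedged sketch of a BDT-based significance scan; datasets and weights are
# placeholders, not the analysis samples of the talk.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def asimov_z(s, b, rel_syst=0.0):
    """Median discovery significance (Asimov approximation, Cowan et al.).
    rel_syst is the relative systematic uncertainty on the background,
    e.g. 0.30 for the conservative 30% scenario."""
    if rel_syst == 0.0:
        return np.sqrt(2.0 * ((s + b) * np.log(1.0 + s / b) - s))
    sb2 = (rel_syst * b) ** 2
    t1 = (s + b) * np.log((s + b) * (b + sb2) / (b * b + (s + b) * sb2))
    t2 = (b * b / sb2) * np.log(1.0 + sb2 * s / (b * (b + sb2)))
    return np.sqrt(2.0 * (t1 - t2))

# Placeholder events: rows = events, columns = kinematic features such as
# missing ET, b-jet pTs, m(bb) of the Higgs candidates, Delta R(bb), HT, ...
rng = np.random.default_rng(0)
X_sig = rng.normal(1.0, 1.0, size=(5000, 8))   # toy signal sample
X_bkg = rng.normal(0.0, 1.0, size=(50000, 8))  # toy background sample
X = np.vstack([X_sig, X_bkg])
y = np.concatenate([np.ones(len(X_sig)), np.zeros(len(X_bkg))])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
bdt.fit(X_tr, y_tr)
score = bdt.predict_proba(X_te)[:, 1]

# Hypothetical per-event weights normalising to expected yields at 1/ab.
w_sig, w_bkg = 0.01, 0.1
best_z, best_cut = 0.0, None
for c in np.linspace(0.1, 0.95, 18):
    s = w_sig * np.sum(score[y_te == 1] > c)
    b = w_bkg * np.sum(score[y_te == 0] > c)
    if b <= 0.0:
        continue  # skip cuts with no surviving background
    z = asimov_z(s, b, rel_syst=0.30)
    if z > best_z:
        best_z, best_cut = z, c
print(f"best Z = {best_z:.2f} at score cut {best_cut:.2f}")
```

A neural-network classifier could be swapped in for the BDT with the same score scan, which is essentially how the two algorithms are compared on equal footing.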