Description
Deep Neural Networks (DNNs) are widely used as multi-classifiers in modern HEP analyses. In standard categorisation methods, the high-dimensional DNN output is often reduced to a one-dimensional distribution by passing only the highest class score to the statistical inference method, so that correlations with the other classes are discarded.
Moreover, common statistical inference tools require the classification values to be binned, a choice that relies on the researcher's expertise and is often non-trivial. To overcome the challenge of binning multiple dimensions while preserving the correlations in the event-level classification information, we perform K-means clustering on the high-dimensional DNN output to define bins without marginalising any axes (see the sketch below).
We evaluate our method in the context of a simulated cross section measurement at the CMS experiment, showing an increased expected sensitivity over the standard binning approach.
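The abstract does not specify an implementation; the following is a minimal Python sketch of the described idea, clustering the full multi-class DNN output into bins instead of cutting on the highest class score only. The event count, number of classes, number of clusters, and score distribution are illustrative assumptions, not values from the analysis.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative stand-in for the multi-class DNN output:
# one softmax score vector per event (n_events x n_classes).
rng = np.random.default_rng(42)
n_events, n_classes = 10_000, 5
dnn_scores = rng.dirichlet(alpha=np.ones(n_classes), size=n_events)

# Standard approach: keep only the highest class score per event,
# discarding correlations with the other class scores.
max_score = dnn_scores.max(axis=1)
standard_bins = np.digitize(max_score, bins=np.linspace(0.2, 1.0, 11))

# Clustering approach sketched in the abstract: run K-means directly
# on the full DNN output so that every axis contributes to the binning
# and no dimension is marginalised.
n_bins = 20  # assumed; the real analysis would optimise this choice
kmeans = KMeans(n_clusters=n_bins, n_init=10, random_state=0)
cluster_bins = kmeans.fit_predict(dnn_scores)

# Each cluster label acts as one bin of a one-dimensional histogram
# that can be passed to a binned statistical inference tool.
counts = np.bincount(cluster_bins, minlength=n_bins)
print(counts)
```

In a real analysis the cluster centres would presumably be derived from simulation and kept fixed when filling the histograms passed to the statistical inference, with the number of clusters tuned against the expected sensitivity.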
Significance
DNNs have proven to be an indispensable tool in searches for rare processes at the LHC. The sensitivity of an analysis, however, can suffer from the search for an optimal binning, which often results in a substantial reduction of the high-dimensional DNN output. This study utilises the full DNN prediction to increase the overall sensitivity of a HEP analysis and is applicable to a wide range of multi-classifier use cases.
Experiment context
Simulations of the CMS experiment