Speaker
Description
We present a machine-learning-based strategy to detect data departures from a given reference model, with no prior bias on the nature of the new physics responsible for the discrepancy. The main idea behind this method is to build the likelihood-ratio hypothesis test by directly translating the problem of maximizing a likelihood ratio into the minimization of a loss function. A neural network compares the observations with an auxiliary set of reference-distributed events, possibly obtained with a Monte Carlo event generator. The virtues of neural networks as unbiased function approximants make them particularly suited to this task. The algorithm returns a p-value, which measures the compatibility of the reference model with the data. It also identifies the most discrepant phase-space region of the data set, to be selected for further investigation.
The most interesting potential applications are new physics searches in high-energy physics, for which our work provides an end-to-end, signal-model-independent analysis strategy.
Beyond that, our approach could also be used to compare the theoretical predictions of different Monte Carlo event generators, or in data-validation algorithms.
In this talk, after outlining the conceptual foundations of the algorithm [1], we explain how to apply it to a multivariate problem [2] and how to extend it to handle uncertainties on the reference-model predictions, studying the impact of two typical sources of experimental uncertainty in a two-body final-state analysis at the LHC.
[1] https://link.aps.org/doi/10.1103/PhysRevD.99.015014
[2] https://doi.org/10.1140/epjc/s10052-021-08853-y
Significance
We show for the first time how to include a treatment of systematic uncertainties within our method, and demonstrate that it works in a two-body final-state analysis at the LHC.
Speaker time zone: Compatible with Europe