Speaker
Description
The publication of full likelihood functions (LFs) of LHC results is vital for a long-lasting and profitable legacy of the LHC. Although major steps have been taken in this direction, the systematic publication of LFs remains a significant challenge in High Energy Physics (HEP), as these distributions are usually complex and high-dimensional. We therefore propose to describe LFs with Normalizing Flows (NFs), a powerful class of expressive generative networks that provide density estimation by construction. In this talk, we show that NFs can accurately model the complex, high-dimensional LFs found in HEP, in some cases even with relatively small training samples. This approach opens the possibility of compact and efficient characterisations of the LFs derived from LHC searches, SM measurements, phenomenological studies, etc.
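As a brief sketch of why NFs provide density estimation by construction (this is the standard NF formulation; the symbols $f_\theta$, $p_z$, and $p_x$ are our notation, not taken from the submission): a flow is an invertible map $f_\theta$ from data space to a simple base distribution $p_z$, and the change-of-variables formula gives the modelled density exactly,
\[
p_x(x) = p_z\big(f_\theta(x)\big)\,\left|\det \frac{\partial f_\theta(x)}{\partial x}\right|,
\]
so an LF can be approximated by maximising $\sum_i \log p_x(x_i)$ over samples $x_i$ drawn from it, after which $p_x$ itself serves as the published, queryable surrogate of the likelihood.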
Significance
The systematic publication of full likelihood functions (LFs) of LHC results is a hot topic in HEP. It would allow for new statistical interpretations in the future, for more accurate re-interpretations of the results in the context of different theoretical models, etc. However, there is no strong consensus on how these LFs should be published, especially since they are often complex, high-dimensional distributions. We therefore present a novel approach for modeling and sharing these LFs using Normalizing Flows. We believe that NFs are very well suited to this task and could be used systematically.
Experiment context, if any
LHC experiments, especially ATLAS and CMS. Usage could be extended to other experiments.