Kyle Stuart Cranmer (New York University (US))
Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of machine learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings, as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the Matrix Element Method (MEM) and Approximate Bayesian Computation (ABC) aim to provide inference on model parameters (such as cross-sections, masses, and couplings). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood-free” setting, where only a simulator is available and one cannot directly compute the likelihood for the data. The MEM, in contrast, tries to compute the likelihood directly by approximating the detector. The proposed approach is similar to ABC in that it provides parameter inference in the “likelihood-free” setting by using a simulator, but it does not require Bayesian inference, and it cleanly separates issues of statistical calibration from the approximations being made. The method is much faster to evaluate than the MEM and does not require a simplified detector description. Furthermore, it generalizes the LHC experiments’ current use of multivariate classifiers for searches and integrates well into existing statistical procedures.
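The core idea behind approximating the likelihood ratio with a classifier can be illustrated in one dimension. The sketch below is not the author's code; it is a toy example (densities, sample sizes, and the hand-rolled logistic regression are all illustrative assumptions). A classifier s(x) trained to distinguish samples drawn from p0 and p1 has odds s/(1-s) that approximate the density ratio p1(x)/p0(x); in the talk's setting the classifier would additionally be parameterized by masses, couplings, and nuisance parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "likelihood-ratio trick": train a classifier between samples from
# p0 = N(0, 1) and p1 = N(1, 1); its odds s/(1-s) approximate p1(x)/p0(x).
n = 20000
x0 = rng.normal(0.0, 1.0, n)   # samples from p0 (e.g. background-like)
x1 = rng.normal(1.0, 1.0, n)   # samples from p1 (e.g. signal-like)
x = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(n), np.ones(n)])  # label 1 = drawn from p1

# Minimal logistic regression s(x) = sigmoid(w*x + b), fit by gradient descent.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(5000):
    s = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= lr * np.mean((s - y) * x)
    b -= lr * np.mean(s - y)

# For these Gaussians the exact log-ratio is log p1(x)/p0(x) = x - 1/2,
# so the fitted classifier should recover w ~ 1 and b ~ -0.5, i.e. the
# decision function itself is (an estimate of) the log likelihood ratio.
print(w, b)
```

The same construction extends to high-dimensional feature vectors, where the exact ratio is intractable but the classifier remains trainable from simulated samples; calibration of the resulting test statistic is then a separate, standard step.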