IML Machine Learning Working Group

304/1-001 (CERN)




Topic: Optimal transport and invertible algorithms

Host: Simon Akar
Alternative hosts: Riccardo Torre, Fabio Catalano
    • 3:00 PM – 3:05 PM
      News 5m
      Speakers: Anja Butter, Fabio Catalano (University and INFN Torino (IT)), Lorenzo Moneta (CERN), Michael Kagan (SLAC National Accelerator Laboratory (US)), Dr Pietro Vischia (Universite Catholique de Louvain (UCL) (BE)), Simon Akar (University of Cincinnati (US)), Stefano Carrazza (CERN)
    • 3:05 PM – 3:30 PM
      Event Generation and Density Estimation with Surjective Normalizing Flows 25m

      Normalizing flows are a class of generative models that enable exact likelihood evaluation. While these models have already found various applications in particle physics, they are not flexible enough to model many of the peripheral features of collision events. Using the framework of Nielsen et al. (2020), we introduce several surjective and stochastic transform layers into a baseline normalizing flow to improve the modelling of permutation symmetry, varying dimensionality and discrete features, all of which are commonly encountered in particle physics events. We assess their efficacy in the generation of a matrix-element-level process, and in anomaly detection in detector-level LHC events.

      Speaker: Dr Rob Verheyen
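      The abstract's starting point, exact likelihood evaluation in a normalizing flow, can be illustrated in a few lines. Below is a minimal NumPy sketch of a one-dimensional affine flow with a standard-normal base distribution; the function name and the parameters `mu` and `sigma` are illustrative, and this toy is not the surjective architecture of the talk:

```python
import numpy as np

def affine_flow_logpdf(x, mu, sigma):
    """Exact log-likelihood under an affine flow with a standard-normal base.

    Change of variables: z = (x - mu) / sigma, so
    log p(x) = log N(z; 0, 1) - log|sigma|, where -log|sigma| is the
    log-determinant of the Jacobian of the transform.
    """
    z = (x - mu) / sigma
    base = -0.5 * (z**2 + np.log(2 * np.pi))
    return base - np.log(np.abs(sigma))

# Evaluate the exact density of samples drawn from N(2, 0.5^2)
rng = np.random.default_rng(0)
x = 2.0 + 0.5 * rng.standard_normal(1000)
ll = affine_flow_logpdf(x, mu=2.0, sigma=0.5)
```

      The same bookkeeping (base log-density plus the log-determinant of the Jacobian) composes layer by layer in deep flows, which is what makes the likelihood exact rather than a bound.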
    • 3:30 PM – 3:35 PM
      Question time 5m
    • 3:35 PM – 4:00 PM
      The Unsupervised DNN Likelihood: Learning Likelihoods with Normalizing Flows 25m

      Full statistical models encapsulate the complete information of an experimental result, including the likelihood function given the observed data. Their proper publication is of vital importance for a long-lasting legacy of HEP experiments. However, statistical models are often complex, high-dimensional functions that are not straightforward to parametrize. In the context of LHC results, the full likelihoods alone can involve parameters of interest and nuisance parameters numbering in the hundreds. We therefore propose to describe them with Normalizing Flows (NFs), a modern type of generative network that explicitly learns the probability density distribution. As a proof of concept, we focused on two likelihoods from global fits to SM observables and the likelihood of a new-physics-like search, obtaining accurate results in all cases. Furthermore, to ensure that NFs can be used systematically for likelihood learning, we performed a general study in which we tested several types of flows against distributions of increasing complexity and dimensionality. The study showed that, in particular, the so-called neural spline flows can efficiently describe even the most complex probability density functions we implemented. We hope that our proposal will be useful not only for publishing likelihoods from LHC analyses, but also for those from phenomenological studies or from other types of experiments.

      Speaker: Dr Humberto Reyes-González (University of Genoa)
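      The core idea of likelihood learning, fitting a flow's parameters to samples by maximum likelihood and then evaluating the learned density in closed form, can be sketched in a toy setting. The following stand-in uses a single affine flow (whose maximum-likelihood fit is closed-form), not the neural spline flows of the study, and the sample distribution is invented for illustration:

```python
import numpy as np

# Stand-in for samples drawn from a published experimental likelihood
rng = np.random.default_rng(1)
samples = rng.normal(loc=-1.3, scale=2.1, size=50_000)

# Maximum-likelihood fit of an affine flow x = mu + sigma * z: closed form
# for this one-layer case (spline flows require gradient-based training)
mu_hat = samples.mean()
sigma_hat = samples.std()

def learned_logpdf(x):
    """Density of the fitted flow via the change-of-variables formula."""
    z = (x - mu_hat) / sigma_hat
    return -0.5 * (z**2 + np.log(2 * np.pi)) - np.log(sigma_hat)
```

      Once trained, `learned_logpdf` plays the role of the published likelihood: it can be evaluated anywhere in parameter space without rerunning the original fit.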
    • 4:00 PM – 4:05 PM
      Question time 5m
    • 4:05 PM – 4:30 PM
      MadNIS - Neural Multi-Channel Importance Sampling 25m

      Theory predictions for the LHC require precise numerical phase-space integration and the generation of unweighted events. We combine machine-learned multi-channel weights with a normalizing flow for importance sampling to improve on classical methods for numerical integration. We develop an efficient bidirectional training scheme for the invertible network, combining online and buffered training for potentially expensive integrands. We illustrate our method on the Drell-Yan process with an additional narrow resonance.

      Speaker: Ramon Winterhalder (UC Louvain)
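      Multi-channel importance sampling, the classical backbone that MadNIS augments, can be sketched with fixed channel weights and analytic channel densities in place of the learned weights and flows. This toy integrates a narrow Breit-Wigner peak plus a flat background on [0, 1]; all parameter values (`m`, `g`, `alpha`) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
m, g = 0.5, 0.01                 # resonance position and width

def f(x):
    """Toy integrand: narrow Breit-Wigner peak plus a flat background."""
    return 1.0 / ((x - m) ** 2 + g ** 2) + 1.0

# Channel 1: truncated Cauchy matched to the peak; channel 2: uniform.
half = np.arctan(0.5 / g)        # arctan((1 - m) / g) on this symmetric interval

def q1(x):
    """Normalized channel-1 density on [0, 1]."""
    return 1.0 / ((x - m) ** 2 + g ** 2) / (2.0 * half / g)

def sample_q1(n):
    """Inverse-CDF sampling of the truncated Cauchy channel."""
    u = rng.uniform(-half, half, n)
    return m + g * np.tan(u)

alpha = 0.7                      # fixed channel weight (learned in MadNIS)
n = 200_000
pick = rng.uniform(size=n) < alpha
x = np.where(pick, sample_q1(n), rng.uniform(size=n))
q = alpha * q1(x) + (1.0 - alpha) * 1.0     # mixture density
estimate = np.mean(f(x) / q)     # analytic value: 2*arctan(50)/g + 1, about 311.2
```

      The peak channel absorbs the resonance's variance while the flat channel covers the rest of phase space; MadNIS replaces both the fixed `alpha` and the analytic channel maps with trained networks.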
    • 4:30 PM – 4:35 PM
      Question time 5m
    • 4:35 PM – 5:00 PM
      Model-independent measurements of standard model cross sections with domain adaptation 25m

      With the ever-growing amount of data collected by the ATLAS and CMS experiments at the CERN LHC, fiducial and differential measurements of the Higgs boson production cross section have become important tools for testing standard model predictions with an unprecedented level of precision, as well as for searching for deviations that may reveal physics beyond the standard model. These measurements are in general designed to be easily comparable to any present or future theoretical prediction, and to achieve this goal it is important to keep the model dependence to a minimum. Nevertheless, reducing the model dependence usually comes at the expense of measurement precision, preventing the full potential of the signal extraction procedure from being exploited. In this paper a novel methodology based on the machine-learning concept of domain adaptation is proposed, which allows the use of a complex deep neural network in the signal extraction procedure while ensuring a minimal dependence of the measurements on the theoretical modelling of the signal.

      Speaker: Benedetta Camaiani (Universita e INFN, Firenze (IT))
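      The abstract does not specify the adaptation mechanism; one common realization of domain adaptation is a gradient-reversal adversary (in the style of Ganin and Lempitsky), sketched below in NumPy. The data, the linear "feature extractor", and all hyperparameters are invented for illustration: two signal "theory models" (domains) differ only in the second feature, and the reversed domain gradient pushes the discriminant to ignore that model-dependent direction:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: background at the origin; signal simulated with two theory
# models (domains) that differ only in the second feature.
n = 600
bkg   = rng.normal([0.0, 0.0], 0.5, size=(n, 2))
sig_a = rng.normal([2.0, 0.0], 0.5, size=(n, 2))   # domain A
sig_b = rng.normal([2.0, 1.0], 0.5, size=(n, 2))   # domain B
x = np.vstack([bkg, sig_a, sig_b])
y = np.concatenate([np.zeros(n), np.ones(2 * n)])  # signal/background label
d = np.concatenate([np.full(n, np.nan), np.zeros(n), np.ones(n)])  # domain label
sig = ~np.isnan(d)                                 # domain loss uses signal only

w = np.array([0.1, 0.1])   # linear "feature extractor": h = x @ w
v, c = 0.1, 0.0            # domain-classifier head acting on h
lr, lam = 0.05, 1.0

for _ in range(500):
    h = x @ w
    # Label head: ordinary logistic gradient, minimized as usual
    s = sigmoid(h)
    grad_w_label = ((s - y) @ x) / len(y)
    # Domain head: trained to tell A from B, but its gradient is REVERSED
    # when propagated into w, so h carries no domain information
    hd = h[sig]
    sd = sigmoid(v * hd + c)
    err = (sd - d[sig]) / sig.sum()
    grad_w_dom = (err * v) @ x[sig]
    v -= lr * (err @ hd)
    c -= lr * err.sum()
    w -= lr * (grad_w_label - lam * grad_w_dom)    # gradient reversal
```

      After training, the learned discriminant leans on the first, model-independent feature, which is the sense in which the signal extraction becomes less dependent on the theoretical modelling.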
    • 5:00 PM – 5:05 PM
      Question time 5m