IML Machine Learning Working Group

Europe/Zurich
6/2-024 - BE Auditorium Meyrin (CERN)

Description

Open topic

Zoom Meeting ID
96543252431
Host
Simon Akar
Alternative hosts
Riccardo Torre, Fabio Catalano
    • 15:00 15:05
      News 5m
      Speakers: Anja Butter (Centre National de la Recherche Scientifique (FR)), Fabio Catalano (University and INFN Torino (IT)), Julian Garcia Pardinas (CERN), Lorenzo Moneta (CERN), Michael Kagan (SLAC National Accelerator Laboratory (US)), Dr Pietro Vischia (Universidad de Oviedo and Instituto de Ciencias y Tecnologías Espaciales de Asturias (ICTEA)), Simon Akar (University of Cincinnati (US)), Stefano Carrazza (CERN)
    • 15:05 15:30
      Generative transformers and how to evaluate them 25m

      With the increase in luminosity and detector granularity, simulation will be a significant computational challenge at the HL-LHC. To tackle this, we present developments in machine learning (ML) graph- [1, 2] and attention-based [3] models for generating jets at the LHC using sparse, efficient point-cloud representations of our data, which offer a three-orders-of-magnitude improvement in latency compared to full (Geant4) simulation. We also present studies on metrics for validating ML-based simulations, including the novel Fréchet and kernel physics distances, which are found to be highly sensitive to typical mismodelling by ML generative models [3]. (A minimal illustrative sketch of such a distance follows the references below.)
      [1] ML4PS @ NeurIPS 2020, https://arxiv.org/abs/2012.00173
      [2] NeurIPS 2021, https://arxiv.org/abs/2106.11535
      [3] 2022, https://arxiv.org/abs/2211.10295
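
      The following is a minimal, illustrative sketch of a Fréchet-style distance between real and generated feature sets under a Gaussian approximation, in the spirit of the Fréchet physics distance mentioned above; it is not the authors' implementation, and the feature arrays, dimensionality, and random inputs are placeholders.

      import numpy as np
      from scipy import linalg

      def frechet_distance(feats_real, feats_gen):
          """Frechet distance between two feature sets under a Gaussian approximation.

          feats_real, feats_gen: arrays of shape (n_samples, n_features),
          e.g. jet-level observables such as mass or substructure variables (placeholder choice).
          """
          mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
          cov1 = np.cov(feats_real, rowvar=False)
          cov2 = np.cov(feats_gen, rowvar=False)
          covmean = linalg.sqrtm(cov1 @ cov2)          # matrix square root of the covariance product
          if np.iscomplexobj(covmean):                 # numerical noise can leave tiny imaginary parts
              covmean = covmean.real
          diff = mu1 - mu2
          return diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean)

      # Toy usage with random stand-in features (not real or simulated jets)
      rng = np.random.default_rng(0)
      real = rng.normal(size=(5000, 4))
      gen = rng.normal(loc=0.05, size=(5000, 4))
      print(frechet_distance(real, gen))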

      Speaker: Raghav Kansal (Univ. of California San Diego (US))
    • 15:30 15:35
      Question time 5m
    • 15:35 16:00
      Anomaly detection and self-supervised representation learning 25m

      Autoencoders are an effective analysis tool for model-agnostic searches at the LHC. Unfortunately, their out-of-distribution (OOD) detection performance is known not to be robust and to depend heavily on how compressible the signals are. Even if a neural network can learn the physical content of the low-level data, the gain in sensitivity to features of interest can be hindered by redundant information that is already explainable in terms of known physics. This poses the problem of constructing a representation space in which known physical symmetries are manifest and discriminating features are retained. I will present ideas in both directions: a Normalized Auto-Encoder (NAE), a robust OOD detector based on an energy-based model, and self-supervised contrastive training that produces optimized observables for jet tagging (JetCLR) and anomaly detection (AnomCLR).
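
      As a rough, hedged illustration of the self-supervised contrastive side (not the actual JetCLR/AnomCLR code), the sketch below implements a SimCLR-style NT-Xent loss in PyTorch on two augmented views of the same batch of jets; the encoder outputs, batch size, and temperature are placeholder assumptions.

      import torch
      import torch.nn.functional as F

      def ntxent_loss(z1, z2, temperature=0.1):
          """SimCLR-style NT-Xent loss on two augmented views of the same batch.

          z1, z2: (batch, dim) embeddings of two symmetry-augmented views
          (e.g. rotated or smeared jet constituents) from the same encoder.
          """
          z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
          z = torch.cat([z1, z2], dim=0)              # (2B, dim)
          sim = z @ z.T / temperature                 # cosine similarities used as logits
          sim.fill_diagonal_(float("-inf"))           # mask self-similarity
          n = z.shape[0]
          targets = (torch.arange(n, device=z.device) + n // 2) % n   # positives sit half a batch apart
          return F.cross_entropy(sim, targets)

      # Toy usage with random stand-in embeddings
      loss = ntxent_loss(torch.randn(128, 64), torch.randn(128, 64))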

      Speaker: Luigi Favaro (University of Heidelberg)
    • 16:00 16:05
      Question time 5m
    • 16:05 16:30
      Versatile Energy-Based Models for High Energy Physics 25m

      Energy-based models have the natural advantage of flexibility in the choice of the energy function, and they have recently achieved great success in modeling high-dimensional data in computer vision and natural language processing. Building on this progress, we construct a versatile energy-based model for High Energy Physics events at the Large Hadron Collider. The framework rests on a powerful generative model, describes higher-order inter-particle interactions, accommodates different encoder architectures, and relies on implicit generation. In terms of applications, it can serve as a powerful parameterized event generator, a generic anomalous-signal detector, and an augmented event classifier.
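
      As a hedged illustration of implicit generation in an energy-based model (a sketch under simplified assumptions, not the speaker's implementation), the code below draws approximate samples by Langevin dynamics on a user-supplied energy function and forms a contrastive-divergence-style loss; the quadratic toy energy and the event dimensionality are placeholders.

      import torch

      def langevin_sample(energy_fn, x_init, n_steps=60, step_size=1e-2):
          """Implicit generation: approximate samples from p(x) proportional to exp(-E(x))."""
          x = x_init.detach().clone().requires_grad_(True)
          for _ in range(n_steps):
              grad, = torch.autograd.grad(energy_fn(x).sum(), x)
              with torch.no_grad():                   # noisy gradient descent on the energy
                  x = x - step_size * grad + (2.0 * step_size) ** 0.5 * torch.randn_like(x)
              x.requires_grad_(True)
          return x.detach()

      def contrastive_divergence_loss(energy_fn, x_data, x_model):
          """Push the energy down on data events and up on sampled (model) events."""
          return energy_fn(x_data).mean() - energy_fn(x_model).mean()

      # Toy usage: a quadratic stand-in energy over flattened event features
      energy_fn = lambda x: (x ** 2).sum(dim=1)
      x_data = torch.randn(256, 32)                   # stand-in for encoded LHC events
      x_model = langevin_sample(energy_fn, torch.randn(256, 32))
      loss = contrastive_divergence_loss(energy_fn, x_data, x_model)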

      Speaker: Taoli Cheng (University of Montreal)
    • 16:30 16:35
      Question time 5m