Nina Elmer, 9/10/24, 6:00 PM, Poster
Estimating uncertainties is a fundamental aspect of every physics problem: no measurement or calculation comes without uncertainty. Hence it is crucial to consider the effect of uncertainties when applying a neural network to problems in physics. I will present our work on amplitude regression, using loop amplitudes from LHC processes, as an example to examine the impact of different uncertainties on the...
Go to contribution page -
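The abstract leaves the uncertainty-estimation method to the talk itself; as a generic, invented sketch of one standard approach (a bootstrap ensemble whose prediction spread serves as a training-uncertainty proxy, with polynomial fits standing in for the amplitude network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a loop amplitude: a smooth function of one kinematic variable.
def amplitude(x):
    return np.sin(3 * x) + 0.5 * x

x_train = rng.uniform(0, 2, 50)
y_train = amplitude(x_train) + rng.normal(0, 0.05, 50)   # noisy training labels

# Ensemble of regressors, each fit on a bootstrap resample of the training set.
# The spread of their predictions is a crude estimate of training uncertainty.
x_test = np.linspace(0, 2, 100)
preds = []
for _ in range(20):
    idx = rng.integers(0, len(x_train), len(x_train))
    coeffs = np.polyfit(x_train[idx], y_train[idx], deg=5)
    preds.append(np.polyval(coeffs, x_test))
preds = np.array(preds)

mean, spread = preds.mean(axis=0), preds.std(axis=0)
```

The same logic carries over when the polynomial fit is replaced by network training; Bayesian networks and repulsive ensembles are common alternatives in this line of work.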
Samuele Grossi (Università degli studi di Genova & INFN sezione di Genova), 9/10/24, 6:01 PM, Poster
I will present and discuss several proposed metrics, based on integral probability measures, for the evaluation of generative models (and, more generally, for the comparison of different generators). Some of the metrics are particularly efficient to compute in parallel and show good performance. I will first compare the metrics on toy multivariate/multimodal distributions, and then focus...
Go to contribution page -
119. Limits to classification performance by relating Kullback-Leibler divergence to Cohen's Kappa, Stephen Watts, 9/10/24, 6:02 PM, Poster
The performance of machine learning classification algorithms is evaluated by estimating metrics, often from the confusion matrix, using training data and cross-validation. However, these do not prove that the best possible performance has been achieved. Fundamental limits to error rates can be estimated using information distance measures. To this end, the confusion matrix has been...
Go to contribution page -
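The truncated abstract relates Kullback-Leibler divergence to Cohen's kappa via the confusion matrix; both ingredients are cheap to compute. A minimal sketch of the two quantities (the specific relation derived in the poster is not reproduced here):

```python
import numpy as np

def cohens_kappa(cm):
    """Cohen's kappa from a confusion matrix (rows: true class, cols: predicted)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_o = np.trace(cm) / n                        # observed agreement
    p_e = (cm.sum(0) * cm.sum(1)).sum() / n ** 2  # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

def kl_divergence(p, q):
    """KL divergence D(p||q) in nats between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```

For example, the confusion matrix [[45, 5], [10, 40]] gives kappa = 0.7, while a perfect classifier gives kappa = 1.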
Emanuel Lorenz Pfeffer (KIT - Karlsruhe Institute of Technology (DE)), 9/10/24, 6:03 PM
Data analyses in the high-energy particle physics (HEP) community increasingly exploit advanced multivariate methods to separate signal from background processes. In this talk, a maximally unbiased, in-depth comparison of the graph neural network (GNN) architecture, which is gaining popularity in the HEP community, with the already well-established technology of fully connected...
Go to contribution page -
Tom Runting (Imperial College (GB)), 9/10/24, 6:04 PM, Poster
We present a method to accelerate Effective Field Theory reinterpretations using interpolated likelihoods. By employing Radial Basis Functions for interpolation and Gaussian Processes to strategically select interpolation points, we show that we can reduce the computational burden while maintaining accuracy. We apply this in the context of the Combined Higgs Boson measurement at CMS, a complex...
Go to contribution page -
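The poster's interpolation targets CMS likelihoods; as a generic, self-contained illustration of radial-basis-function interpolation of an expensive likelihood surface (a quadratic stand-in here; the real pipeline presumably uses a library interpolator and GP-selected points):

```python
import numpy as np

def rbf_fit(points, values, eps=10.0):
    """Solve for Gaussian radial-basis-function weights through scattered
    likelihood evaluations (points: (n, d) array, values: (n,) array)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.linalg.solve(np.exp(-eps * d2), values)

def rbf_eval(points, weights, x, eps=10.0):
    """Evaluate the interpolant at new parameter points x of shape (m, d)."""
    d2 = ((x[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2) @ weights

# Toy 2D "negative log-likelihood" surface, evaluated on a coarse grid of
# parameter points (the expensive step in a real EFT reinterpretation).
g = np.linspace(-1, 1, 6)
X, Y = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Y.ravel()])
nll = (pts ** 2).sum(axis=1)

w = rbf_fit(pts, nll)
approx = rbf_eval(pts, w, np.array([[0.3, -0.2]]))[0]  # cheap surrogate query
```

In the poster's setting the interpolation nodes are not a fixed grid but are chosen adaptively with a Gaussian process, which is what keeps the number of expensive likelihood evaluations down.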
Dr Marco Letizia, 9/10/24, 6:05 PM, Poster
Traditional statistical methods are often not adequate to perform inclusive and signal-agnostic searches at modern collider experiments delivering large amounts of multivariate data. Machine learning provides a set of tools to enhance analyses in large scale regimes, but the adoption of these methodologies comes with new challenges, such as the lack of efficiency and robustness, and potential...
Go to contribution page -
Joseph Carmignani (University of Liverpool (GB)), 9/10/24, 6:06 PM, Poster
The Multi-disciplinary Use Cases for Convergent Approaches to AI Explainability (MUCCA) project is pioneering efforts to enhance the transparency and interpretability of AI algorithms in complex scientific fields. This study focuses on the application of Explainable AI (XAI) in high-energy physics (HEP), utilising a range of machine learning (ML) methodologies, from classical boosted decision...
Go to contribution page -
Henry Aldridge (UCL), 9/10/24, 6:07 PM, Poster
Bayesian model selection provides a powerful framework for objectively comparing models directly from observed data, without reference to ground truth data. However, Bayesian model selection requires the computation of the marginal likelihood (model evidence), which is computationally challenging, prohibiting its use in many high-dimensional Bayesian inverse problems. With Bayesian imaging...
Go to contribution page -
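As context for why the marginal likelihood is the bottleneck: it is an integral of likelihood times prior over all parameters, tractable by direct quadrature only in very low dimension. A one-dimensional toy (invented numbers, not the poster's imaging problem) makes the quantity concrete:

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(0.5, 1.0, 20)      # toy unit-variance observations

# Evidence Z = integral of likelihood(theta) * prior(theta) over theta,
# here by brute-force trapezoidal quadrature over a single mean parameter.
theta = np.linspace(-5.0, 5.0, 2001)
log_like = (-0.5 * ((data[:, None] - theta[None, :]) ** 2).sum(axis=0)
            - 0.5 * len(data) * np.log(2 * np.pi))
prior = np.exp(-0.5 * theta ** 2) / np.sqrt(2 * np.pi)   # N(0, 1) prior

integrand = np.exp(log_like) * prior
dtheta = theta[1] - theta[0]
evidence = float(0.5 * np.sum(integrand[1:] + integrand[:-1]) * dtheta)
```

In a high-dimensional Bayesian inverse problem this grid blows up exponentially, which is exactly the obstacle the abstract refers to.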
Matthew Price (Mullard Space Science Laboratory, University College London), 9/10/24, 6:08 PM, Poster
Scattering transforms are a new type of summary statistics recently developed for the study of highly non-Gaussian processes, which have been shown to be very promising for astrophysical studies. In particular, they allow one to build generative models of complex non-linear fields from a limited amount of data, and have also been used as the basis of new statistical component separation...
Go to contribution page -
Giovanni De Crescenzo (University of Heidelberg), 9/10/24, 6:09 PM, Poster
We present an application of Simulation-Based Inference (SBI) in collider physics, aiming to constrain anomalous interactions beyond the Standard Model (SM). This is achieved by leveraging Neural Networks to learn otherwise intractable likelihood ratios. We explore methods to incorporate the underlying physics structure into the likelihood estimation process. Specifically, we compare two...
Go to contribution page -
Kiyam Lin, 9/10/24, 6:10 PM, Poster
The standard approach to inference from cosmic large-scale structure data employs summary statistics that are compared to analytic models in a Gaussian likelihood with pre-computed covariance. To overcome many of the idealising assumptions that go into this type of explicit likelihood inference, and to take advantage of the high-fidelity wide field data that Euclid and LSST will provide, we...
Go to contribution page -
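The "summary statistics compared to analytic models in a Gaussian likelihood with pre-computed covariance" baseline that the abstract contrasts with implicit-likelihood inference is, written out (a generic sketch, not the authors' code):

```python
import numpy as np

def gaussian_loglike(data_vec, model_vec, cov):
    """Explicit Gaussian log-likelihood of a summary-statistic vector given an
    analytic model prediction and a pre-computed covariance matrix."""
    resid = data_vec - model_vec
    chi2 = resid @ np.linalg.solve(cov, resid)   # Mahalanobis distance squared
    _, logdet = np.linalg.slogdet(2.0 * np.pi * cov)
    return -0.5 * (chi2 + logdet)
```

Every idealising assumption lives in this one function: Gaussianity of the summaries, an analytic model, and a covariance fixed before the fit; simulation-based inference replaces all three with forward simulations.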
Harry Desmond (University of Portsmouth), 9/10/24, 6:11 PM, Poster
A key challenge in the field of AI is to make machine-assisted discovery interpretable, enabling it not only to uncover correlations but also to improve our physical understanding of the world. A nascent branch of machine learning, Symbolic Regression (SR), aims to discover the optimal functional representations of datasets, producing perfectly interpretable outputs (equations) by...
Go to contribution page -
Lars Stietz (Hamburg University of Technology (DE)), 9/10/24, 6:12 PM, Poster
Precision measurements at the Large Hadron Collider (LHC), such as the measurement of the top quark mass, are essential for advancing our understanding of fundamental particle physics. Profile likelihood fits have become the standard method to extract physical quantities and parameters from the measurements. These fits incorporate nuisance parameters to include systematic uncertainties. The...
Go to contribution page -
Benjamin Boyd (University of Cambridge), 9/10/24, 6:13 PM, Poster
Type Ia supernovae (SNe Ia) are thermonuclear exploding stars that can be used to put constraints on the nature of our universe. One challenge with population analyses of SNe Ia is Malmquist bias, where we preferentially observe the brighter SNe due to limitations of our telescopes. If untreated, this bias can propagate through to our posteriors on cosmological parameters. In this work, we...
Go to contribution page -
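Malmquist bias is easy to demonstrate with a toy simulation: truncate a Gaussian population of magnitudes at a detection limit and the naive sample mean comes out too bright. The numbers below are merely SN Ia-like for illustration, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Population of absolute magnitudes (smaller magnitude = brighter object).
true_mean, sigma = -19.3, 0.5
mags = rng.normal(true_mean, sigma, 100_000)

# A magnitude-limited survey only keeps objects brighter than the limit.
limit = -19.3
observed = mags[mags < limit]

naive_mean = observed.mean()     # biased estimate of the population mean
bias = naive_mean - true_mean    # negative: the selected sample is too bright
```

With the limit at the population mean, the expected offset is -sigma * sqrt(2/pi), roughly -0.4 mag here; left untreated, an offset like this propagates directly into the inferred cosmological parameters.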
Deaglan Bartlett (Institut d'Astrophysique de Paris), 9/10/24, 6:14 PM, Poster
Neural networks are increasingly used to emulate complex simulations due to their speed and efficiency. Unfortunately, many ML algorithms, including (deep) neural networks, lack interpretability. If machines predict something humans do not understand, how can we check (and trust) the results? Even if we could identify potential mistakes, current methods lack effective mechanisms to correct...
Go to contribution page -
Monika Machalová, 9/10/24, 6:15 PM, Poster
The aim of this work is to solve the problem of hadronic jet substructure recognition using classical subjettiness variables available in the parameterized detector simulation package, Delphes. Jets produced in simulated proton-proton collisions are identified as either originating from the decay of a top quark or a W boson and are used to reconstruct the mass of a hypothetical scalar...
Go to contribution page -
Alessio Spurio Mancini (Royal Holloway, University of London), 9/10/24, 6:16 PM, Poster
A new generation of astronomical surveys, such as the recently launched European Space Agency's Euclid mission, will soon deliver exquisite datasets with unparalleled amounts of cosmological information, poised to change our understanding of the Universe. However, analysing these datasets presents unprecedented statistical challenges. Multiple systematic effects need to be carefully accounted...
Go to contribution page -
Kai Lehman (LMU Munich), 9/10/24, 6:17 PM, Poster
How much cosmological information can we reliably extract from existing and upcoming large-scale structure observations? Many summary statistics fall short in describing the non-Gaussian nature of the late-time Universe and modelling uncertainties from baryonic physics. Using simulation based inference (SBI) with automatic data-compression from graph neural networks, we learn optimal summary...
Go to contribution page -
Sofia Palacios Schweitzer (Heidelberg), Tilman Plehn (Heidelberg University), 9/10/24, 6:18 PM, Poster
Many physics analyses at the LHC rely on algorithms to remove detector effects, commonly known as unfolding. Whereas classical methods only work with binned, one-dimensional data, machine learning promises to overcome both limitations. Using a generative unfolding pipeline, we show how it can be built into an existing LHC analysis designed to measure the top mass. We discuss the model-dependence...
Go to contribution page -
Noam Levi (Tel Aviv University), 9/10/24, 6:19 PM, Poster
We introduce Noise Injection Node Regularization (NINR), a method that injects structured noise into Deep Neural Networks (DNNs) during the training stage, resulting in an emergent regularizing effect. We present both theoretical and empirical evidence demonstrating substantial improvements in robustness against various test data perturbations for feed-forward DNNs trained under NINR. The...
Go to contribution page -
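The precise structure of the injected noise is the subject of the paper; the general mechanism (noise added at input nodes during training only, switched off at evaluation) can be sketched as follows, with all shapes and scales invented:

```python
import numpy as np

rng = np.random.default_rng(4)

def forward(x, w, training=False, noise_scale=0.1):
    """One dense ReLU layer with Gaussian noise injected at the input nodes
    during training only; a generic stand-in for the structured injection
    scheme the poster calls NINR."""
    if training:
        x = x + rng.normal(0.0, noise_scale, size=x.shape)
    return np.maximum(0.0, x @ w)

w = rng.normal(0.0, 1.0, (5, 3))
x = np.ones((2, 5))

train_out = forward(x, w, training=True)    # stochastic during training
eval_out = forward(x, w, training=False)    # deterministic at test time
```

Training under such perturbations pushes the network toward functions that are flat around the training inputs, which is the source of the robustness to test-time perturbations that the abstract reports.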
Yuval Yitzhak Frid (Tel Aviv University (IL)), 9/10/24, 6:20 PM, Poster
Background modeling is one of the critical elements of searches for new physics at experiments at the Large Hadron Collider. In many searches, backgrounds are modeled using analytic functional forms. Finding an acceptable function can be complicated, inefficient and time-consuming. This poster presents a novel approach to estimating the underlying PDF of a 1D dataset of samples using Log...
Go to contribution page -
Nathan Huetsch (Heidelberg), Tilman Plehn (Heidelberg University), 9/10/24, 6:21 PM, Poster
The matrix element method is the LHC inference method of choice for limited statistics, as it allows for optimal use of the available information. We present a dedicated machine learning framework based on efficient phase-space integration, a learned acceptance and transfer function, a choice of INN and diffusion networks, and a transformer to solve jet combinatorics. We showcase...
Go to contribution page -
Tilman Plehn (Heidelberg University), Xavier Marino (Heidelberg), 9/10/24, 6:22 PM
Recent innovations from machine learning allow for data unfolding without binning and including correlations across many dimensions. We describe a set of known, upgraded, and new methods for ML-based unfolding. The performance of these approaches is evaluated on the same two datasets. We find that all techniques are capable of accurately reproducing the particle-level spectra across complex...
Go to contribution page -
Víctor Bresó Pla (University of Heidelberg), Poster
We present a detailed comparison of multiple interpolation methods to characterize the amplitude distribution of several Higgs boson production modes at the LHC. Apart from standard interpolation techniques, we develop a new approach based on the use of the Lorentz Geometric Algebra Transformer (L-GATr). L-GATr is an equivariant neural network that is able to encode Lorentz and permutation...
Go to contribution page -
Rahul Srinivasan, Poster
Using floZ, an improved Bayesian evidence (and its numerical uncertainty) estimation method based on normalizing flows, we estimate the Bayes factor in favor of gravitational wave overtones in the ringdown of the first detection. We find good agreement with nested sampling. Provided representative samples from the target posterior are available, our method is more robust to posterior...
Go to contribution page -
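Once floZ (or any evidence estimator) returns log-evidences with uncertainties for the two ringdown models, the Bayes factor follows by simple arithmetic. The numbers below are invented placeholders, not the poster's results:

```python
import numpy as np

# Illustrative log-evidences (and estimator uncertainties) for ringdown
# models with and without an overtone.
logZ_overtone, dlogZ_overtone = -102.3, 0.1
logZ_base, dlogZ_base = -105.1, 0.1

log_bayes = logZ_overtone - logZ_base              # ln(B) favoring the overtone
dlog_bayes = np.hypot(dlogZ_overtone, dlogZ_base)  # propagated uncertainty
```

Propagating the flow's numerical uncertainty on each log-evidence into ln(B), as above, is what lets the method report not just a Bayes factor but an error bar on it.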
Markus Michael Rau, Poster
The modeling of cosmological observables is becoming increasingly complex, and we need to rely on computationally costly computer models for scalable inference. I will present a current project on advancing emulation efforts to include functional inputs, such as selection functions, into the emulation. In particular, I will highlight opportunities to include machine learning models in the...
Go to contribution page -
João A. Gonçalves (LIP - IST), Poster
The phenomenon of jet quenching, a key signature of the Quark-Gluon Plasma (QGP) formed in Heavy-Ion (HI) collisions, provides a window of insight into the properties of this primordial liquid. In this study, we rigorously evaluate the discriminating power of Energy Flow Networks (EFNs), enhanced with substructure observables, in distinguishing between jets stemming from proton-proton (pp) and...
Go to contribution page -
Dr William Handley, Poster
Simulation-based inference is undergoing a renaissance in statistics and machine learning. With several packages implementing the state-of-the-art in expressive AI [mackelab/sbi] [undark-lab/swyft], it is now being effectively applied to a wide range of problems in the physical sciences, biology, and beyond.
Given the rapid pace of AI/ML, there is little expectation that the implementations...
Go to contribution page -
Heather Battey (Imperial College London), Poster
Consider a binary mixture model with one component standard normal and the other a completely specified heavy-tailed distribution with the same support. Gaussianity of the first component reflects a reduction of the raw data to a set of pivotal test statistics at each site (e.g. an energy level in a particle physics context). For a sample of independent and identically distributed values, the maximum likelihood...
Go to contribution page -
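The mathematical symbols in this abstract were lost in extraction; to make the setup concrete, here is a hypothetical reading with a standard Cauchy as the heavy-tailed component and the mixture weight estimated by maximum likelihood on a grid (purely illustrative, not the paper's model):

```python
import numpy as np

def mixture_loglike(eps, x):
    """Log-likelihood of (1 - eps) * N(0, 1) + eps * h, with h taken to be a
    standard Cauchy as a stand-in for the completely specified heavy tail."""
    phi = np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)
    h = 1.0 / (np.pi * (1.0 + x ** 2))
    return float(np.sum(np.log((1.0 - eps) * phi + eps * h)))

# Simulate data with a 10% heavy-tailed contamination, then recover the
# mixture weight by maximum likelihood over a grid of candidate values.
rng = np.random.default_rng(5)
x = np.where(rng.random(2000) < 0.1,
             rng.standard_cauchy(2000),
             rng.normal(0.0, 1.0, 2000))

grid = np.linspace(0.001, 0.5, 500)
eps_hat = grid[np.argmax([mixture_loglike(e, x) for e in grid])]
```

The interesting regime in the abstract's setting is when the mixture weight is small and sits near the boundary of the parameter space, where the usual asymptotics for the maximum likelihood estimator need care.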
Indranil Das (Imperial College London (GB)), Poster
-
Jonathon Mark Langford (Imperial College (GB))
-