-
Louis Lyons (Imperial College (GB))09/09/2024, 13:45
-
Eleni Tsaprazi09/09/2024, 14:00
-
Daniel Winterbottom (Imperial College (GB))09/09/2024, 14:25
-
Lucas Makinen (Imperial College London)09/09/2024, 15:00
-
Dr Niall Jeffrey (University College London)09/09/2024, 16:15
-
Jonathon Mark Langford (Imperial College (GB))09/09/2024, 16:55
-
10/09/2024, 09:00
-
Gregor Kasieczka (Hamburg University (DE))10/09/2024, 09:15
Machine learning and AI have quickly turned into indispensable tools for modern particle physics. They both greatly amplify the power of existing techniques - such as supercharging supervised classification - and enable qualitatively new ways of extracting information - such as anomaly detection and likelihood-free inference. Accordingly, the underlying statistical machinery needs to be...
Go to contribution page -
Jonas Spinner10/09/2024, 10:00Contributed Talk
Extracting scientific understanding from particle-physics experiments requires solving diverse learning problems with high precision and good data efficiency. We propose the Lorentz Geometric Algebra Transformer (L-GATr), a new multi-purpose architecture for high-energy physics. L-GATr represents high-energy data in a geometric algebra over four-dimensional space-time and is equivariant under...
Go to contribution page -
Ben Wandelt10/09/2024, 11:00
Cosmologists strive to uncover the mysteries of the origin, composition, evolution, and fate of the cosmos from all the information the sky has to offer: the cosmic microwave background, galaxy surveys, exploding stars, and reverberations of space-time caused by colliding black holes and neutron stars. I will discuss new ways to connect cosmological theory and simulation with these data sets....
Go to contribution page -
Maximilian Dax10/09/2024, 11:45
Gravitational waves (GWs) provide a unique window to the universe, enabling us to study mergers of black holes and/or neutron stars. In my talk, I will highlight how machine learning can address critical limitations in GW data analysis. I will present key innovations in this field, driven by unusually high requirements for accuracy, reliability and interpretability. Finally, I will discuss how...
Go to contribution page -
Jesse Thaler (MIT/IAIFI)10/09/2024, 14:00
The term "interpretability" encompasses various strategies to scrutinize the decisions made by machine learning algorithms. In this talk, I argue that interpretability, at least in the context of particle physics, should be considered as part of the broader goal of assessing systematic uncertainties. I provide examples from my own research on jet physics at the Large Hadron Collider, where...
Go to contribution page -
Lily Zhang10/09/2024, 14:45
In this talk, we present an overview of anomaly detection from a probabilistic machine learning perspective, with a focus on work emerging from the machine learning literature. First, we discuss empirical failures of deep generative models for anomaly detection and why they occur, as well as their implications for deep generative modeling and anomaly detection. Then, we discuss the endeavor of...
Go to contribution page -
Gaia Grosso10/09/2024, 16:00
Signal-agnostic data exploration could unveil very subtle statistical deviations of collider data from the expected Standard Model of particle physics. However, the extreme size, rate and complexity of the datasets generated at the Large Hadron Collider (LHC) pose unique challenges for data analysis. Making assumptions about what is relevant becomes unavoidable to scale the information down to...
Go to contribution page -
Thea Aarrestad (ETH Zurich (CH))10/09/2024, 16:45
Anomaly detection has emerged as a promising technique for identifying subtle New Physics signals amidst a dominant Standard Model background. Due to the novelty of these techniques, they are often proposed and demonstrated on toy datasets that mimic real LHC data before being deployed in actual experiments. In this talk, we will discuss the challenges encountered during the transition from...
Go to contribution page -
Andre Joshua Scaffidi10/09/2024, 17:10Contributed Talk
This talk presents a novel approach to dark matter direct detection using anomaly-aware machine learning techniques in the DARWIN next-generation dark matter direct detection experiment. I will introduce a semi-unsupervised deep learning pipeline that falls under the umbrella of generalized Simulation-Based Inference (SBI), an approach that allows one to effectively learn likelihoods straight...
Go to contribution page -
Joshua Villarreal10/09/2024, 17:35Contributed Talk
The statistical treatment of sterile neutrino searches suffers from the fact that Wilks’ theorem, a beneficial simplifying assumption, does not hold across all regions of parameter space. The alternative, the Feldman-Cousins algorithm, suffers from expensive computational run times that prohibit its application to many-experiment global fits. This contribution introduces a deep...
Go to contribution page -
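The gap this abstract describes can be illustrated with a small Monte Carlo check (a sketch of mine, not the contribution's method): in a regular Gaussian problem Wilks' theorem holds exactly, and toy experiments recover the chi-squared quantiles it promises; where its conditions fail, this kind of toy-based calibration is exactly what makes Feldman-Cousins expensive.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Testing mu = 0 against a free mean for n Gaussian observations with
# known sigma = 1: here -2*Delta(lnL) = n * xbar^2 exactly, and Wilks'
# theorem says it follows a chi2 distribution with 1 degree of freedom.
n, n_toys = 20, 5000
xbar = rng.normal(0.0, 1.0 / np.sqrt(n), size=n_toys)  # sampling distribution of the mean
q = n * xbar**2                                        # -2 log likelihood ratio per toy

emp = np.quantile(q, 0.95)          # empirical 95% quantile from toys
pred = stats.chi2.ppf(0.95, df=1)   # Wilks prediction, ~3.84
print(emp, pred)
```

When the asymptotic assumptions break (parameter boundaries, low counts), the empirical and predicted quantiles separate, and intervals must be calibrated from the toys themselves.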
Nina Elmer10/09/2024, 18:00Poster
Estimating uncertainties is a fundamental aspect of every physics problem; no measurement or calculation comes without uncertainties. Hence it is crucial to consider their effect when applying neural networks to problems in physics. I will present our work on amplitude regression, using loop amplitudes from LHC processes as an example, to examine the impact of different uncertainties on the...
Go to contribution page -
Samuele Grossi (Università degli studi di Genova & INFN sezione di Genova)10/09/2024, 18:01Poster
I will present and discuss several proposed metrics, based on integral probability measures, for the evaluation of generative models (and, more generally, for the comparison of different generators). Some of the metrics are particularly efficient to compute in parallel and show good performance. I will first compare the metrics on toy multivariate/multimodal distributions, and then focus...
Go to contribution page -
Stephen Watts10/09/2024, 18:02Poster
The performance of machine learning classification algorithms is evaluated by estimating metrics, often from the confusion matrix, using training data and cross-validation. However, these do not prove that the best possible performance has been achieved. Fundamental limits to error rates can be estimated using information distance measures. To this end, the confusion matrix has been...
Go to contribution page -
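As a toy illustration of such fundamental limits (my example, not the poster's construction): for two overlapping 1D Gaussian classes the Bayes error is known analytically, and the Bhattacharyya coefficient, one common information distance measure, brackets it from both sides.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Two equiprobable 1D Gaussian classes, N(0,1) and N(2,1).
mu0, mu1, n = 0.0, 2.0, 100_000
x0 = rng.normal(mu0, 1.0, n)
x1 = rng.normal(mu1, 1.0, n)

# The Bayes-optimal classifier thresholds at the midpoint; its empirical
# error rate is the best any classifier can achieve on this problem.
thr = 0.5 * (mu0 + mu1)
err = 0.5 * np.mean(x0 > thr) + 0.5 * np.mean(x1 < thr)

# Bhattacharyya bounds on the Bayes error P_e for equal priors:
#   0.5 * (1 - sqrt(1 - BC^2)) <= P_e <= 0.5 * BC,
# with BC = exp(-(mu1 - mu0)^2 / 8) for equal-variance Gaussians.
bc = np.exp(-((mu1 - mu0) ** 2) / 8.0)
upper = 0.5 * bc
lower = 0.5 * (1.0 - np.sqrt(1.0 - bc**2))
exact = norm.cdf(-(mu1 - mu0) / 2.0)  # analytic Bayes error
print(lower, exact, err, upper)
```

No classifier, however trained, can beat the lower end of this bracket; a measured error rate near it signals that further tuning is pointless.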
Emanuel Lorenz Pfeffer (KIT - Karlsruhe Institute of Technology (DE))10/09/2024, 18:03
Data analyses in the high-energy particle physics (HEP) community increasingly exploit advanced multivariate methods to separate signal from background processes. In this talk, a maximally unbiased, in-depth comparison of the graph neural network (GNN) architecture, which is of increasing popularity in the HEP community, with the already well-established technology of fully connected...
Go to contribution page -
Tom Runting (Imperial College (GB))10/09/2024, 18:04Poster
We present a method to accelerate Effective Field Theory reinterpretations using interpolated likelihoods. By employing Radial Basis Functions for interpolation and Gaussian Processes to strategically select interpolation points, we show that we can reduce the computational burden while maintaining accuracy. We apply this in the context of the Combined Higgs Boson measurement at CMS, a complex...
Go to contribution page -
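The interpolation idea can be sketched in a few lines (assumptions mine: a toy quadratic negative log-likelihood stands in for the expensive combined-measurement likelihood, and random anchors replace the Gaussian-process point selection):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)

# Toy stand-in for an expensive likelihood: a smooth 2D quadratic NLL.
def nll(theta):
    return 0.5 * (theta[:, 0] ** 2 + 2.0 * theta[:, 1] ** 2 + theta[:, 0] * theta[:, 1])

# Evaluate the "expensive" function once at a modest set of anchor points
# and fit a radial basis function interpolant through them.
anchors = rng.uniform(-2.0, 2.0, size=(200, 2))
surrogate = RBFInterpolator(anchors, nll(anchors), kernel="thin_plate_spline")

# The surrogate can then be queried cheaply anywhere inside the hull.
queries = rng.uniform(-1.5, 1.5, size=(500, 2))
max_err = np.max(np.abs(surrogate(queries) - nll(queries)))
print(max_err)
```

In the setting of the poster, each anchor would be one genuine likelihood evaluation, and the anchor placement itself would be guided by a Gaussian process rather than drawn at random.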
Dr Marco Letizia (University of Genoa and INFN)10/09/2024, 18:05Poster
Traditional statistical methods are often not adequate to perform inclusive and signal-agnostic searches at modern collider experiments delivering large amounts of multivariate data. Machine learning provides a set of tools to enhance analyses in large scale regimes, but the adoption of these methodologies comes with new challenges, such as the lack of efficiency and robustness, and potential...
Go to contribution page -
Joseph Carmignani (University of Liverpool (GB))10/09/2024, 18:06Poster
The Multi-disciplinary Use Cases for Convergent Approaches to AI Explainability (MUCCA) project is pioneering efforts to enhance the transparency and interpretability of AI algorithms in complex scientific fields. This study focuses on the application of Explainable AI (XAI) in high-energy physics (HEP), utilising a range of machine learning (ML) methodologies, from classical boosted decision...
Go to contribution page -
Henry Aldridge (UCL)10/09/2024, 18:07Poster
Bayesian model selection provides a powerful framework for objectively comparing models directly from observed data, without reference to ground truth data. However, Bayesian model selection requires the computation of the marginal likelihood (model evidence), which is computationally challenging, prohibiting its use in many high-dimensional Bayesian inverse problems. With Bayesian imaging...
Go to contribution page -
Matt Price10/09/2024, 18:08Poster
Scattering transforms are a new type of summary statistics recently developed for the study of highly non-Gaussian processes, which have been shown to be very promising for astrophysical studies. In particular, they allow one to build generative models of complex non-linear fields from a limited amount of data, and have also been used as the basis of new statistical component separation...
Go to contribution page -
Giovanni De Crescenzo (University of Heidelberg)10/09/2024, 18:09Poster
We present an application of Simulation-Based Inference (SBI) in collider physics, aiming to constrain anomalous interactions beyond the Standard Model (SM). This is achieved by leveraging Neural Networks to learn otherwise intractable likelihood ratios. We explore methods to incorporate the underlying physics structure into the likelihood estimation process. Specifically, we compare two...
Go to contribution page -
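The core trick behind learning otherwise intractable likelihood ratios can be checked on a solvable toy (my sketch, assuming nothing about the contribution's networks): a classifier trained to separate samples from two hypotheses learns their log likelihood ratio in its logits.

```python
import numpy as np

rng = np.random.default_rng(5)

# Samples from two "hypotheses": p0 = N(0,1) and p1 = N(1,1).
# Analytically, log p1(x)/p0(x) = x - 0.5, so a logistic-regression
# classifier should learn logits close to 1*x - 0.5.
n = 20_000
x = np.concatenate([rng.normal(0.0, 1.0, n), rng.normal(1.0, 1.0, n)])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Fit f(x) = w*x + b by plain gradient descent on the cross-entropy loss.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(1500):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= lr * np.mean((p - y) * x)
    b -= lr * np.mean(p - y)
print(w, b)  # should land near the analytic coefficients (1, -0.5)
```

Replacing the linear model with a neural network, and the Gaussians with simulator output, gives the classifier-based ratio estimation used across SBI.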
Kiyam Lin10/09/2024, 18:10Poster
The standard approach to inference from cosmic large-scale structure data employs summary statistics that are compared to analytic models in a Gaussian likelihood with pre-computed covariance. To overcome many of the idealising assumptions that go into this type of explicit likelihood inference, and to take advantage of the high-fidelity wide field data that Euclid and LSST will provide, we...
Go to contribution page -
Harry Desmond (University of Portsmouth)10/09/2024, 18:11Poster
A key challenge in the field of AI is to make machine-assisted discovery interpretable, enabling it not only to uncover correlations but also to improve our physical understanding of the world. A nascent branch of machine learning – Symbolic Regression (SR) – aims to discover the optimal functional representations of datasets, producing perfectly interpretable outputs (equations) by...
Go to contribution page -
Lars Stietz (Hamburg University of Technology (DE))10/09/2024, 18:12Poster
Precision measurements at the Large Hadron Collider (LHC), such as the measurement of the top quark mass, are essential for advancing our understanding of fundamental particle physics. Profile likelihood fits have become the standard method to extract physical quantities and parameters from the measurements. These fits incorporate nuisance parameters to include systematic uncertainties. The...
Go to contribution page -
Benjamin Boyd (University of Cambridge)10/09/2024, 18:13Poster
Type Ia supernovae (SNe Ia) are thermonuclear exploding stars that can be used to put constraints on the nature of our universe. One challenge with population analyses of SNe Ia is Malmquist bias, where we preferentially observe the brighter SNe due to limitations of our telescopes. If untreated, this bias can propagate through to our posteriors on cosmological parameters. In this work, we...
Go to contribution page -
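Malmquist bias is easy to reproduce in a toy simulation (the numbers below are illustrative, not the poster's): draw standardizable candles with intrinsic scatter, impose a magnitude limit, and the surviving sample comes out systematically too bright.

```python
import numpy as np

rng = np.random.default_rng(3)

# Absolute magnitudes M with intrinsic scatter (lower M = brighter);
# apparent magnitude m = M + distance modulus; survey keeps m < limit.
M_mean, scatter, n = -19.3, 0.3, 200_000
M = rng.normal(M_mean, scatter, n)
mu_dist = rng.uniform(54.0, 58.0, n)   # distance moduli of the population
m = M + mu_dist

detected = m < 36.0                    # magnitude-limited selection
bias = M[detected].mean() - M_mean
print(bias)  # negative: the detected sample is brighter than the population
```

Near the survey limit only the bright tail of the scatter survives, so any population analysis that ignores the selection inherits this shift in its cosmological posteriors.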
Deaglan Bartlett (Institut d'Astrophysique de Paris)10/09/2024, 18:14Poster
Neural networks are increasingly used to emulate complex simulations due to their speed and efficiency. Unfortunately, many ML algorithms, including (deep) neural networks, lack interpretability. If machines predict something humans do not understand, how can we check (and trust) the results? Even if we could identify potential mistakes, current methods lack effective mechanisms to correct...
Go to contribution page -
Monika Machalová (Palacký University Olomouc, Czech Republic)10/09/2024, 18:15Poster
The aim of this work is to solve the problem of hadronic jet substructure recognition using classical subjettiness variables available in the parameterized detector simulation package, Delphes. Jets produced in simulated proton-proton collisions are identified as either originating from the decay of a top quark or a W boson and are used to reconstruct the mass of a hypothetical scalar...
Go to contribution page -
Alessio Spurio Mancini (Royal Holloway, University of London)10/09/2024, 18:16Poster
A new generation of astronomical surveys, such as the recently launched European Space Agency’s Euclid mission, will soon deliver exquisite datasets with unparalleled amounts of cosmological information, poised to change our understanding of the Universe. However, analysing these datasets presents unprecedented statistical challenges. Multiple systematic effects need to be carefully accounted...
Go to contribution page -
Kai Lehman (LMU Munich)10/09/2024, 18:17Poster
How much cosmological information can we reliably extract from existing and upcoming large-scale structure observations? Many summary statistics fall short in describing the non-Gaussian nature of the late-time Universe and modelling uncertainties from baryonic physics. Using simulation based inference (SBI) with automatic data-compression from graph neural networks, we learn optimal summary...
Go to contribution page -
Sofia Palacios Schweitzer (Heidelberg), Tilman Plehn (Heidelberg University)10/09/2024, 18:18Poster
Many physics analyses at the LHC rely on algorithms to remove detector effects, commonly known as unfolding. Whereas classical methods only work with binned, one-dimensional data, machine learning promises to overcome both limitations. Using a generative unfolding pipeline, we show how it can be built into an existing LHC analysis designed to measure the top mass. We discuss the model-dependence...
Go to contribution page -
Noam Levi (Tel Aviv University)10/09/2024, 18:19Poster
We introduce Noise Injection Node Regularization (NINR), a method that injects structured noise into Deep Neural Networks (DNNs) during the training stage, resulting in an emergent regularizing effect. We present both theoretical and empirical evidence demonstrating substantial improvements in robustness against various test data perturbations for feed-forward DNNs trained under NINR. The...
Go to contribution page -
Yuval Yitzhak Frid (Tel Aviv University (IL))10/09/2024, 18:20Poster
Background modeling is one of the critical elements of searches for new physics at experiments at the Large Hadron Collider. In many searches, backgrounds are modeled using analytic functional forms. Finding an acceptable function can be complicated, inefficient and time-consuming. This poster presents a novel approach to estimating the underlying PDF of a 1D dataset of samples using Log...
Go to contribution page -
Nathan Huetsch (Heidelberg), Tilman Plehn (Heidelberg University)10/09/2024, 18:21Poster
The matrix element method is the LHC inference method of choice for limited statistics, as it allows for optimal use of the available information. We present a dedicated machine learning framework based on efficient phase-space integration and a learned acceptance and transfer function. It is built on a choice of INN and diffusion networks, and a transformer to solve the jet combinatorics. We showcase...
Go to contribution page -
Tilman Plehn (Heidelberg University), Xavier Marino (Heidelberg)10/09/2024, 18:22Poster
Recent innovations from machine learning allow for data unfolding without binning and including correlations across many dimensions. We describe a set of known, upgraded, and new methods for ML-based unfolding. The performance of these approaches is evaluated on the same two datasets. We find that all techniques are capable of accurately reproducing the particle-level spectra across complex...
Go to contribution page -
Pierre Baldi11/09/2024, 09:00
I plan to touch on several theoretical topics (overparameterization, neural balance, attention and transformers) and their applications in physics, and to end with a physics-inspired proposal for addressing some of the societal issues raised by AI.
Go to contribution page -
Jeyan Thiyagalingam (Rutherford Appleton Laboratory, Science and Technology Facilities Council)11/09/2024, 09:45
-
Improved Weak Lensing Photometric Redshift Calibration via StratLearn and Hierarchical Modeling - Maximilian Autenrieth (Imperial College London)11/09/2024, 10:10Contributed Talk
Discrepancies between cosmological parameter estimates from cosmic shear surveys and from recent Planck cosmic microwave background measurements challenge the ability of the highly successful ΛCDM model to describe the nature of the Universe. To rule out systematic biases in cosmic shear survey analyses, accurate redshift calibration within tomographic bins is key. In this work, we improve...
Go to contribution page -
Aishik Ghosh (University of California Irvine (US))11/09/2024, 11:00
A powerful class of statistical inference methods is starting to be used across fields, leveraging the power of machine learning (ML) to perform inference directly from high-dimensional data. They can be used, for instance, to estimate fundamental physics parameters from data collected in high-energy physics experiments, or from cosmological/astrophysical observations, and work with both...
Go to contribution page -
Vinicius Mikuni (LBL)11/09/2024, 11:45
Correcting experimental measurements for detector effects, or unfolding, is a standard technique used at the LHC to report multi-differential cross section measurements. These techniques rely on binned data and are limited to low dimensional observables. In this talk, I will cover recent ideas to extend standard methods of unfolding using machine learning, enabling the measurements of ...
Go to contribution page -
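For contrast with the ML extensions discussed in the talk, the classical binned baseline, iterative Bayesian (d'Agostini) unfolding, fits in a few lines (toy response matrix and spectrum of my choosing):

```python
import numpy as np

# A falling "true" spectrum in 10 bins, normalized to unit area.
n_bins = 10
truth = np.exp(-0.3 * np.arange(n_bins))
truth /= truth.sum()

# Response matrix R[j, i] = P(reco bin j | true bin i): mostly diagonal,
# with symmetric migration into neighbouring bins; columns sum to one.
R = 0.7 * np.eye(n_bins) + 0.15 * np.eye(n_bins, k=1) + 0.15 * np.eye(n_bins, k=-1)
R /= R.sum(axis=0, keepdims=True)

observed = R @ truth                   # noiseless smeared measurement

# d'Agostini iterations starting from a flat prior.
est = np.full(n_bins, 1.0 / n_bins)
for _ in range(1000):
    folded = R @ est
    est = est * (R.T @ (observed / folded))

print(np.abs(est - truth).max())       # the estimate converges to the truth
```

This baseline only works for binned, low-dimensional observables with a fixed response matrix; the ML methods in the talk aim to lift exactly those restrictions.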
Tilman Plehn (Heidelberg University)11/09/2024, 14:00
Modern machine learning is transforming LHC physics, and unfolding has long been one of our goals; only modern networks allow us to do it meaningfully. Not only does it make analyses with a wide range of theory hypotheses more efficient, it also allows the LHC collaborations to publish their data. I will show how generative networks can be used for this purpose,...
Go to contribution page -
Dr William Handley11/09/2024, 14:25Contributed Talk
Simulation-based inference is undergoing a renaissance in statistics and machine learning. With several packages implementing the state-of-the-art in expressive AI [mackelab/sbi] [undark-lab/swyft], it is now being effectively applied to a wide range of problems in the physical sciences, biology, and beyond.
Given the rapid pace of AI/ML, there is little expectation that the implementations...
Go to contribution page -
Dr Purvasha Chakravarti (UCL)11/09/2024, 14:50
New physics searches are usually done by training a supervised classifier to separate a signal model from the known Standard Model physics (also called the background model). However, even when the signal model is correct, systematic errors in the background model can influence supervised classifiers and might adversely affect the signal detection procedure. To tackle this problem, one...
Go to contribution page -
Ramon Winterhalder (UCLouvain)11/09/2024, 15:15
In recent years, deep generative models (DGMs) have become essential for various steps in the LHC simulation and analysis chain. While there are many types of DGMs, no Swiss-army-knife architecture exists that can effectively handle speed, precision, and control simultaneously. In this talk, I will explore different DGMs, outline their strengths and weaknesses, and illustrate typical...
Go to contribution page -
Oliver Rieger (Nikhef National institute for subatomic physics (NL))11/09/2024, 16:30Contributed Talk
In social sciences, fairness in Machine Learning (ML) comprises the attempt to correct or eliminate algorithmic bias of gender, ethnicity, or sexual orientation from ML models. Many high-energy physics (HEP) analyses that search for a resonant decay of a particle employ mass-decorrelated event classifiers, as the particle mass is often used to perform the final signal extraction fit. These...
Go to contribution page -
Kyle Stuart Cranmer (University of Wisconsin Madison (US))11/09/2024, 16:55
Systematic uncertainties usually have a negative connotation since they reduce the sensitivity of an experiment. However, the practical and conceptual challenges posed by various types of systematic uncertainty also have a long track record of motivating new ideas. I will outline some examples from my own career where systematics were my muse for innovation.
Go to contribution page -
Artur Monsch (KIT - Karlsruhe Institute of Technology (DE))11/09/2024, 17:40Contributed Talk
We demonstrate a neural network training procedure capable of accounting for the effects of systematic variations of the utilized data model in the training process, and describe its extension towards neural network multiclass classification. We show the importance of adjusting backpropagation to be able to handle derivatives of histogram bins during training and add an interpretation of the...
Go to contribution page -
Alexander Held (University of Wisconsin Madison (US))12/09/2024, 09:00
The field of high energy physics (HEP) benefits immensely from sophisticated simulators and data-driven techniques to perform measurements of nature at increasingly higher precision. Using the example of HEP, I will describe how and where uncertainties are incorporated into data analysis to address model misspecification concerns. My focus will be how machine learning (ML), in the variety of...
Go to contribution page -
Alicja Polanska (University College London)12/09/2024, 09:45Contributed Talk
Computing the Bayesian evidence is an important task in Bayesian model selection, providing a principled quantitative way to compare models. In this work, we introduce normalizing flows to improve the learned harmonic mean estimator of the Bayesian evidence. This recently presented estimator leverages machine learning to address the exploding variance problem associated with the original...
Go to contribution page -
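The estimator's mechanics can be checked on a toy with a known evidence (my example; a hand-picked concentrated Gaussian plays the role of the learned normalizing flow):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

# Conjugate toy with analytic answers: prior N(0,1), likelihood N(x=0 | theta, 1),
# so the evidence is Z = N(0 | 0, sqrt(2)) and the posterior is Gaussian with
# mean 0 and variance 1/2.
prior = norm(0.0, 1.0)
def likelihood(theta):
    return norm.pdf(0.0, loc=theta, scale=1.0)
Z_true = norm.pdf(0.0, loc=0.0, scale=np.sqrt(2.0))
posterior = norm(0.0, np.sqrt(0.5))

# Re-targeted harmonic mean: 1/Z = E_post[ phi(theta) / (L(theta) pi(theta)) ]
# for any normalized phi; phi must be more concentrated than the posterior to
# keep the variance finite (the job done by a learned flow in the real method).
phi = norm(0.0, 0.4)
samples = posterior.rvs(size=100_000, random_state=rng)
inv_Z = np.mean(phi.pdf(samples) / (likelihood(samples) * prior.pdf(samples)))
Z_est = 1.0 / inv_Z
print(Z_est, Z_true)
```

With a badly matched phi (wider than the posterior) the same estimator exhibits the exploding variance that motivates learning phi from the posterior samples themselves.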
Mikael Kuusela (Carnegie Mellon University (US))12/09/2024, 10:45
Many model-independent search methods can be understood as performing a high-dimensional two-sample test. The test is typically performed by training a neural network over the high-dimensional feature space. If the test indicates a significant deviation from the background, it would be desirable to be able to characterize the "signal" the network may have found. In this talk, I will describe...
Go to contribution page -
Hiranya Peiris12/09/2024, 11:15
I will present a perspective that explainability — model interrogation and validation rooted in domain knowledge — is a more important desideratum in fundamental science than interpretability in its strict meaning. In order to illustrate this point, I will draw on our recent work on pop-cosmos: a forward modelling framework for photometric galaxy survey data, where galaxies are modelled as...
Go to contribution page -
Philipp Eller (Wisconsin)12/09/2024, 11:45
Astrophysical tau neutrinos were predicted long ago, but only recently has IceCube been able to identify them at the 5 sigma significance level. The key to this discovery was using machine learning methods to analyse the data. In this talk, I will first give a brief overview of the analysis and results before we dive deeper into the neural nets. We will try to understand how they work...
Go to contribution page -
Tobias Golling (Universite de Geneve (CH))12/09/2024, 12:10
“If you can simulate it, you can learn it.” The concept of conditional generation is powerful and versatile. The heavy lifting is distributed over a generator of a latent distribution of interest and an embedding network to encode the information contained in the data. Concrete applications to the reconstruction of neutrino kinematics in LHC collisions and associated interpretability...
Go to contribution page -
Gilles Louppe12/09/2024, 14:00
-
Mikael Kuusela (Carnegie Mellon University (US))12/09/2024, 14:45
-
Luisa Lucie-Smith12/09/2024, 16:00
-
Lukas Alexander Heinrich (Technische Universitat Munchen (DE))12/09/2024, 16:45
-
12/09/2024, 17:30
-
Víctor Bresó Pla (University of Heidelberg)Poster
We present a detailed comparison of multiple interpolation methods to characterize the amplitude distribution of several Higgs boson production modes at the LHC. Apart from standard interpolation techniques, we develop a new approach based on the use of the Lorentz Geometric Algebra Transformer (L-GATr). L-GATr is an equivariant neural network that is able to encode Lorentz and permutation...
Go to contribution page -
Rahul SrinivasanPoster
Using floZ, an improved Bayesian evidence (and its numerical uncertainty) estimation method based on normalizing flows, we estimate the Bayes factor in favor of gravitational wave overtones in the ringdown of the first detection. We find good agreement with nested sampling. Provided representative samples from the target posterior are available, our method is more robust to posterior...
Go to contribution page -
Deaglan Bartlett (Institut d'Astrophysique de Paris)Poster
Neural networks are increasingly used to emulate complex simulations due to their speed and efficiency. Unfortunately, many ML algorithms, including (deep) neural networks, lack interpretability. If machines predict something humans do not understand, how can we check (and trust) the results? Even if we could identify potential mistakes, current methods lack effective mechanisms to correct...
Go to contribution page -
Artur Monsch (KIT - Karlsruhe Institute of Technology (DE))Contributed Talk
We demonstrate a neural network training capable of accounting for the effects of systematic variations of the utilized data model in the training process and describe its extension towards neural network multiclass classification. We show the importance of adjusting backpropagation to be able to handle derivatives of histogram bins during training and add an interpretation of the...
Go to contribution page -
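The point about derivatives of histogram bins can be made concrete with a toy sketch (hypothetical, not necessarily the authors' exact construction): a hard histogram is piecewise constant in any parameter that morphs the spectrum, so its gradient is useless for backpropagation; replacing hard bin assignment with Gaussian kernels makes the bin counts smooth functions of that parameter.

```python
import numpy as np

# Toy illustration: hard vs. kernel-smoothed ("soft") histogram bins under a
# shift parameter. The soft counts have a well-defined, informative gradient;
# the hard counts jump by whole events and their finite differences are
# either zero or enormous.
rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, 10_000)
edges = np.linspace(-3.0, 3.0, 13)
centers = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]

def soft_hist(data, shift):
    """Differentiable bin counts: Gaussian kernel weight per (event, bin)."""
    z = (data + shift)[:, None] - centers[None, :]
    return np.exp(-0.5 * (z / (0.6 * width)) ** 2).sum(axis=0)

def hard_hist(shift):
    return np.histogram(x + shift, bins=edges)[0].astype(float)

# Central finite differences in the shift parameter.
eps = 1e-4
grad_soft = (soft_hist(x, eps) - soft_hist(x, -eps)) / (2 * eps)
grad_hard = (hard_hist(eps) - hard_hist(-eps)) / (2 * eps)
print("max |soft-histogram gradient|:", float(np.max(np.abs(grad_soft))))
```

The kernel width (here 0.6 of a bin width) trades smoothness of the gradient against fidelity to the hard counts; an autodiff framework would differentiate `soft_hist` directly instead of using finite differences.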
Dr Marco Letizia (University of Genoa and INFN)Poster
Traditional statistical methods are often not adequate to perform inclusive and signal-agnostic searches at modern collider experiments delivering large amounts of multivariate data. Machine learning provides a set of tools to enhance analyses in large scale regimes, but the adoption of these methodologies comes with new challenges, such as the lack of efficiency and robustness, and potential...
Go to contribution page -
Markus Michael RauPoster
The modeling of cosmological observables becomes increasingly complex, and we need to rely on computationally costly computer models for scalable inference. I will present a current project advancing emulation efforts to include functional inputs, such as selection functions, into the emulation. In particular, I will highlight opportunities to include Machine Learning models into the...
Go to contribution page -
Harry Desmond (University of Portsmouth)Poster
A key challenge in the field of AI is to make machine-assisted discovery interpretable, enabling it not only to uncover correlations but also to improve our physical understanding of the world. A nascent branch of machine learning -- Symbolic Regression (SR) -- aims to discover the optimal functional representations of datasets, producing perfectly interpretable outputs (equations) by...
Go to contribution page -
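A minimal flavor of symbolic regression can be given in a few lines (a toy sketch, not the tools discussed in the poster): enumerate small symbolic expressions built from a basis of candidate terms, fit coefficients by least squares, and score each candidate by error plus a complexity penalty, so the output is an interpretable equation.

```python
import itertools
import numpy as np

# Toy symbolic regression: search over sums of basis functions, penalizing
# complexity so the simplest accurate expression wins.
rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 200)
y = 1.5 * x**2 - 0.5 * x + rng.normal(0, 0.05, x.size)  # hidden law + noise

basis = {
    "1": np.ones_like(x),
    "x": x,
    "x^2": x**2,
    "sin(x)": np.sin(x),
    "exp(x)": np.exp(x),
}

best = None
for k in (1, 2, 3):  # expressions built from up to 3 basis terms
    for names in itertools.combinations(basis, k):
        A = np.column_stack([basis[n] for n in names])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        mse = np.mean((A @ coef - y) ** 2)
        score = mse + 0.01 * k  # complexity penalty keeps it interpretable
        if best is None or score < best[0]:
            best = (score, names, coef)

score, names, coef = best
expr = " + ".join(f"{c:.2f}*{n}" for c, n in zip(coef, names))
print("best expression:", expr)
```

Real SR packages replace the brute-force enumeration with genetic programming or similar searches over a much richer expression grammar; the error-versus-complexity trade-off shown here is the common core.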
Oliver Rieger (Nikhef National institute for subatomic physics (NL))Contributed Talk
In social sciences, fairness in Machine Learning (ML) comprises the attempt to correct or eliminate algorithmic bias of gender, ethnicity, or sexual orientation from ML models. Many high-energy physics (HEP) analyses that search for a resonant decay of a particle employ mass-decorrelated event classifiers, as the particle mass is often used to perform the final signal extraction fit. These...
Go to contribution page -
Josh VillarrealContributed Talk
The statistical treatment of sterile neutrino searches suffers from the fact that Wilks' theorem, a beneficial simplifying assumption, does not hold across all regions of parameter space. The alternative, the Feldman-Cousins algorithm, suffers from expensive computational runtimes that prohibit its application to many-experiment global fits. This contribution introduces a deep learning-based...
Go to contribution page -
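The failure of Wilks' theorem is easy to demonstrate with a toy Monte Carlo (an illustrative sketch, not the sterile-neutrino setup of the talk): with a physical boundary on the parameter, the profile-likelihood-ratio test statistic is no longer chi-squared distributed, so asymptotic p-values are wrong and toys (or Feldman-Cousins) are needed.

```python
import numpy as np

# Toy check of Wilks' theorem: measure x ~ N(mu, 1) with the physical
# constraint mu >= 0, and test mu = 0. At the boundary the statistic
# q = -2 ln L(0)/L(mu_hat) follows a half-chi2 mixture, not chi2(1).
rng = np.random.default_rng(1)
n_toys = 200_000
x = rng.normal(0.0, 1.0, n_toys)     # toys generated at mu_true = 0
mu_hat = np.maximum(x, 0.0)          # MLE respecting the boundary
q = x**2 - (x - mu_hat) ** 2         # -2 delta log-likelihood (Gaussian case)

crit_chi2_90 = 2.706                 # 90% quantile of chi2(1)
coverage = np.mean(q < crit_chi2_90) # Wilks would predict 0.90
print(f"P(q < 2.706) = {coverage:.3f}  (Wilks predicts 0.900)")
```

Half of the toys sit at q = 0 (those with x below the boundary), so the chi2(1) critical value over-covers; deep-learning surrogates aim to deliver such toy-based corrections at a fraction of the Feldman-Cousins cost.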
Carolina Cuesta LazaroContributed Talk
Machine learning applications in cosmological galaxy surveys face challenges due to our limited understanding of the galaxy distribution within the dark matter cosmic web. This issue reflects a broader problem of model misspecification in simulation-based inference. In astrophysics, fully simulating the universe requires solving for gravity and the evolution of stars, galaxies, and black holes...
Go to contribution page -
Jason McEwenPoster
Scattering transforms are a new type of summary statistics recently developed for the study of highly non-Gaussian processes, which have been shown to be very promising for astrophysical studies. In particular, they allow one to build generative models of complex non-linear fields from a limited amount of data, and have also been used as the basis of new statistical component separation...
Go to contribution page -
Matthew Price (Mullard Space Science Laboratory, University College London)Poster
Scattering transforms are a new type of summary statistics recently developed for the study of highly non-Gaussian processes, which have been shown to be very promising for astrophysical studies. In particular, they allow one to build generative models of complex non-linear fields from a limited amount of data, and have also been used as the basis of new statistical component separation...
Go to contribution page -
Riccardo Torre (INFN e Universita Genova (IT))Poster
I will present and discuss several proposed metrics, based on integral probability measures, for the evaluation of generative models (and, more generally, for the comparison of different generators). Some of the metrics are particularly efficient to compute in parallel and show good performance. I will first compare the metrics on toy multivariate/multimodal distributions, and then focus...
Go to contribution page -
Samuele Grossi (Università degli studi di Genova & INFN sezione di Genova)Poster
I will present and discuss several proposed metrics, based on integral probability measures, for the evaluation of generative models (and, more generally, for the comparison of different generators). Some of the metrics are particularly efficient to compute in parallel and show good performance. I will first compare the metrics on toy multivariate/multimodal distributions, and then focus...
Go to contribution page -
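The simplest integral probability metric to sketch is the one-dimensional Wasserstein-1 distance (a toy stand-in, not the specific metrics of these posters): for equal-size samples it reduces to the mean absolute difference of the sorted values, which is trivial to batch and parallelize.

```python
import numpy as np

# 1D Wasserstein-1 distance as a generator-evaluation score.
def wasserstein_1d(a, b):
    """Empirical W1 between equal-size samples: mean |sorted(a) - sorted(b)|."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

rng = np.random.default_rng(2)
reference = rng.normal(0.0, 1.0, 50_000)         # "true" distribution
generator_good = rng.normal(0.0, 1.0, 50_000)    # well-trained generator
generator_biased = rng.normal(0.3, 1.0, 50_000)  # generator with shifted mean

d_good = wasserstein_1d(reference, generator_good)
d_biased = wasserstein_1d(reference, generator_biased)
print(f"W1(good) = {d_good:.3f}, W1(biased) = {d_biased:.3f}")
```

For multivariate data one common extension is the sliced Wasserstein distance, averaging this 1D computation over random projections; each projection is independent, which is what makes such metrics easy to parallelize.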
Emanuel Lorenz Pfeffer (KIT - Karlsruhe Institute of Technology (DE))Poster
Data analyses in the high-energy particle physics (HEP) community increasingly exploit advanced multivariate methods to separate signal from background processes. In this talk, a maximally unbiased, in-depth comparison of the graph neural network (GNN) architecture, which is of growing popularity in the HEP community, with the already well-established technology of fully connected...
Go to contribution page -
Sofia Palacios Schweitzer (ITP, University Heidelberg)Poster
Many physics analyses at the LHC rely on algorithms to remove detector effects, commonly known as unfolding. Whereas classical methods only work with binned, one-dimensional data, Machine Learning promises to overcome both limitations. Using a generative unfolding pipeline, we show how it can be built into an existing LHC analysis, designed to measure the top mass. We discuss the model-dependence...
Go to contribution page -
Dr Maximilian Autenrieth (Imperial College London)Contributed Talk
Discrepancies between cosmological parameter estimates from cosmic shear surveys and from recent Planck cosmic microwave background measurements challenge the ability of the highly successful $\Lambda$CDM model to describe the nature of the Universe. To rule out systematic biases in cosmic shear survey analyses, accurate redshift calibration within tomographic bins is key. In this work, we...
Go to contribution page -
João A. Gonçalves (LIP - IST)Poster
The phenomenon of Jet Quenching, a key signature of the Quark-Gluon Plasma (QGP) formed in Heavy-Ion (HI) collisions, provides a window of insight into the properties of this primordial liquid. In this study, we rigorously evaluate the discriminating power of Energy Flow Networks (EFNs), enhanced with substructure observables, in distinguishing between jets stemming from proton-proton (pp) and...
Go to contribution page -
Joseph Carmignani (University of Liverpool (GB))Poster
The Multi-disciplinary Use Cases for Convergent new Approaches to AI explainability (MUCCA) project is pioneering efforts to enhance the transparency and interpretability of AI algorithms in complex scientific endeavours. The presented study focuses on the role of Explainable AI (xAI) in the domain of high-energy physics (HEP). Approaches based on Machine Learning (ML) methodologies, from...
Go to contribution page -
Tom Runting (Imperial College (GB))Poster
We present a method to accelerate Effective Field Theory reinterpretations using interpolated likelihoods. By employing Radial Basis Functions for interpolation and Gaussian Processes to strategically select interpolation points, we show that we can reduce the computational burden while maintaining accuracy. We apply this in the context of the Combined Higgs Boson measurement at CMS, a complex...
Go to contribution page -
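A minimal sketch of likelihood interpolation with Gaussian radial basis functions, in the spirit of the method above but far simpler: here the interpolation points sit on a fixed grid rather than being selected by a Gaussian process, and the "expensive likelihood" is a cheap analytic stand-in.

```python
import numpy as np

# RBF interpolation of a negative log-likelihood from a handful of
# evaluations, so reinterpretations can query it cheaply.
def true_nll(theta):
    """Hypothetical expensive likelihood (toy stand-in)."""
    return 0.5 * (theta - 1.2) ** 2 + 0.1 * np.sin(3.0 * theta)

centers = np.linspace(-2.0, 4.0, 25)  # points where the full fit was run
eps = 2.0                             # RBF shape parameter

def phi(r):
    return np.exp(-(eps * r) ** 2)    # Gaussian radial basis function

# Solve for weights that reproduce the likelihood exactly at the centers.
gram = phi(np.abs(centers[:, None] - centers[None, :]))
weights = np.linalg.solve(gram, true_nll(centers))

def nll_interp(theta):
    theta = np.atleast_1d(theta)
    return phi(np.abs(theta[:, None] - centers[None, :])) @ weights

grid = np.linspace(-1.5, 3.5, 200)
max_err = float(np.max(np.abs(nll_interp(grid) - true_nll(grid))))
print(f"max interpolation error on the test grid: {max_err:.2e}")
```

In higher-dimensional EFT parameter spaces the grid becomes unaffordable, which is where strategically placing the centers (e.g. via a Gaussian-process acquisition rule, as in the poster) matters most.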
Alicja Polanska (Mullard Space Science Laboratory, University College London)Contributed Talk
Computing the Bayesian evidence is an important task in Bayesian model selection, providing a principled quantitative way to compare models. In this work, we introduce normalizing flows to improve the learned harmonic mean estimator of the Bayesian evidence. This recently presented estimator leverages machine learning to address the exploding variance problem associated with the original...
Go to contribution page -
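The harmonic-mean identity behind this estimator can be sketched on a toy model where the evidence is analytic. Here the "learned" normalized target is just a Gaussian fitted to the posterior samples, a crude stand-in for the normalizing flow of the talk; the identity is that the posterior expectation of target/(likelihood x prior) equals 1/Z.

```python
import numpy as np

# Toy learned harmonic mean estimator of the Bayesian evidence.
# Model: prior N(0,1), likelihood N(x_obs | theta, 1), x_obs = 0.5,
# so Z = N(0.5 | 0, var=2) analytically and posterior is N(x_obs/2, 1/2).
rng = np.random.default_rng(3)
x_obs = 0.5
z_true = np.exp(-x_obs**2 / 4) / np.sqrt(4 * np.pi)

samples = rng.normal(x_obs / 2, np.sqrt(0.5), 50_000)  # "MCMC" samples

def log_norm(x, mu, var):
    return -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)

# "Learning" step: fit the normalized target phi to the posterior samples.
mu_phi, var_phi = samples.mean(), samples.var()
log_phi = log_norm(samples, mu_phi, var_phi)
log_post_unnorm = log_norm(samples, 0.0, 1.0) + log_norm(x_obs, samples, 1.0)

# Harmonic-mean identity: E_post[ phi / (likelihood * prior) ] = 1 / Z.
z_est = 1.0 / np.mean(np.exp(log_phi - log_post_unnorm))
print(f"Z_est = {z_est:.4f}, Z_true = {z_true:.4f}")
```

The original harmonic mean estimator (phi = prior) has exploding variance whenever the prior is much broader than the posterior; concentrating phi near the posterior, as a fitted Gaussian or a trained flow does, is exactly what tames it.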
Kai Lehman (LMU Munich)Poster
How much cosmological information can we reliably extract from existing and upcoming large-scale structure observations? Many summary statistics fall short in describing the non-Gaussian nature of the late-time Universe and modelling uncertainties from baryonic physics. Using simulation-based inference (SBI) with automatic data-compression from graph neural networks, we learn optimal summary...
Go to contribution page -
Prof. Stephen Watts (University of Manchester)Poster
The performance of machine learning classification algorithms is evaluated by estimating metrics, often from the confusion matrix, using training data and cross-validation. However, these do not prove that the best possible performance has been achieved. Fundamental limits to error rates can be estimated using information distance measures. To this end, the confusion matrix has been...
Go to contribution page -
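The idea of bounding the best achievable error rate can be sketched on a toy problem (an illustration, not the poster's construction): for two unit-variance Gaussian classes with equal priors, the Bhattacharyya coefficient gives analytic lower and upper bounds on the Bayes error, which the confusion-matrix estimate of any classifier must respect.

```python
import numpy as np
from math import erf, exp, sqrt

# Two classes: N(0,1) vs N(d,1), equal priors; optimal threshold is d/2.
d = 2.0
rng = np.random.default_rng(4)
n = 100_000
x0 = rng.normal(0.0, 1.0, n)   # class-0 samples
x1 = rng.normal(d, 1.0, n)     # class-1 samples

# Confusion matrix (rows: true class, cols: predicted class).
pred0, pred1 = x0 > d / 2, x1 > d / 2
cm = np.array([[np.sum(~pred0), np.sum(pred0)],
               [np.sum(~pred1), np.sum(pred1)]])
err_emp = (cm[0, 1] + cm[1, 0]) / cm.sum()

# Bhattacharyya coefficient for two unit-variance Gaussians: rho = exp(-d^2/8).
rho = exp(-d**2 / 8)
upper = 0.5 * rho                        # Bayes error <= rho / 2
lower = 0.5 * (1 - sqrt(1 - rho**2))     # Bayes error >= this
bayes = 0.5 * (1 - erf(d / (2 * sqrt(2))))  # exact Bayes error: Phi(-d/2)
print(f"lower {lower:.3f} <= empirical {err_emp:.3f} "
      f"(Bayes {bayes:.3f}) <= upper {upper:.3f}")
```

If a classifier's cross-validated error sits well above such a lower bound, there may be headroom left; if it approaches the bound, no amount of further training can help.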
Dr William HandleyPoster
Simulation-based inference is undergoing a renaissance in statistics and machine learning. With several packages implementing the state-of-the-art in expressive AI [mackelab/sbi] [undark-lab/swyft], it is now being effectively applied to a wide range of problems in the physical sciences, biology, and beyond.
Given the rapid pace of AI/ML, there is little expectation that the...
Go to contribution page -
Jonathon Mark Langford (Imperial College (GB))
-
Yuval Yitzhak Frid (Tel Aviv University (IL))Poster
Background modeling is one of the critical elements of searches for new physics at experiments at the Large Hadron Collider. In many searches, backgrounds are modeled using analytic functional forms. Finding an acceptable function can be complicated, inefficient and time-consuming. This poster presents a novel approach to estimating the underlying PDF of a 1D dataset of samples using Log...
Go to contribution page -
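The poster's specific method is truncated above; as a generic point of reference for smooth 1D density estimation from samples, here is a from-scratch Gaussian kernel density estimate on a toy falling spectrum (an assumption-free baseline, not the presented technique).

```python
import numpy as np

# Gaussian KDE: average a Gaussian bump of bandwidth bw over all samples.
rng = np.random.default_rng(8)
data = rng.exponential(1.0, 5_000)  # toy falling "background" spectrum

def kde(x, data, bw):
    z = (x[:, None] - data[None, :]) / bw
    return np.exp(-0.5 * z**2).mean(axis=1) / (bw * np.sqrt(2 * np.pi))

x = np.linspace(0.5, 4.0, 50)       # stay away from the boundary at 0
est = kde(x, data, bw=0.15)
pdf_true = np.exp(-x)               # true pdf of Exp(1)
print(f"mean abs error vs true pdf: {np.mean(np.abs(est - pdf_true)):.3f}")
```

The bandwidth controls the usual bias-variance trade-off; methods like the one in the poster aim to produce smooth PDF estimates without hand-tuning an analytic functional form for each search.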
Noam Levi (Tel Aviv University)Poster
We introduce Noise Injection Node Regularization (NINR), a method that injects structured noise into Deep Neural Networks (DNNs) during the training stage, resulting in an emergent regularizing effect. We present both theoretical and empirical evidence demonstrating substantial improvements in robustness against various test data perturbations for feed-forward DNNs trained under NINR. The...
Go to contribution page -
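NINR injects structured noise through dedicated input nodes; as a toy illustration of why noise injection regularizes at all, one can check the classical result that training linear regression with fresh Gaussian input noise is equivalent, in expectation, to ridge regression with penalty n times the noise variance.

```python
import numpy as np

# Numerical check: least squares on noise-augmented inputs ~= ridge regression.
rng = np.random.default_rng(5)
n, p, sigma = 5_000, 5, 0.5
X = rng.normal(size=(n, p))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + rng.normal(0, 0.1, n)

# "Noise injection": replicate the data many times with fresh input noise.
reps = 200
Xn = np.concatenate([X + rng.normal(0, sigma, X.shape) for _ in range(reps)])
yn = np.tile(y, reps)
w_noise, *_ = np.linalg.lstsq(Xn, yn, rcond=None)

# Ridge solution with the matching penalty lambda = n * sigma^2.
lam = n * sigma**2
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
print("max coefficient difference:", float(np.max(np.abs(w_noise - w_ridge))))
```

The equivalence follows from E[(y - (x + eps)^T w)^2] = E[(y - x^T w)^2] + sigma^2 ||w||^2; for deep networks the effect of injected noise is richer than a simple L2 penalty, which is part of what the NINR analysis explores.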
Heather Battey (Imperial College London)Poster
Consider a binary mixture model of the form $F_\theta=(1-\theta)F_0+\theta F_1$, where $F_0$ is standard normal and $F_1$ is a completely specified heavy-tailed distribution with the same support. Gaussianity of $F_0$ reflects a reduction of the raw data to a set of pivotal test statistics at each site (e.g. an energy level in a particle physics context). For a sample of $n$ independent and...
Go to contribution page -
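A toy numerical companion to this mixture setup (a sketch with assumed ingredients, not the poster's analysis): take $F_1$ to be a standard Cauchy as the completely specified heavy-tailed component, and estimate the mixing fraction $\theta$ by maximizing the log-likelihood on a grid.

```python
import numpy as np

# MLE of the signal fraction theta in (1-theta) N(0,1) + theta Cauchy.
rng = np.random.default_rng(6)
n, theta_true = 20_000, 0.1
is_signal = rng.random(n) < theta_true
x = np.where(is_signal, rng.standard_cauchy(n), rng.normal(0, 1, n))

f0 = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # standard normal pdf
f1 = 1.0 / (np.pi * (1 + x**2))              # standard Cauchy pdf

grid = np.linspace(0.001, 0.999, 999)
log_lik = [np.sum(np.log((1 - t) * f0 + t * f1)) for t in grid]
theta_hat = grid[np.argmax(log_lik)]
print(f"theta_hat = {theta_hat:.3f} (true {theta_true})")
```

Most of the information about $\theta$ comes from the heavy tails, where the Cauchy component dominates the Gaussian; the interesting regime studied in the poster is what happens to the MLE as $\theta$ approaches the boundary.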
Daniel Winterbottom (Imperial College (GB))
-
Indranil Das (Imperial College London (GB))Poster
-
Nathan Huetsch (Heidelberg University, ITP Heidelberg)Poster
The matrix element method is the LHC inference method of choice for limited statistics, as it allows for optimal use of available information. We present a dedicated machine learning framework based on efficient phase-space integration and a learned acceptance and transfer function. It is based on a choice of INN and diffusion networks, and a transformer to solve jet combinatorics. We showcase...
Go to contribution page -
Henry Aldridge (UCL)Poster
Bayesian model selection provides a powerful framework for objectively comparing models directly from observed data, without reference to ground truth data. However, Bayesian model selection requires the computation of the marginal likelihood (model evidence), which is computationally challenging, prohibiting its use in many high-dimensional Bayesian inverse problems. With Bayesian imaging...
Go to contribution page -
Kiyam LinPoster
The standard approach to inference from cosmic large-scale structure data employs summary statistics that are compared to analytic models in a Gaussian likelihood with pre-computed covariance. To overcome many of the idealising assumptions that go into this type of explicit likelihood inference, and to take advantage of the high-fidelity wide field data that Euclid and LSST will provide, we...
Go to contribution page -
Javier Mariño Villadamigo (Institut für Theoretische Physik - University of Heidelberg)Poster
Recent innovations from machine learning allow for data unfolding, without binning and including correlations across many dimensions. We describe a set of known, upgraded, and new methods for ML-based unfolding. The performance of these approaches is evaluated on the same two datasets. We find that all techniques are capable of accurately reproducing the particle-level spectra across complex...
Go to contribution page -
Nina ElmerPoster
Estimating uncertainties is a fundamental aspect of every physics problem; no measurement or calculation comes without uncertainties. Hence it is crucial to consider their effect when training neural networks for problems in physics. I will present our work on amplitude regression, using loop amplitudes from LHC processes, as an example to examine the impact of different uncertainties on the...
Go to contribution page -
Lars Stietz (Hamburg University of Technology (DE))Poster
Precision measurements at the Large Hadron Collider (LHC), such as the measurement of the top quark mass, are essential for advancing our understanding of fundamental particle physics. Profile likelihood fits have become the standard method to extract physical quantities and parameters from the measurements. These fits incorporate nuisance parameters to include systematic uncertainties. The...
Go to contribution page -