The accurate simulation of particle showers in collider detectors remains a critical bottleneck for high-energy physics research. Current approaches face fundamental limitations in scalability when modeling the complete shower development process.
Deep generative models offer a promising alternative, potentially reducing simulation costs by orders of magnitude. This capability becomes...
Detailed event simulation at the LHC takes a large fraction of the computing budget. CMS has developed an end-to-end ML-based simulation framework, called FlashSim, that can speed up the production of analysis samples by several orders of magnitude with a limited loss of accuracy. We show how this approach achieves a high degree of accuracy, not just on basic kinematics but on the complex...
We introduce a novel approach for end-to-end black-box optimization of high energy physics (HEP) detectors using local deep learning (DL) surrogates. These surrogates approximate a scalar objective function that encapsulates the complex interplay of particle-matter interactions and physics analysis goals. In addition to a standard reconstruction-based metric commonly used in the field, we...
Given the intense computational demands of full simulation based on traditional Monte Carlo methods, recent fast simulation approaches for calorimeter showers based on deep generative models have received significant attention.
However, for these models to be used in production it is essential for them to be integrated within the existing software ecosystems of experiments. This...
In many domains of science, the likelihood ratio function (LR) is a fundamental ingredient for a variety of statistical methods such as inference, importance sampling, and classification. Neural based LR estimation using probabilistic classification has therefore had a significant impact in these domains, providing a scalable method for determining an intractable LR from simulated datasets via...
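The classifier trick behind such neural LR estimation can be illustrated in a few lines: train a probabilistic classifier to separate two simulated datasets, then convert its output $s(x) = p(y{=}1|x)$ into the ratio $s/(1-s)$. The following is a minimal numpy sketch on toy 1-D Gaussians, not the method of the contribution itself; all names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "simulated" datasets: p(x) = N(0,1) and q(x) = N(1,1).
x_p = rng.normal(0.0, 1.0, 5000)
x_q = rng.normal(1.0, 1.0, 5000)

# Probabilistic classification: label q-samples 1 and p-samples 0,
# then fit a logistic-regression classifier by plain gradient descent.
x = np.concatenate([x_p, x_q])
y = np.concatenate([np.zeros_like(x_p), np.ones_like(x_q)])
w, b = 0.0, 0.0
for _ in range(2000):
    s = 1.0 / (1.0 + np.exp(-(w * x + b)))  # classifier output p(y=1|x)
    w -= 0.1 * np.mean((s - y) * x)          # gradient of the cross-entropy loss
    b -= 0.1 * np.mean(s - y)

def lr_hat(t):
    """Estimated likelihood ratio q(t)/p(t) via the odds s/(1-s)."""
    s = 1.0 / (1.0 + np.exp(-(w * t + b)))
    return s / (1.0 - s)
```

For these two Gaussians the exact log-ratio is linear, $\log q(x)/p(x) = x - 1/2$, so the learned $(w, b)$ should land close to $(1, -0.5)$, and the odds transform recovers the intractable ratio from classification alone.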
Optimizing control systems in particle accelerators presents significant challenges, often requiring extensive manual effort and expert knowledge. Traditional tuning methods are time-consuming and may struggle to navigate the complexity of modern beamline architectures. To address these challenges, we introduce a simulation-based framework that leverages Reinforcement...
We present a case for the use of Reinforcement Learning (RL) for the design of physics instruments, as an alternative to the gradient-based instrument-optimization methods of [arXiv:2412.10237][1]. Its applicability is demonstrated using two empirical studies: the first addresses the longitudinal segmentation of calorimeters, and the second both the transverse segmentation as well as the longitudinal placement of...
We describe a PU-suppression algorithm for the Global trigger using convolutional neural networks. The network operates on cell towers, exploiting both cluster topology and $E_T$ to correct for the contribution of PU. The algorithm is optimised for firmware deployment, demonstrating high throughput and low resource usage. The small size of the input and lightweight implementation enable a high...
In High Energy Physics (HEP), new discoveries can be enabled by the development of new experiments and the construction of new detectors. Nowadays, many experimental projects rely on the deployment of new detection technologies to build large scale detectors. The validation of these new technologies and their large scale production require an extensive effort in terms of Quality Control.
In...
The search for physics beyond the Standard Model remains one of the primary focuses of high-energy physics. Traditional searches in LHC analyses, though comprehensive, have yet to yield signs of new physics. Anomaly detection has emerged as a powerful tool to widen the discovery horizon, offering a model-agnostic way to enhance the sensitivity of generic searches not targeting any...
R-parity violating (RPV) SUSY introduces a wide variety of couplings, making it essential to search without limiting target channels and cover signatures as broadly as possible. Among such signatures, multijet final states offer high inclusivity and are especially well-suited for model-independent searches targeting RPV SUSY scenarios.
In this study, we develop a signal discrimination...
Contrastive learning (CL) has emerged as a powerful technique for constructing low-dimensional yet highly expressive representations of complex datasets, most notably images. Augmentation-based CL — a fully self-supervised strategy — has been the dominant paradigm in particle physics applications, encouraging a model to learn useful features from input data by promoting insensitivity to...
The fields of High-Energy Physics (HEP) and machine learning (ML) converge on the challenge of uncertainty-aware parameter estimation in the presence of data distribution distortions, described in their respective languages --- systematic uncertainties and domain shifts. We present a novel approach based on Contrastive Normalizing Flows (CNFs), which achieved top performance on...
Anomaly detection — identifying deviations from Standard Model predictions — is a key challenge at the Large Hadron Collider due to the size and complexity of its datasets. This is typically addressed by transforming high-dimensional detector data into lower-dimensional, physically meaningful features. We tackle feature extraction for anomaly detection by learning powerful low-dimensional...
The phenomenon of jet quenching, a key signature of the Quark-Gluon Plasma (QGP) formed in Heavy-Ion (HI) collisions, provides a window of insight into the properties of the primordial liquid. In this study, we evaluate the discriminating power of Energy Flow Networks (EFNs), enhanced with substructure observables, in distinguishing between jets stemming from proton-proton (pp) and jets...
The search for resonant mass bumps in invariant-mass histograms is a fundamental approach for uncovering Beyond the Standard Model (BSM) physics at the LHC. Traditional, model-dependent analyses that utilize this technique, such as those conducted using data from the ATLAS detector, often require substantial resources, preventing many final states from being explored. Modern machine...
Experimental studies of $b$-hadron decays face significant challenges due to a wide range of backgrounds arising from the numerous possible decay channels with similar final states. For a particular signal decay, ascertaining the most relevant background processes necessitates a detailed analysis of final state particles, potential misidentifications, and kinematic overlaps...
Deep generative models have become powerful tools for alleviating the computational burden of traditional Monte Carlo generators in producing high-dimensional synthetic data. However, validating these models remains challenging, especially in scientific domains requiring high precision, such as particle physics. Two-sample hypothesis testing offers a principled framework to address this task....
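A simple instance of such a two-sample test is a permutation test: repeatedly reshuffle the pooled reference and generated samples and ask how often a test statistic at least as extreme as the observed one arises under the null that both samples share one distribution. The numpy sketch below is a toy illustration; the difference-of-means statistic and sample sizes are arbitrary choices, not those of the contribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def perm_test(a, b, stat=lambda u, v: abs(u.mean() - v.mean()), n_perm=999):
    """Two-sample permutation test: p-value of the observed statistic
    under the null hypothesis that a and b share one distribution."""
    observed = stat(a, b)
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if stat(perm[:len(a)], perm[len(a):]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)   # add-one correction keeps p > 0

# "Reference" MC sample vs. a mis-modeled "generated" sample: small p-value.
p_shift = perm_test(rng.normal(0.0, 1.0, 300), rng.normal(0.5, 1.0, 300))
# Well-modeled case: samples from the same distribution.
p_same = perm_test(rng.normal(0.0, 1.0, 300), rng.normal(0.0, 1.0, 300))
```

In practice the statistic would be replaced by something far more sensitive in high dimensions (e.g. a kernel or classifier-based statistic), but the permutation logic that calibrates the p-value is unchanged.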
Ensuring reliable data collection in large-scale particle physics experiments demands Data Quality Monitoring (DQM) procedures to detect possible detector malfunctions and preserve data integrity. Traditionally, this resource-intensive task has been handled by human shifters who may struggle with frequent changes in operational conditions. Instead, to simplify and automate the shifters' work,...
The ATLAS detector at the LHC has comprehensive data quality monitoring procedures for ensuring high quality physics analysis data. This contribution introduces a long short-term memory (LSTM) autoencoder-based algorithm designed to identify detector anomalies in ATLAS liquid argon calorimeter data. The data is represented as a multidimensional time series, corresponding to statistical moments...
The PVFinder algorithm employs a hybrid deep neural network (DNN) approach to reconstruct primary vertices (PVs) in proton-proton collisions at the LHC, addressing the complexities of high pile-up environments in LHCb and ATLAS experiments. By integrating fully connected layers with a UNet architecture, PVFinder’s end-to-end tracks-to-hist DNN processes charged track parameters to predict PV...
Machine learning (ML) models are increasingly being used in high-energy physics. However, the selection and training of these models frequently involves human intervention, extensive hyperparameter tuning, and consideration of data changes. These challenges become particularly pronounced when developing models for automated pipelines or fault-tolerant systems. We introduce a novel, automated...
To explain Beyond the Standard Model phenomena, a physicist has many choices to make with regard to new fields, internal symmetries, and charge assignments, collectively creating an enormous space of possible models. We describe the development and findings of an Autonomous Model Builder (AMBer), which uses Reinforcement Learning (RL) to efficiently find models satisfying specified discrete...
The growing luminosity frontier at the Large Hadron Collider is complicating the reconstruction of heavy-hadron collision events at both the data-acquisition and offline levels, with rising particle multiplicities challenging stringent latency and storage requirements. This talk presents significant architectural advancements in Graph Neural Networks (GNNs) aimed at enhancing event reconstruction...
Machine learning model compression methods such as pruning and quantization are critical for enabling efficient inference on resource-constrained hardware. Compression methods are developed independently, and while some libraries attempt to unify these methods under a common interface, they lack integration with hardware deployment frameworks like hls4ml. To bridge this gap, we present PQuant,...
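The two compression methods named above can be sketched independently of any framework: magnitude pruning zeroes the smallest-magnitude weights, and uniform quantization snaps the survivors onto a fixed grid of levels. The following numpy illustration is a generic sketch, not PQuant's API; the function names are made up for this example.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(round(sparsity * w.size))
    if k == 0:
        return w.copy()
    thresh = np.sort(np.abs(w), axis=None)[k - 1]   # k-th smallest magnitude
    return np.where(np.abs(w) <= thresh, 0.0, w)

def quantize_uniform(w, bits):
    """Symmetric uniform "fake" quantization to `bits` bits: values are
    rounded to the quantization grid but returned in float."""
    levels = 2 ** (bits - 1) - 1                    # e.g. 127 for 8 bits
    scale = np.abs(w).max() / levels
    return np.round(w / scale) * scale

w = np.random.default_rng(2).normal(size=(8, 8)).astype(np.float32)
w_pruned = magnitude_prune(w, 0.5)     # half the weights become exact zeros
w_q = quantize_uniform(w_pruned, 8)    # remaining weights on an 8-bit grid
```

Each step trades a bounded amount of accuracy (a per-weight error of at most half a quantization step, plus the pruned weights) for memory and compute savings; unifying such transformations behind one interface, and carrying them through to firmware generation, is the gap a tool in this space has to bridge.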