The use of autoencoders for anomaly detection has been extended to many fields of science. Their application in high energy physics is particularly relevant, as a trained model can be used to identify experimental failures, data fluctuations, or—most interestingly—signs of new physics phenomena. In this study, we focus on analyzing event topologies with three leptons, aiming to identify...
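As a generic illustration of the approach (not the specific model of this study), a minimal PyTorch sketch of an autoencoder anomaly detector is given below; the input dimension, architecture, and training data are placeholder assumptions.

```python
# Minimal autoencoder anomaly-detection sketch (illustrative only; the
# input dimension and architecture are placeholder assumptions).
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features=12, latent_dim=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(),
            nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Train on background-like events only; anomalies are flagged later
# by their large reconstruction error.
background = torch.randn(1024, 12)  # stand-in for real event features
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(background), background)
    loss.backward()
    optimizer.step()

# Per-event anomaly score = reconstruction error.
scores = ((model(background) - background) ** 2).mean(dim=1)
```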
The Proton Synchrotron Booster (PSB) accelerates protons with a fundamental radiofrequency (RF) system operating at the revolution frequency, with additional voltage at its second harmonic. Both RF systems are operated in counter-phase for bunch lengthening to reduce space charge effects. To maximise the bunch length, the phase of the voltage at the second harmonic must follow the beam...
Experimental verification of the Higgs trilinear self-coupling is one of the next major challenges of particle physics. While prospects from proton-proton collisions have centred around measuring the on-shell single- and di-Higgs production processes, the off-shell Higgs production process has also been suggested as a complementary channel to resolve the degeneracy in Higgs couplings. We...
For machine learning applications on edge devices, inference speed and hardware resource usage are often limiting factors. These challenges can be mitigated by using model compression techniques such as quantization and pruning. However, these approaches introduce additional hyperparameters that require optimization. Hyperparameter optimization has been widely used to design models with the...
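As a hedged sketch of how compression settings become extra hyperparameters, the snippet below scans the pruning fraction and applies post-training dynamic quantization in PyTorch; the model, data, and scanned values are placeholders, and real searches would also retrain after pruning and scan quantization settings.

```python
# Sketch: treating the pruning fraction as an extra hyperparameter in a
# simple grid search, followed by post-training dynamic quantization.
# The model, data, and accuracy metric are synthetic placeholders.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def build_model():
    return nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))

x, y = torch.randn(256, 16), torch.randint(0, 2, (256,))

def evaluate(model):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

results = {}
for amount in (0.0, 0.3, 0.6, 0.9):            # pruning sparsity to scan
    model = build_model()
    for module in model:
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")      # make the pruning permanent
    # Dynamic int8 quantization of the linear layers.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8)
    results[amount] = evaluate(quantized)
print(results)
```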
Efficient data processing using machine learning relies on heterogeneous computing approaches, but optimizing input and output data movements remains a challenge. In GPU-based workflows, data already resides in GPU memory, but machine learning models require the input and output data to be provided in a specific tensor format, often requiring unnecessary copying outside of the GPU device and...
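As a minimal sketch of avoiding such host round-trips, the snippet below hands a CuPy array that already lives on the GPU to PyTorch via DLPack without copying; it requires a CUDA GPU, and the exact entry points may differ between library versions.

```python
# Zero-copy GPU handoff sketch: CuPy array -> PyTorch tensor -> CuPy array,
# all sharing the same device memory (requires a CUDA GPU; DLPack entry
# points may differ between library versions).
import cupy as cp
import torch

gpu_data = cp.random.rand(1024, 64, dtype=cp.float32)  # data already on GPU

# Zero-copy view: both objects share the same device buffer.
tensor = torch.from_dlpack(gpu_data)

with torch.no_grad():
    output = tensor.sum(dim=1)          # stand-in for model inference

# Hand the result back to CuPy, again without leaving the device.
result = cp.from_dlpack(output)
```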
The Alpha Magnetic Spectrometer (AMS-02) is a precision high-energy cosmic-ray experiment on the ISS that has been operating since 2011 and has collected more than 240 billion cosmic-ray events. Among them, positrons are important for understanding the particle nature of dark matter. Classifying the positron signals is challenging due to the abundant background of cosmic-ray protons. Therefore, we use a...
Machine learning is making its way into the natural sciences. A key limitation of ML from a science perspective is the black-box nature of deep neural networks. An alternative is to learn succinct mathematical equations, and thus interpretable models, directly from data, allowing for deeper understanding and scientific reasoning and opening a path toward new scientific discovery. Symbolic regression...
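As one illustration, the sketch below uses PySR as an example symbolic-regression library to recover a synthetic law from data; the library choice, operator set, and target function are assumptions for demonstration only.

```python
# Minimal symbolic-regression sketch using PySR as one example library;
# the target function and operator set are synthetic placeholders.
import numpy as np
from pysr import PySRRegressor

X = np.random.uniform(-2, 2, size=(500, 2))
y = X[:, 0] ** 2 + np.cos(X[:, 1])          # hidden "law" to rediscover

model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*"],
    unary_operators=["cos"],
)
model.fit(X, y)
print(model.sympy())   # best symbolic expression found
```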
This talk presents our work to interpret track segments (stubs) from the muon detectors as nodes of a graph, analyzing their structure and reconstructing the muon trajectory using a Graph Neural Network (GNN). As a case study, we focus on the barrel-endcap transition region of the CMS experiment. This GNN also aims to reduce computing time, allowing for the integration within the...
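To make the stubs-as-graph idea concrete, a minimal PyTorch Geometric sketch follows; the node features, connectivity, and network size are placeholder assumptions, not the CMS configuration.

```python
# Illustrative stub-graph sketch with PyTorch Geometric (node features,
# connectivity, and network size are placeholders, not the CMS setup).
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Each node is a stub, e.g. (phi, eta-like coordinate, bend, layer).
x = torch.randn(6, 4)
# Edges connect stubs in neighbouring stations/layers (both directions).
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 3, 4, 4, 5],
                           [1, 0, 2, 1, 3, 2, 4, 3, 5, 4]])
graph = Data(x=x, edge_index=edge_index)

class StubGNN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(4, 32)
        self.conv2 = GCNConv(32, 32)
        self.out = torch.nn.Linear(32, 1)   # e.g. regress pT or a track score

    def forward(self, data):
        h = self.conv1(data.x, data.edge_index).relu()
        h = self.conv2(h, data.edge_index).relu()
        return self.out(h.mean(dim=0))      # graph-level prediction

print(StubGNN()(graph))
```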
With the increasing size of machine learning (ML) models and vast datasets, foundation models have transformed how we apply ML to solve real-world problems. Multimodal language models like ChatGPT and Llama have extended their capabilities to specialized tasks through common pre-training. Similarly, in high-energy physics (HEP), common tasks in analysis face recurring challenges that demand...
In the end-cap region of the SPD detector complex, particle identification will be provided by a Focusing Aerogel RICH detector (FARICH). FARICH will primarily aid with pion/kaon separation in open charmonia final states (momenta below 5 GeV/c). A free-running (triggerless) data acquisition pipeline to be employed in the SPD results in a high data rate, necessitating new approaches to event...
DIPZ is a machine learning algorithm aiming to re-purpose the Deep Impact Parameter Sets (DIPS) jet-flavour taggers to instead regress the jet’s origin vertex position along the beam-line axis. Deployed at the ATLAS High Level Trigger (HLT), the DIPZ labels of each jet in an event are then used in an HLT jet algorithm to construct an event-wide likelihood-based discriminant variable (MLPL),...
For the Run 2 LHC data-taking period, the CMS experiment deployed a Convolutional Neural Network architecture, the DeepTau algorithm, to identify hadronically decaying tau leptons against quark and gluon jets, electrons, and muons. For LHC Run 3, this algorithm saw an important upgrade with the introduction of domain adaptation techniques in order to improve its performance and achieve...
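As a generic illustration of one common domain adaptation technique, the sketch below shows a DANN-style gradient reversal layer; this is offered only as an example of the family of methods, not as the exact scheme implemented in DeepTau, and all dimensions are placeholders.

```python
# Generic domain-adaptation sketch: a gradient-reversal layer (DANN-style).
# Shown as an illustration of the technique family only, not the DeepTau
# implementation; dimensions are placeholders.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing into the feature extractor.
        return -ctx.alpha * grad_output, None

features = nn.Sequential(nn.Linear(20, 64), nn.ReLU())
classifier = nn.Linear(64, 1)        # main task: tau vs. background
domain_head = nn.Linear(64, 1)       # adversary: simulation vs. data

x = torch.randn(32, 20)
h = features(x)
task_logits = classifier(h)
domain_logits = domain_head(GradReverse.apply(h, 1.0))
# Training minimises task loss + domain loss; the reversed gradient pushes
# the shared features towards being domain-invariant.
```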
The High-Luminosity Large Hadron Collider (HL-LHC) era promises unprecedented discovery potential but presents significant computational and algorithmic challenges, particularly due to the extreme pileup environment. Accurate and efficient reconstruction of secondary vertices (SVs) originating from the decay of heavy-flavour hadrons or other long-lived particles is critical for key physics...
The interTwin project develops an open-source Digital Twin Engine to integrate application-specific Digital Twins (DTs) across scientific domains. Its framework for the development of DTs supports interoperability, performance, portability and accuracy. As part of this initiative, we implemented the CaloINN normalizing-flow model for calorimeter simulations within the interTwin framework....
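For readers unfamiliar with normalizing flows, a minimal sketch of the affine coupling layer that underlies such models is shown below; the dimensions and network size are placeholders, and CaloINN itself uses a more elaborate architecture.

```python
# Minimal affine coupling layer, the basic building block of normalizing
# flows (placeholder dimensions; not the CaloINN architecture itself).
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim=8):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, 64), nn.ReLU(),
            nn.Linear(64, 2 * (dim - self.half)))

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                    # keep scales well behaved
        y2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=1)               # Jacobian log-determinant
        return torch.cat([x1, y2], dim=1), log_det

layer = AffineCoupling()
z = torch.randn(16, 8)
x, log_det = layer(z)
# Stacking such layers (with permutations in between) and maximising the
# likelihood of training showers yields a generative calorimeter model.
```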
The High Luminosity LHC upgrade will require corresponding detector upgrades. At CMS, one of the major improvements will be the new high-granularity endcap calorimeters, with roughly 3 million hexagonal sensors per endcap of varying sizes and thicknesses. Moreover, this detector will provide timing information with an average resolution of ~30 ps,...
The design of advanced physics instruments is a complex and resource‐intensive task—one that requires optimizing numerous parameters (such as the sizes and shapes of various elements) to achieve high performance while meeting stringent cost, material, and spatial constraints. Our new work extends the approach presented in arXiv:2412.10237, which leverages Reinforcement Learning (RL) for...
The LHCbFinder project proposes the development of an advanced semantic search and natural-language knowledge retrieval system for the LHCb experiment. It is designed to transform knowledge discovery by tackling fragmented knowledge, undocumented institutional knowledge, and steep learning curves for newcomers. By integrating...
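As a minimal sketch of the embedding-based retrieval idea, the snippet below ranks documents against a query by cosine similarity; the embedding model name and the example documents are placeholder assumptions, not the actual LHCbFinder configuration.

```python
# Embedding-based semantic search sketch (model name and documents are
# placeholder assumptions, not the LHCbFinder configuration).
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "How to resubmit failed jobs on the grid",
    "Calibration procedure for the RICH detectors",
    "Stripping line configuration for semileptonic decays",
]
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = model.encode(documents, normalize_embeddings=True)

query = "my grid jobs keep failing, what should I do?"
query_emb = model.encode([query], normalize_embeddings=True)

# Cosine similarity reduces to a dot product for normalized embeddings.
scores = doc_emb @ query_emb[0]
print(documents[int(np.argmax(scores))])
```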
New physics searches in the highly boosted regime are an essential part of the LHC's physics program, aiming to reveal the presence of new heavy resonances predicted by many Beyond the Standard Model theories at the high end of the LHC's energy reach.
Within the CMS collaboration, numerous jet tagging algorithms have been developed for the identification of hadronic jets originating from the decay of...
The High-Luminosity LHC (HL-LHC) will significantly extend the physics reach of the ATLAS experiment, offering increased sensitivity to rare processes and precision measurements. To cope with the corresponding rise in data rates and radiation levels, major upgrades to the ATLAS experiment are required to maintain its performance. One of these upgrades involves replacing the readout electronics...
The R³B experiment at FAIR investigates nuclear reactions induced by high-energy radioactive beams. A key detector of this experiment is the CALIFA calorimeter, which consists of 2544 CsI(Tl) scintillator crystals for the detection of gamma rays and light charged particles with high angular resolution and precise Doppler correction.
Accurate cluster reconstruction from sparse hit patterns,...
The CMS Pixel Detector in Run 3 (about 2 thousand silicon modules) plays a fundamental role in tracking and vertexing. Given the detector's aging and potential operational incidents, constant monitoring of its components is essential to ensure the highest data quality. Typically, the Offline Data Quality Monitoring for the CMS Tracker relies on human inspection of hundreds of histograms to...
Searches for new particles often span a wide mass range, where both signal and SM background shapes vary significantly. We introduce a multivariate method that fully exploits the correlation between signal and background features and the explored mass scale. The classifiers—either a neural network or boosted decision tree—produce continuous outputs across the full mass range, achieving...
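One common way to realise such a mass-aware classifier is a parametrized neural network that receives the hypothesised mass as an additional input; a minimal sketch under that assumption (with synthetic features, masses, and labels) is shown below.

```python
# Parametrized-classifier sketch: the hypothesised mass is an extra input,
# so a single model covers the whole mass range.  Features, masses, and
# labels are synthetic placeholders.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(5 + 1, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

features = torch.randn(2048, 5)
mass = torch.randint(250, 3000, (2048, 1)).float() / 1000.0  # in TeV
labels = torch.randint(0, 2, (2048, 1)).float()

for _ in range(20):
    opt.zero_grad()
    logits = net(torch.cat([features, mass], dim=1))
    loss_fn(logits, labels).backward()
    opt.step()

# At evaluation time, the same network is queried at any mass hypothesis.
test_event = torch.randn(1, 5)
for m in (0.5, 1.0, 2.5):   # TeV
    score = torch.sigmoid(net(torch.cat([test_event, torch.tensor([[m]])], 1)))
    print(m, score.item())
```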
We present an ML-based particle flow algorithm for the CLD detector at the FCC-ee. Particle candidates are built from hits and fitted tracks, which are together represented as a graph. A geometric algebra transformer is then trained using an object condensation loss to reconstruct a set of particle candidates from the hits and tracks. In the second step, additional heads are used to estimate the...
During the data-taking campaigns Run 1 and Run 2, the ALICE collaboration recorded a large amount of proton-proton (pp) collisions across a variety of center-of-mass energies ($\sqrt{s\,}$). This extensive dataset is well suited to study the energy dependence of particle production. Deep neural networks (DNNs) provide a powerful regression tool to capture underlying multidimensional...
Detecting subtle new physics signals, such as those predicted by the Standard Model Effective Field Theory (SMEFT) with small Wilson coefficients, is inherently challenging when individual event-level kinematic differences are marginal. Since all collision events are governed by the same underlying physics parameters, we investigate the predictive power of permutation-invariant neural network...
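A minimal sketch of one permutation-invariant architecture (a Deep Sets-style sum over per-event embeddings) is given below; the feature dimension and regression target are placeholder assumptions.

```python
# Deep Sets-style permutation-invariant sketch: per-event embeddings are
# summed, so the prediction depends only on the ensemble, not the order.
# Dimensions and the regression target are placeholder assumptions.
import torch
import torch.nn as nn

class EnsembleRegressor(nn.Module):
    def __init__(self, n_features=4):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                 nn.Linear(64, 64))
        self.rho = nn.Sequential(nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, 1))   # e.g. a Wilson coefficient

    def forward(self, events):                        # (n_events, n_features)
        return self.rho(self.phi(events).sum(dim=0))  # sum -> order invariant

model = EnsembleRegressor()
events = torch.randn(500, 4)                          # one ensemble of events
print(model(events))
# Shuffling the events leaves the output unchanged:
print(model(events[torch.randperm(500)]))
```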
Particle physics experiments rely on the (generalised) likelihood ratio test (LRT) for searches and measurements. This is not guaranteed to be optimal for composite hypothesis tests, as the Neyman-Pearson lemma pertains only to simple hypothesis tests. An improvement in the core statistical testing methodology would have widespread ramifications across experiments. We discuss an alternate test...
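For reference, the generalised (profile) likelihood ratio commonly used in such tests is $\lambda(\mu) = L(\mu, \hat{\hat{\theta}}(\mu)) / L(\hat{\mu}, \hat{\theta})$, where $\hat{\mu}$ and $\hat{\theta}$ are the unconditional maximum-likelihood estimates and $\hat{\hat{\theta}}(\mu)$ maximises the likelihood at a fixed value of the parameter of interest $\mu$; the Neyman-Pearson lemma only guarantees optimality for the simple ratio $L(H_1)/L(H_0)$ between two fully specified hypotheses.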
Machine Learning has been an important tool across experiments at the LHC, supporting tasks ranging from simulation and event reconstruction to anomaly detection and physics analysis. These applications demand inference paradigms that are not only efficient and low in latency but also seamlessly integrable into high-energy physics (HEP) workflows. While numerous frameworks exist for the...
The SHiP experiment is a proposed fixed-target experiment at the CERN SPS aimed at searching for feebly interacting particles beyond the Standard Model. One of its main challenges is reducing the large number of muons produced in the beam dump, which would otherwise create significant background in the detector. The muon shield, a system of magnets designed to deflect muons away from the...
Rare event classification in high-energy physics (HEP) plays a crucial role in probing physics beyond the Standard Model (BSM). Such processes serve as indirect searches for new physics by testing deviations from SM predictions in extreme kinematic regimes. The production of four top quarks in association with a $W^-$ boson at $\sqrt{s} = 13$ TeV is an exceptionally rare SM process with...
Muon tomography leverages the small, continuous flux of cosmic rays produced in the upper atmosphere to measure the density of unknown volumes. The multiple Coulomb scattering that muons undergo when passing through the material can either be leveraged or represent a measurement nuisance. In either case, the scattering dependence on muon momentum is a significant source of imprecision. This...
I will present recent advancements in developing inclusive, large-scale pretrained models for large-radius jets at the LHC's general-purpose experiments. The discussion will begin with the Sophon model, trained on Delphes datasets as a demonstrative benchmark, and extend to the Global Particle Transformer (GloParT) models, which have been developed and deployed within CMS over the past...
As the High-Luminosity LHC (HL-LHC) era approaches, significant improvements in reconstruction software are required to keep pace with the increased data rates and detector complexity. A persistent challenge for high-throughput GPU-based event reconstruction is the estimation of track parameters, which is traditionally performed using iterative Kalman Filter-based algorithms. While GPU-based...
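For context, the textbook Kalman filter step that such iterative algorithms repeat for each measurement $m_k$ reads (state $x$, covariance $P$, propagation matrix $F_k$, measurement projection $H_k$, process and measurement noise $Q_k$, $R_k$): prediction $x_{k|k-1} = F_k\, x_{k-1|k-1}$ and $P_{k|k-1} = F_k P_{k-1|k-1} F_k^{T} + Q_k$; gain $K_k = P_{k|k-1} H_k^{T} \left(H_k P_{k|k-1} H_k^{T} + R_k\right)^{-1}$; update $x_{k|k} = x_{k|k-1} + K_k\,(m_k - H_k x_{k|k-1})$ and $P_{k|k} = (I - K_k H_k)\, P_{k|k-1}$. Track-fitting implementations additionally account for material effects and apply a smoothing pass; the sequential structure of these updates is part of what makes high-throughput GPU implementations challenging.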