Conveners
Track 1: Computing Technology for Physics Research
- chair / co-chair: Fazhi Qi
- chair / co-chair: Nicholas Smith
- chair / co-chair: Sioni Summers
We present an MLOps approach for managing the end-to-end lifecycle of machine learning algorithms deployed on FPGAs in the CMS Level-1 Trigger (L1T). The primary goal of the pipeline is to respond to evolving detector conditions by automatically acquiring up-to-date training data, retraining and re-optimising the model, validating performance, synthesising firmware, and deploying validated...
The high-luminosity environment at Belle II leads to growing beam-induced background, posing major challenges for the Belle II Level-1 (L1) trigger system. To maintain trigger rates within hardware constraints, effective background suppression is essential. Hit filtering algorithms based on Graph Neural Networks (GNNs), including the Interaction Network (IN), have demonstrated successful...
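As a purely illustrative sketch (not Belle II code), the following shows an Interaction-Network-style message-passing step for per-hit classification, assuming hit features x and a directed edge list built from detector geometry; all layer sizes and names are assumptions.

import torch
import torch.nn as nn

class TinyInteractionNetwork(nn.Module):
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        # Relational model: maps concatenated (sender, receiver) features to an edge message.
        self.edge_mlp = nn.Sequential(nn.Linear(2 * n_features, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden))
        # Object model: maps (node features, aggregated messages) to a per-hit score.
        self.node_mlp = nn.Sequential(nn.Linear(n_features + hidden, hidden), nn.ReLU(),
                                      nn.Linear(hidden, 1))

    def forward(self, x, edge_index):
        src, dst = edge_index                      # sender / receiver node indices, shape [E]
        messages = self.edge_mlp(torch.cat([x[src], x[dst]], dim=-1))
        aggregated = torch.zeros(x.size(0), messages.size(1))
        aggregated.index_add_(0, dst, messages)    # sum incoming messages per hit
        return torch.sigmoid(self.node_mlp(torch.cat([x, aggregated], dim=-1)))

# Toy usage: 5 hits with 4 features each, 6 candidate edges.
x = torch.randn(5, 4)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 0], [1, 2, 3, 4, 0, 2]])
signal_probability = TinyInteractionNetwork()(x, edge_index)  # shape [5, 1]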
Transformers are state-of-the-art model architectures, widely used across many areas of machine learning. However, the performance of such architectures is less well explored in ultra-low-latency domains where deployment on FPGAs or ASICs is required. Such domains include the trigger and data acquisition systems of the LHC experiments.
We present a transformer-based algorithm...
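For orientation only, and not the presented algorithm, the sketch below builds a very small transformer encoder acting on a fixed-length list of trigger-level particle candidates, of the size one might consider for FPGA deployment studies; all dimensions are assumptions.

import torch
import torch.nn as nn

n_particles, n_features, d_model = 16, 6, 16
embed = nn.Linear(n_features, d_model)
encoder = nn.TransformerEncoderLayer(d_model=d_model, nhead=2,
                                     dim_feedforward=32, batch_first=True)
classifier = nn.Linear(d_model, 1)

x = torch.randn(1, n_particles, n_features)        # one event: 16 candidates x 6 features
h = encoder(embed(x))                               # self-attention across candidates
score = torch.sigmoid(classifier(h.mean(dim=1)))    # event-level trigger score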
The CMS Experiment at the CERN Large Hadron Collider (LHC) relies on a Level-1 Trigger system (L1T) to process in real time all potential collisions, happening at a rate of 40 MHz, and select the most promising ones for data acquisition and further processing. The CMS upgrades for the upcoming high-luminosity LHC run will vastly improve the quality of the L1T event reconstruction, providing...
The ATLAS experiment has engaged in a modernization of the reconstruction software to cope with the challenging running conditions expected for HL-LHC operations. The use of the experiment-independent ACTS toolkit for track reconstruction is a major component of this effort, involving the complete redesign of several elements of the ATLAS reconstruction software. This contribution will...
CLUEstering is a versatile clustering library based on CLUE, a density-based weighted clustering algorithm optimized for high-performance computing that supports clustering in an arbitrary number of dimensions. The library offers a user-friendly Python interface and a C++ backend to maximize performance. CLUE’s parallel design is tailored to exploit modern hardware accelerators, enabling it to process large-scale...
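A conceptual NumPy sketch of the density-based idea behind CLUE follows; it is not the CLUEstering API, and the parameter names (dc, rhoc, deltac) are assumptions used only to illustrate computing a local weighted density, the distance to the nearest higher-density point, and the promotion of dense, isolated points to seeds.

import numpy as np

def clue_like_seeds(points, weights, dc=1.0, rhoc=2.0, deltac=1.5):
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    rho = (weights[None, :] * (dist <= dc)).sum(axis=1)   # weighted local density
    delta = np.full(len(points), np.inf)
    for i in range(len(points)):
        higher = np.where(rho > rho[i])[0]                # points with larger density
        if higher.size:
            delta[i] = dist[i, higher].min()              # distance to nearest higher-density point
    return (rho >= rhoc) & (delta >= deltac)              # seed mask

points = np.random.rand(200, 2) * 10.0
seeds = clue_like_seeds(points, weights=np.ones(len(points)))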
The Julia programming language is considered a strong contender as a future language for high-energy physics (HEP) computing. However, transitioning to the Julia ecosystem will be a long process and interoperability between Julia and C++ is required. So far several successful attempts have been made to wrap HEP C++ packages for use in Julia. It is also important to explore the reverse...
Future colliders such as the High-Luminosity Large Hadron Collider and the Circular Electron Positron Collider will face an enormous increase in dataset sizes in the coming decades. Quantum and quantum-inspired algorithms may allow us to overcome some of these challenges. One important class of problems is the so-called combinatorial optimization problems, which are non-deterministic polynomial time...
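As a generic worked example (not taken from the contribution), the snippet below writes down a tiny QUBO instance of the kind targeted by quantum and quantum-inspired solvers and finds its minimum by brute force; the coefficient matrix is an arbitrary illustration.

import itertools
import numpy as np

Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])     # illustrative QUBO coefficients

best_x, best_e = None, np.inf
for bits in itertools.product([0, 1], repeat=3):
    x = np.array(bits)
    energy = x @ Q @ x                 # QUBO objective: x^T Q x
    if energy < best_e:
        best_x, best_e = x, energy

print(best_x, best_e)                  # exhaustive search is only feasible for tiny problems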
The increasing reliance on deep learning for high-energy physics applications demands efficient FPGA-based implementations. However, deploying complex neural networks on FPGAs is often constrained by limited hardware resources and prolonged synthesis times. Conventional monolithic implementations suffer from scalability bottlenecks, necessitating the adoption of modular and resource-aware...
Machine learning model compression techniques—such as pruning and quantization—are becoming increasingly important to optimize model execution, especially for resource-constrained devices. However, these techniques are developed independently of each other, and while there exist libraries that aim to unify these methods under a single interface, none of them offer integration with hardware...
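Independently of the library discussed above, a minimal sketch of the two techniques mentioned, using standard PyTorch utilities: magnitude pruning of Linear layers followed by post-training dynamic quantization. The model architecture is an arbitrary toy.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

# Prune 50% of the smallest-magnitude weights in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")          # make the sparsity permanent

# Post-training dynamic quantization of the Linear layers to int8.
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)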
With plans for upgrading the detector in order to collect data at a luminosity of up to 1.5×10³⁴ cm⁻²s⁻¹ being ironed out (Upgrade II - LHC Run 5), the LHCb Collaboration has sought to implement new data-taking solutions already starting from the upcoming LHC Run 4 (2030-2033).
The first stage of the LHCb High Level Trigger (HLT1), currently implemented on GPUs and aiming at reducing the event...
Fast machine learning (ML) inference is of great interest in the HEP community, especially in low-latency environments like triggering. Faster inference often unlocks the use of more complex ML models that improve physics performance, while also enhancing productivity and sustainability. Logic gate networks (LGNs) currently achieve some of the fastest inference times for standard image...
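A toy sketch of the differentiable logic-gate idea (not the presented implementation): each "neuron" is a softmax-weighted mixture over a few two-input gates, evaluated on probabilistic inputs so the gate choice can be trained by gradient descent. Real LGNs use the full set of 16 binary gates; the subset below is an assumption for brevity.

import torch
import torch.nn as nn

class SoftLogicGate(nn.Module):
    def __init__(self):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(4))   # weights over {AND, OR, XOR, NOT-A}

    def forward(self, a, b):
        gates = torch.stack([a * b,                  # AND on probabilities
                             a + b - a * b,          # OR
                             a + b - 2 * a * b,      # XOR
                             1 - a], dim=-1)         # NOT of the first input
        return (torch.softmax(self.logits, dim=-1) * gates).sum(-1)

gate = SoftLogicGate()
a, b = torch.tensor([0.9, 0.1]), torch.tensor([0.8, 0.7])
out = gate(a, b)   # after training, the softmax typically collapses to a single hard gate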
The rising computational demands of growing data rates and complex machine learning (ML) algorithms in large-scale scientific experiments have driven the adoption of the Services for Optimized Network Inference on Coprocessors (SONIC) framework. SONIC accelerates ML inference by offloading tasks to local or remote coprocessors, optimizing resource utilization. Its portability across diverse...
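A client-side sketch of the pattern SONIC builds on, using the NVIDIA Triton gRPC client directly; the server URL, model name, and tensor names below are assumptions for illustration and are not SONIC configuration, which lives inside the experiment frameworks.

import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="localhost:8001")

batch = np.random.rand(1, 10).astype(np.float32)
inp = grpcclient.InferInput("input__0", list(batch.shape), "FP32")
inp.set_data_from_numpy(batch)
out = grpcclient.InferRequestedOutput("output__0")

# The event loop only prepares tensors; the heavy inference runs on the coprocessor.
result = client.infer(model_name="toy_model", inputs=[inp], outputs=[out])
scores = result.as_numpy("output__0")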
Stable operation of detectors, beams, and targets is crucial for reducing systematic errors and achieving high-precision measurements in accelerator-based experiments. Historically, this stability was achieved through extensive post-acquisition calibration and systematic data studies, as not all operational parameters could be precisely controlled in real time. However, recent advances in...
Optimizing control systems in particle accelerators presents significant challenges, often requiring extensive manual effort and expert knowledge. Traditional tuning methods are time-consuming and may struggle to navigate the complexity of modern beamline architectures. To address these challenges, we introduce a simulation-based framework that leverages Reinforcement Learning (RL)...
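A hypothetical, heavily simplified environment sketch (not the presented framework): a one-dimensional steering task in which the agent trims a corrector setting to centre the beam, written against the Gymnasium Env API. The dynamics, bounds, and reward are assumptions chosen only to show the interface an RL agent would train against.

import numpy as np
import gymnasium as gym
from gymnasium import spaces

class ToySteeringEnv(gym.Env):
    def __init__(self):
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(-10.0, 10.0, shape=(1,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.offset = self.np_random.uniform(-5.0, 5.0, size=1).astype(np.float32)
        return self.offset.copy(), {}

    def step(self, action):
        self.offset = (self.offset + action).astype(np.float32)   # corrector kick moves the beam
        reward = -float(abs(self.offset[0]))                      # closer to centre is better
        terminated = abs(self.offset[0]) < 0.1
        return self.offset.copy(), reward, terminated, False, {}

env = ToySteeringEnv()
obs, info = env.reset(seed=0)
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())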
The ATLAS detector at CERN and its supporting infrastructure form a highly complex system. It covers numerous interdependent sub-systems and requires collaboration across a team of multi-disciplinary experts. The ATLAS Technical Coordination Expert System provides an interactive description of the technical infrastructure and enhances its understanding. It features tools to assess the impact...
In high energy physics, most AI/ML efforts focus on improving the scientific process itself — modeling, classification, reconstruction, and simulation. In contrast, we explore how Large Language Models (LLMs) can accelerate access to the physics by assisting with the broader ecosystem of work that surrounds and enables scientific discovery. This includes understanding complex documentation,...
For over two decades, computing resources in the Worldwide LHC Computing Grid (WLCG) have been based exclusively on the x86 architecture. However, in the near future, heterogeneous non-x86 architectures are expected to make up a significant portion of the resources available to LHC experiments, driven also by their adoption in current and upcoming world-class HPC facilities. In response to this...
Flexible workload specification and management are critical to the success of the CMS experiment, which utilizes approximately half a million cores across a global grid computing infrastructure for data reprocessing and Monte Carlo production. TaskChain and StepChain specifications, responsible for over 95% of central production activities, employ distinct workflow paradigms: TaskChain...
As the HL-LHC prepares to deliver large volumes of data, the need for an efficient data delivery and transformation service becomes crucial. To address this challenge, a cross-experiment toolset—ServiceX—was developed to link the centrally produced datasets to flexible, user-level analysis workflows. Modern analysis tools such as Coffea benefit from ServiceX as the first step in event...
The High Energy Photon Source (HEPS), a new fourth-generation high-energy synchrotron radiation facility, is set to become fully operational by the end of 2025. With its significantly enhanced brightness and detector performance, HEPS will generate over 300 PB of experimental data annually across 14 beamlines in phase I, quickly reaching the EB scale. HEPS supports a wide range of experimental...
In the last decade, the concept of Open Science has gained importance: there is a real effort to make tools and data shareable among different communities, with the goal of making data and software FAIR (Findable, Accessible, Interoperable and Reusable). This goal is shared by several scientific communities, including the Einstein Telescope (ET). ET is the third generation ground-based...
The Jiangmen Underground Neutrino Observatory (JUNO) is an underground 20 kton liquid scintillator detector being constructed in southern China. The JUNO physics program aims to explore neutrino properties, particularly through electron anti-neutrinos emitted from two nuclear power complexes at a baseline of approximately 53 km. Targeting an unprecedented relative energy resolution of 3% at 1...
The LHCb experiment, one of the four major experiments at the Large Hadron Collider (LHC), excels in high-precision measurements of particles that are produced relatively frequently (strange, charmed and bottom hadrons). Key to LHCb's potential is its sophisticated trigger system that enables complete event reconstruction, selection, alignment and calibration in real-time. Through the Turbo...
In this article, we present the High-Performance Output (HiPO) data format developed at Jefferson Laboratory for storing and analyzing data from Nuclear Physics experiments. The format was designed to efficiently store large amounts of experimental data, utilizing modern fast compression algorithms. The purpose of this development was to provide organized data in the output, facilitating...
High-energy physics (HEP) analyses routinely handle massive datasets, often exceeding the available resources. Efficiently interacting with these datasets requires dedicated techniques for data loading and management. Awkward Array is a Python library widely used in HEP to efficiently handle complex, irregularly structured...
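A short example of the kind of jagged-event manipulation Awkward Array is used for in HEP analyses; the field names and threshold are illustrative assumptions.

import awkward as ak

events = ak.Array([
    {"jet_pt": [52.1, 31.7, 12.4], "jet_eta": [0.3, -1.2, 2.1]},
    {"jet_pt": [], "jet_eta": []},
    {"jet_pt": [95.0], "jet_eta": [-0.7]},
])

good_jets = events.jet_pt[events.jet_pt > 30.0]   # per-event selection, no Python loops
n_good = ak.num(good_jets)                        # jagged multiplicities: [2, 0, 1]
leading = ak.firsts(good_jets)                    # leading selected jet pT, or None if empty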
Based on previous experience with parallel event data processing, a Structure-of-Arrays (SoA) layout for objects frequently performs better than Array-of-Structures (AoS), especially on GPUs. However, AoS is widespread in existing code, and in C++, changing the data layout from AoS to SoA requires changing the data structure declarations and the access syntax. This work is repetitive, time...
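A NumPy illustration of the layout difference (conceptual only; the work above targets C++ data structures): an Array-of-Structures expressed as a structured dtype versus a Structure-of-Arrays kept as separate contiguous columns.

import numpy as np

n = 1_000_000

# AoS: x, y, z of each element are interleaved in memory.
aos = np.zeros(n, dtype=[("x", "f4"), ("y", "f4"), ("z", "f4")])

# SoA: each coordinate is its own contiguous array, which vectorises and coalesces better.
soa = {"x": np.zeros(n, dtype="f4"),
       "y": np.zeros(n, dtype="f4"),
       "z": np.zeros(n, dtype="f4")}

r_aos = np.sqrt(aos["x"]**2 + aos["y"]**2)   # strided access over interleaved fields
r_soa = np.sqrt(soa["x"]**2 + soa["y"]**2)   # contiguous access per column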
RNTuple is ROOT's next-generation columnar format, replacing TTree as the primary storage solution for LHC event data.
After 6 years of R&D, the RNTuple format reached the 1.0 milestone in 2024, promising full backward compatibility with any data written from that moment on.
Designing a new on-disk format not only allows for significant improvements in file size and read/write speed,...
For high-energy physics experiments, the generation of Monte Carlo events, and particularly the simulation of the detector response, is a very computationally intensive process. In many cases, the primary bottleneck in detector simulation is the detailed simulation of the electromagnetic and hadronic showers in the calorimeter system.
ATLAS is currently using its state-of-the-art fast...
The accurate simulation of particle showers in collider detectors remains a critical bottleneck for high-energy physics research. Current approaches face fundamental limitations in scalability when modeling the complete shower development process.
Deep generative models offer a promising alternative, potentially reducing simulation costs by orders of magnitude. This capability becomes...
Next-generation High Energy Physics (HEP) experiments face unprecedented computational demands. The High-Luminosity Large Hadron Collider anticipates data processing needs that will exceed available resources, while intensity frontier experiments such as DUNE and LZ are dominated by the simulation of high-multiplicity optical photon events. Heterogeneous architectures, particularly GPU...
The simulation throughput of LHC experiments is increasingly limited by detector complexity in the high-luminosity phase. As high-performance computing shifts toward heterogeneous architectures such as GPUs, accelerating Geant4 particle transport simulations by offloading parts of the workload to GPUs can improve performance. The AdePT plugin currently offloads electromagnetic showers in...