Connecting The Dots 2025
FUKUTAKE Learning Theater
The University of Tokyo
Short url: https://indico.cern.ch/e/CTD_2025
The Connecting The Dots workshop will take place from 11 to 14 November 2025 at the University of Tokyo's Hongo campus, located in central Tokyo. A mini-workshop on emerging techniques in HEP will be held on November 10.
It is the 9th in the series after: Berkeley 2015, Vienna 2016, LAL-Orsay 2017, Seattle 2018, Valencia 2019, virtual 2020, Princeton 2022 and Toulouse 2023.
The main venue is the FUKUTAKE Learning Theater.
Announcements about this workshop can be received by subscribing to an e-group.
A reminder of the CERN code of conduct: "Be excellent to each other".
This workshop is supported by the University of Tokyo-Princeton Partnership, IRIS-HEP, JSPS Core-to-Core Program and ASPIRE (JPMJAP2316) by JST.
-
-
1
Welcome
Speaker: Koji Terashi (University of Tokyo (JP))
-
Session
Convener: Koji Terashi (University of Tokyo (JP))
-
2
The CERN Quantum Technology Initiative: initial results and research perspective on Quantum Computing for HEP
The CERN Quantum Technology Initiative (QTI) was launched in 2020 with the aim of investigating the role that quantum technologies could play within the High Energy Physics (HEP) research program. During this initial exploratory phase, a set of results was gathered, outlining the benefits, constraints and limitations of introducing these technologies in different HEP domains, from advanced sensors for next-generation detectors to computing. These findings have been used to define a longer-term research plan, closely aligned with the technological development of quantum infrastructure and with HEP priorities.
The CERN QTI has now entered its second phase, dedicated to extending and sharing technologies uniquely available at CERN, while boosting development and adoption of quantum technologies in HEP and beyond.
This talk will summarize the experience accumulated over the past years, outlining the main QTI research results, focusing in particular on the field of quantum computing, and provide a perspective on future research directions.
Speaker: Dr Sofia Vallecorsa (CERN)
-
3
Scalable Track Reconstruction at the LHC with QAOA
Reconstructing the trajectories of charged particles as they traverse several detector layers is a key ingredient of event reconstruction at the LHC and virtually any particle physics experiment. The limited bandwidth available, together with the high rate of tracks per second, O($10^{10}$) - where each track consists of a variable number of measurements - makes this problem exceptionally challenging from the computational perspective. With this in mind, Quantum Computing is being explored as a new technology for future detectors, where larger datasets will further complicate this task [1].
Several quantum algorithms have been explored in this regard - e.g., Variational algorithms and HHL [2][3] - offering a heterogeneous set of advantages and disadvantages. In this talk, an extensive study using the Quantum Approximate Optimization Algorithm (QAOA) for track reconstruction at LHC will be presented. This algorithm is focused on finding the ground state for combinatorial problems, thus making it a natural choice. Furthermore, the robustness of QAOA to hardware noise when compared to other algorithms makes it a good candidate for the near-term utility era in Quantum Computing. In this talk, implementations with simplified simulations will be presented, both for QAOA and a modified version of the algorithm that could improve performance in comparison with Quantum annealers as per recent Q-CTRL results [4]. Finally, a complete study of hardware requirements, prospects on improving scalability, and energy consumption for different technologies will also be discussed.
[1] QC4HEP Working Group, A. Di Meglio, K. Jansen, I. Tavernelli, J. Zhang et al., Quantum Computing for High-Energy Physics: State of the Art and Challenges. Summary of the QC4HEP Working Group, PRX Quantum 5 (2024) 3, 037001, arXiv:2307.03236 (2023).
[2] A. Crippa, L. Funcke, T. Hartung, B. Heinemann, K. Jansen, A. Kropf, S. Kühn, F. Meloni, D. Spataro, C. Tüysüz, Y. C. Yap, Quantum Algorithms for Charged Particle Track Reconstruction in the LUXE Experiment, Comput Softw Big Sci 7, 14, arXiv:2304.01690 (2023).
[3] D. Nicotra, M. Lucio Martinez, J. A. De Vries, M. Merk, K. Driessens, R. L. Westra, D. Dibenedetto, D. H. Campora Perez, A quantum algorithm for track reconstruction in the LHCb vertex detector, JINST 18 P11028 (2023).
[4] N. Sachdeva, G. S. Hartnett, S. Maity et al., Quantum optimization using a 127-qubit gate-model IBM quantum computer can outperform quantum annealers for nontrivial binary optimization problems, arXiv:2406.01743v4 (2024).
Speaker: Miriam Lucio Martinez (Univ. of Valencia and CSIC (ES))
-
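As a rough classical-simulation sketch of the QAOA idea described above (a toy 2-variable QUBO with invented weights, not the track reconstruction Hamiltonian of the talk), one can evolve the uniform superposition through one phase-separation layer and one mixer layer, then grid-search the two angles:

```python
import numpy as np

# Energy of a toy 2-variable QUBO; minimum -1 at x = (1, 1).
def energy(bits):
    x0, x1 = bits
    return x0 + x1 - 3 * x0 * x1

basis = [(b & 1, (b >> 1) & 1) for b in range(4)]
C = np.array([energy(b) for b in basis], dtype=float)   # diagonal cost operator

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rx(theta):
    # Single-qubit rotation exp(-i * theta/2 * X)
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X

plus = np.full(4, 0.5, dtype=complex)                   # |+>|+> start state

def qaoa_state(gamma, beta):
    psi = np.exp(-1j * gamma * C) * plus                # phase separation
    return np.kron(rx(2 * beta), rx(2 * beta)) @ psi    # transverse-field mixer

def expectation(gamma, beta):
    psi = qaoa_state(gamma, beta)
    return float(np.real(np.vdot(psi, C * psi)))

grid = np.linspace(0, np.pi, 40)
g_best, b_best = min(((g, b) for g in grid for b in grid),
                     key=lambda gb: expectation(*gb))
best_exp = expectation(g_best, b_best)
probs = np.abs(qaoa_state(g_best, b_best)) ** 2         # sampling distribution
```

A depth-1 circuit on two variables is of course trivially simulable; the sketch only illustrates the cost/mixer alternation and parameter search that the hardware runs at scale.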
4
Quantum-Enhanced Graph Neural Networks for Particle Tracking in High Energy Physics
Graph Neural Networks (GNNs) have emerged as powerful tools for particle tracking in High Energy Physics (HEP), effectively modeling the complex relational structure of detector hits. Recent progress in quantum computing raises the possibility that quantum circuits, leveraging entanglement and superposition, could enhance GNNs by capturing intricate patterns in tracking data. However, the practical advantages of quantum-enhanced GNNs in HEP applications remain largely unexplored.
In this work, we explore a hybrid GNN architecture that integrates variational quantum circuits for edge classification in particle tracking tasks. Our model embeds variational quantum layers within a multilayer perceptron framework, systematically varying the number of qubits and quantum layers to evaluate scalability and expressive power. We benchmark performance against a classical GNN with an identical architecture, replacing quantum components with classical counterparts to ensure a fair comparison.
Performance is evaluated in terms of tracking accuracy, F1 score, AUC-ROC, computational efficiency (training and inference time), and parameter count. Preliminary results suggest that quantum layers can outperform classical counterparts in scenarios involving sparse tracking graphs, albeit at increased computational cost. These findings offer new insights into the potential of quantum machine learning for HEP experiments, and highlight future directions for scalable quantum algorithms in particle tracking and beyond.
Speaker: Santosh Parajuli (Univ. Illinois at Urbana Champaign (US))
-
2
-
10:40
Coffee
-
Session
Convener: Ryu Sawada (University of Tokyo (JP))
-
5
QC simulation (tentative title)
Speaker: Yutaro Iiyama (University of Tokyo (JP))
-
6
Track reconstruction at collider experiments using quantum-inspired simulated annealing
Quantum computing technologies are emerging as promising tools to enhance computational speed and reduce resource demands in high-energy accelerator experiments. In particular, GPU-based simulators inspired by quantum annealing principles have reached a practical level of development and are now being explored for their potential in track reconstruction tasks.
In this study, we employed a fully connected bit model capable of supporting 262k-bit inputs, utilizing a GPU-based quantum annealing simulator. We formulated a Hamiltonian using a QUBO (Quadratic Unconstrained Binary Optimization) model, taking doublets (pairs of hits) as inputs. This Hamiltonian was designed such that its minimum corresponds to the combinations of doublets that best represent actual particle tracks. The simulator then searched for the combinations that minimize the Hamiltonian.
This talk presents the optimization of the Hamiltonian and a performance evaluation of the track reconstruction using Monte Carlo simulation samples from the LHC-ATLAS experiment. We also assess the applicability of this approach to the direct search for long-lived particles by evaluating the reconstruction performance for exceptionally short tracks composed of only four hits from the innermost detector layers.
Speaker: Daiya Akiyama (Waseda University (JP))
-
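A minimal classical sketch of the QUBO construction and annealing search described above (toy couplings and a plain simulated-annealing loop, not the 262k-bit GPU simulator):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy QUBO over 6 doublet candidates: negative couplings reward doublets
# that chain into a smooth track, positive couplings penalize doublets
# sharing a hit. All weights here are invented for illustration.
Q = np.zeros((6, 6))
Q[0, 1] = Q[1, 2] = -2.0        # doublets 0-1-2 chain into one track
Q[3, 4] = Q[4, 5] = -2.0        # doublets 3-4-5 chain into another
Q[0, 3] = Q[2, 5] = +4.0        # conflicting pairs (shared hits)
Q = Q + Q.T                     # symmetrize

def H(b):
    return float(b @ Q @ b)     # QUBO energy; low values select real tracks

def anneal(n_steps=2000, T0=5.0):
    b = rng.integers(0, 2, size=6).astype(float)
    for step in range(n_steps):
        T = T0 * (1 - step / n_steps) + 1e-3            # linear cooling
        i = rng.integers(6)
        cand = b.copy()
        cand[i] = 1 - cand[i]                           # single bit flip
        if H(cand) < H(b) or rng.random() < np.exp((H(b) - H(cand)) / T):
            b = cand
    return b

b = anneal()
```

With only six variables the ground state can be checked by brute force; the point of the annealer is that the same Metropolis-style search scales to the fully connected problem sizes quoted above.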
7
Quantum Chebyshev Probabilistic Models for Fragmentation Functions
We propose a quantum protocol for efficiently learning and sampling multivariate probability distributions that commonly appear in high-energy physics. Our approach introduces a bivariate probabilistic model based on generalized Chebyshev polynomials, which is (pre-)trained as an explicit circuit-based model for two correlated variables, and sampled efficiently with the use of quantum Chebyshev transforms. As a key application, we study the fragmentation functions~(FFs) of charged pions and kaons from single-inclusive hadron production in electron-positron annihilation. We learn the joint distribution for the momentum fraction $z$ and energy scale $Q$ in several fragmentation processes. Using the trained model, we infer the correlations between $z$ and $Q$ from the entanglement of the probabilistic model, noting that the developed energy-momentum correlations improve model performance. Furthermore, utilizing the generalization capabilities of the quantum Chebyshev model and extended register architecture, we perform a fine-grid multivariate sampling relevant for FF dataset augmentation. Our results highlight the growing potential of quantum generative modeling for addressing problems in scientific discovery and advancing data analysis in high-energy physics.
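As a purely classical illustration of the Chebyshev basis underlying the model (the talk's model is a quantum circuit; this sketch only shows a polynomial expansion of a hypothetical FF-like shape in the rescaled momentum fraction):

```python
import numpy as np
from numpy.polynomial import chebyshev as Cheb

# Toy fragmentation-like shape in the (rescaled) momentum fraction z,
# expanded in the Chebyshev basis. Shape and degree are assumptions.
z = np.linspace(-1, 1, 200)
f = (1 - z) ** 3 * (1 + z) ** 0.5      # hypothetical FF-like shape
coef = Cheb.chebfit(z, f, deg=12)      # expansion coefficients
approx = Cheb.chebval(z, coef)         # reconstructed shape
err = float(np.max(np.abs(approx - f)))
```

In the quantum version, the expansion coefficients live in circuit parameters and sampling proceeds through quantum Chebyshev transforms rather than direct evaluation.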
Speaker: Dr Sofia Vallecorsa (CERN)
-
5
-
12:40
Lunch
-
Session
Convener: David Lange (Princeton University (US))
-
8
LLM in HEP (tentative title)
Speaker: Masahiro Morinaga (University of Tokyo (JP))
-
9
Token-Based Transformer Models for Pattern Recognition on Point Clouds
Building on the success of TrackingBERT (arXiv:2402.10239) and TrackingSorter (arXiv:2407.21290), we propose a unified approach to address the track finding problem and extend beyond it. Our method integrates the latent representation of detector hits learned by TrackingBERT as additional features in the TrackingSorter algorithm. Furthermore, we replace the detector module ID-based tokenization scheme with a machine learning-driven scheme that better captures the spatial information. Results on the full-scale TrackML dataset will be presented, demonstrating improvements in both physics and computing performance. We also highlight the extensibility of this approach to simulating particle interactions with the detector and incorporating calorimeter data for holistic reconstruction.
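A toy sketch of what a hand-crafted, module-ID-style tokenization of hits looks like (binning and vocabulary sizes are invented for illustration; the learned tokenization proposed above replaces exactly this kind of scheme):

```python
import numpy as np

# Each hit becomes one discrete token; a track is then a flat token
# sequence that a transformer can model like text. Bin counts are
# illustrative assumptions, not the TrackingBERT implementation.
n_r_bins, n_phi_bins, n_z_bins = 8, 16, 8

def hit_to_token(r, phi, z, r_max=1.0, z_max=3.0):
    """Map a 3D hit to a single integer token by spatial binning."""
    ir = min(int(r / r_max * n_r_bins), n_r_bins - 1)
    iphi = int((phi % (2 * np.pi)) / (2 * np.pi) * n_phi_bins) % n_phi_bins
    iz = min(int((z + z_max) / (2 * z_max) * n_z_bins), n_z_bins - 1)
    return (ir * n_phi_bins + iphi) * n_z_bins + iz

# A toy 3-hit track as (r, phi, z) triples
track = [(0.1, 0.5, 0.2), (0.3, 0.6, 0.5), (0.5, 0.7, 0.9)]
tokens = [hit_to_token(*h) for h in track]
```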
Speaker: Xiangyang Ju (Lawrence Berkeley National Lab. (US)) -
10
Semi-Supervised Transfer Learning with Convolutional Autoencoders for Hybrid Pixel Detectors
Hybrid pixel detectors such as Timepix3 and Timepix4 enable pixel-level resolution of individual particle interactions, where each event manifests as a cluster or track spanning multiple pixels. Analyzing these clusters allows for the estimation of key particle parameters, including type, initial energy, and angle of incidence. However, such ground-truth parameters are typically unavailable during data acquisition, necessitating the use of simulations, which are often computationally intensive and may fail to capture detector imperfections and noise. A central challenge is thus leveraging unlabeled experimental data alongside simulations for training. To address this, we propose a semi-supervised approach that pretrains convolutional autoencoders (specifically U-Net architectures with EfficientNet backbones) on unlabeled measured data. These models are subsequently fine-tuned on labeled simulations for classification and regression tasks. To interpret the learned latent representations, we employ t-distributed Stochastic Neighbor Embedding (t-SNE) and Uniform Manifold Approximation and Projection (UMAP). Finally, we show that self-supervised pretraining improves the classification of simulated protons and electrons and enhances incidence angle regression within the Space Application of Timepix Radiation Monitor (SATRAM) context.
Speaker: Mr Tomáš Čelko (Charles University) -
11
ML-Assisted Tracking in the ATLAS Muon Spectrometer
The High Luminosity Large Hadron Collider (HL-LHC), scheduled to begin operation after 2030, will increase the number of proton-proton collisions per event from approximately 60 to up to 200. This rise in interaction density will substantially elevate the occupancy within the ATLAS Muon Spectrometer, necessitating more efficient and robust real-time data processing strategies for the Event Filter.
To address these challenges on the algorithmic side, we present a comparative study of state-of-the-art graph neural network machine learning (ML) architectures designed to reject hits from background sources such as hadronic punch-throughs and gamma radiation within the ATLAS experiment’s Muon Spectrometer. These architectures are designed to assist baseline reconstruction algorithms while maintaining or improving physics performance.
A central contribution of our work is the use of a novel Fourier series-inspired encoding scheme for continuous input features. Across all tested ML architectures, our Fourier encoding yields improvements in noise rejection and achieves lower computational cost compared to traditional ML encoding types such as linear and other hybrid methods.
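A minimal sketch of a Fourier-series style encoding of a continuous input feature (the harmonic count and normalization here are assumptions, not the scheme used in the study):

```python
import numpy as np

# Replace a raw scalar feature with its projection onto a truncated
# Fourier basis, giving the network a periodic, multi-scale view of it.
def fourier_encode(x, n_harmonics=4):
    """Map x in [0, 1] to [sin(2*pi*k*x), cos(2*pi*k*x)] for k = 1..n."""
    k = np.arange(1, n_harmonics + 1)
    ang = 2 * np.pi * np.outer(x, k)
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)

feats = fourier_encode(np.array([0.0, 0.25, 0.5]))   # shape (3, 8)
```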
In addition, we show first results on the use of Masked Transformer models for end-to-end muon tracking within the Muon Spectrometer as an alternative approach to the baseline reconstruction. These models are inspired by recent breakthroughs in computer vision, in particular object detection.
Speaker: Jonathan Renusch (CERN | Technical University of Munich (TUM))
-
8
-
15:55
Coffee
-
Session
Convener: Peter Elmer (Princeton University (US))
-
12
Beyond-CMOS Systems for Fast Machine Learning in Physics
The implementation of neural networks and artificial intelligence on hardware accelerators is a key element for the development of the trigger strategy of future high-energy physics detectors. In parallel, the development of hardware technologies capable of maximizing AI performance and adapting to its computational needs is crucial, particularly in scenarios requiring the efficient processing of large data volumes with low latency and optimized energy consumption.
Memristors represent a new, promising technology to develop efficient in-memory computing for on-detector data reduction, feature extraction, and triggering.
They can form the basis of neuromorphic computing architectures, enabling the highly efficient execution of fundamental neural network operations, such as matrix multiplications. By combining memory and computation within the same device, memristors significantly reduce both latency and energy consumption compared to traditional CPU- or GPU-based architectures.
The presentation will provide an overview of memristor technology and its key characteristics, highlighting the potential of this approach for enhancing data processing systems in advanced experimental scenarios. Ongoing developments will be presented on a hybrid architecture based on Field-Programmable Gate Arrays (FPGAs) offloading computational tasks to memristive crossbars. Preliminary results of the Beyond-CMOS Systems for Fast Machine Learning in Physics (BEEM) project will be discussed, illustrating the possible applications of this solution in graph-neural-network-based tracking problems for next-generation experiments.
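The in-memory multiply-accumulate idea can be sketched numerically: weights are stored as a differential pair of non-negative conductances, and the column currents of the crossbar implement the dot products (toy values, ignoring device non-idealities):

```python
import numpy as np

# Toy model of matrix-vector multiplication on a memristive crossbar:
# inputs are applied as row voltages V, weights are conductances G, and
# each column current is a dot product by Kirchhoff's current law.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))               # layer weights to offload

# A differential pair of non-negative conductances encodes signed weights
G_pos = np.clip(W, 0, None)
G_neg = np.clip(-W, 0, None)

def crossbar_matvec(v):
    """One analog multiply-accumulate: column currents, then subtraction."""
    return G_pos.T @ v - G_neg.T @ v      # equals W.T @ v

v = rng.normal(size=4)                    # input activations (row voltages)
out = crossbar_matvec(v)
```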
Speaker: Dr Valerio Ippolito (INFN Sezione di Roma (IT)) -
13
GPT-like transformer model for silicon tracking detector simulation
Simulating physics processes and detector responses is essential in high energy physics but accounts for significant computing costs. Generative machine learning has been demonstrated to be potentially powerful in accelerating simulations, outperforming traditional fast simulation methods. While efforts have focused primarily on calorimeters, initial studies have also been performed on silicon tracking detectors.
This work employs a GPT-like transformer architecture in a fully generative way, ensuring full correlations between individual hits. Drawing parallels with text generation, hits are represented as a flat sequence of feature values. The resulting tracking performance, evaluated on the Open Data Detector, is comparable with the full simulation.
Speaker: Tadej Novak (Jozef Stefan Institute (SI)) -
14
DNN-based particle flow and jet flavor tagging for Higgs factories
Electron-positron Higgs factories, such as the ILC and FCC-ee, are next-generation collider projects that aim to reveal the properties of the Higgs boson and other particles in great detail and to observe possible BSM effects. The proposed detectors are designed to maximize the information obtained from incident particles with highly granular detector elements. DNN-based reconstruction algorithms are critical to derive the properties of particles from the signals of such detectors.
In this talk I will present two studies on ILD full detector simulation samples. One is a DNN-based particle flow algorithm, which derives the properties of charged and neutral particles from detector hits and tracks. GravNet is utilized as the main component of the network, and an object condensation loss function is adopted to calculate the output coordinates and condensation points. A detailed performance comparison with a non-ML algorithm (PandoraPFA) will be shown. The other is a jet flavor tagging algorithm based on the Particle Transformer, which already gives an almost ten times better rejection ratio than the previous BDT-based algorithm. A comparison between full and fast simulation is discussed, especially the dependence on sample size. The impact on physics studies will also be reviewed.
Speaker: Taikan Suehara (ICEPP, The University of Tokyo (JP)) -
15
Rapid ML inference using logic gate neural nets
Fast machine learning (ML) inference is of great interest in the HEP community, especially in low-latency environments like triggering. Faster inference often unlocks the use of more complex ML models that improve physics performance, while also enhancing productivity and sustainability. Logic gate networks (LGNs) currently achieve some of the fastest inference times for standard image classification tasks. In contrast to traditional neural networks, each node of an LGN consists of a learnable logic gate. While training is generally slow, inference uses a network that is implicitly pruned and discretized. LGNs are excellent candidates for FPGA implementations as they consist of logic gates, but they are also suitable for GPUs. In this work, we present our implementation of logic gate convolutional neural nets. We apply them to open data for anomaly detection, similar to that used at the CMS Level-1 Trigger by the CICADA collaboration. We demonstrate that LGNs offer comparable physics performance to existing methods while promising a much faster inference speed. This opens the door to broader applications of LGNs in fast ML workflows across HEP.
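A minimal sketch of a single differentiable logic-gate node (a small assumed gate subset rather than a full 16-gate set; illustrative, not the implementation presented in the talk):

```python
import numpy as np

# One node of a logic gate network, in a soft (training) and a hard
# (inference) version. The four-gate subset is an illustrative assumption.
def gates(a, b):
    # Probabilistic relaxations of AND, OR, XOR, NAND on inputs in [0, 1]
    return np.array([a * b, a + b - a * b, a + b - 2 * a * b, 1 - a * b])

def soft_node(a, b, logits):
    p = np.exp(logits - logits.max())
    p /= p.sum()                          # softmax over candidate gates
    return float(p @ gates(a, b))         # training-time relaxed output

def hard_node(a, b, logits):
    # Inference: the node collapses to its most probable gate
    return float(gates(a, b)[int(np.argmax(logits))])

logits = np.array([0.1, 0.2, 3.0, 0.0])   # this node has learned ~XOR
```

After training, only the argmax gate per node survives, which is why the discretized network maps so directly onto FPGA logic elements.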
Speaker: Liv Helen Vage (Princeton University (US))
-
12
-
1
-
-
16
Welcome
Speaker: Masaya Ishino (University of Tokyo (JP))
-
17
Introduction
Speaker: Ryu Sawada (University of Tokyo (JP))
-
Session
Convener: Alexis Vallier (L2I Toulouse, CNRS/IN2P3, UT3)
-
18
A Fully GPU-Based Track Reconstruction Pipeline for HEP Experiments
The G-200 pipeline, as it is known internally to ATLAS, represents a major milestone in the evolution of online track reconstruction for the experiment, enabling the entire reconstruction chain at trigger level to be executed on GPU architectures. G-200 is the ATLAS ITk-specific implementation of the Traccc framework, which is part of the ACTS GPU R&D project and is designed to be detector agnostic. Traccc leverages state-of-the-art, GPU-optimized libraries (Detray, Vecmem, and Covfie) to perform clusterization, seeding, track finding, and fitting in a massively parallel fashion.
Speaker: Neza Ribaric (Duke University (US)) -
19
Track fitting at the full LHC collision frequency: Design and performance of the GPU-based Kalman Filter at the LHCb experiment
The LHCb experiment at the Large Hadron Collider (LHC) operates a fully software-based trigger system that processes proton-proton collisions at a rate of 30 MHz, reconstructing both charged and neutral particles in real time. The first stage of this trigger system, running on approximately 500 GPU cards, performs a track pattern recognition to reconstruct particle trajectories with low latency.
Starting with the 2025 data-taking period, a novel approach has been introduced for precise track parameter estimation: a custom Kalman Filter, highly optimized for GPU execution, is now employed to fit tracks at the full collision rate of 30 MHz. This implementation leverages dedicated parametrizations of material interactions and the magnetic field to meet stringent throughput requirements.
This is the first time that such high-precision track fitting has been performed at the full LHC collision frequency in any experiment. The result is a marked improvement in momentum and mass resolution, increased robustness against detector misalignments, and a reduced rate of fake tracks. Moreover, this development represents a critical step toward future full event reconstruction at the LHC collision rate.
In this talk, we will outline the requirements for real-time track fitting in LHCb's first-level trigger, detail the implementation of the Kalman Filter on GPUs, and present a comprehensive performance evaluation using the 2025 data set.
Speaker: Michel De Cian (The University of Manchester (GB))
-
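For illustration, a textbook scalar Kalman filter step of the kind iterated over hits in a track fit (the real fit propagates a multi-dimensional track state with parametrized material and magnetic-field effects; all numbers here are toy):

```python
import numpy as np

# Scalar Kalman filter: state x = track position, P = its variance.
# H maps state to measurement, R is hit resolution, Q is process noise.
def kalman_step(x, P, z, H=1.0, R=0.01, Q=1e-4):
    """One predict-then-update cycle with identity transport (sketch)."""
    P = P + Q                      # predict: inflate variance by process noise
    K = P * H / (H * P * H + R)    # Kalman gain
    x = x + K * (z - H * x)        # update with the measurement residual
    P = (1 - K * H) * P            # shrink variance after the update
    return x, P

x, P = 0.0, 1.0                    # vague prior
for z in [0.11, 0.10, 0.12, 0.09]: # noisy hit positions along the track
    x, P = kalman_step(x, P, z)
```

Each hit tightens the estimate; the GPU implementation runs millions of such update chains in parallel to reach the 30 MHz event rate.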
20
Improvements of the ALICE GPU TPC tracking and GPU framework for online and offline processing of Run 3 Pb-Pb data
ALICE is the dedicated heavy-ion experiment at the LHC at CERN and records lead-lead collisions at a rate of up to 50 kHz in LHC Run 3. To cope with such collision and data rates, ALICE uses a GEM TPC with continuous readout and a GPU-based online computing farm for data compression. Operating the first GEM TPC of this size with large space-charge distortions due to the high collision rate has many implications, both anticipated and unanticipated, for the track reconstruction algorithm. With real Pb-Pb data available, the TPC tracking algorithm needed to be refined, particularly with respect to improved cluster attachment in the inner TPC region. In order to use the online computing farm efficiently for offline processing when there is no beam in the LHC, ALICE is currently running TPC tracking on GPUs in offline processing as well. For the future, ALICE aims to run more computing steps on the GPU and to use other GPU-enabled resources besides its online computing farm. These aspects, along with better possibilities for performance optimizations, led to several improvements of the GPU framework and GPU tracking code, particularly using Run Time Compilation (RTC). The talk will give an overview of the improvements to the ALICE tracking code, mostly based on experience from reconstructing real Pb-Pb data with high TPC occupancy. In addition, an overview of the online and offline processing status on GPUs will be given, along with an overview of how RTC improves the ALICE tracking code and GPU support.
Speaker: David Rohr (CERN)
-
18
-
10:50
Coffee
-
Session
Convener: Andreas Salzburger (CERN)
-
21
FPGA-accelerated track reconstruction and clustering using classical and machine-learning approaches for the ATLAS Event Filter
As part of the ATLAS Phase-II upgrade, the Event Filter (EF) Tracking project is exploring heterogeneous computing systems. This EF system will receive events selected by the L0 trigger and process the data from the ATLAS detector at a maximum rate of 1 MHz. FPGA-based implementations are being developed for various stages of the track reconstruction algorithms, such as pixel clustering, strip clustering, local-to-global coordinate conversion, the Hough transform, and duplicate removal. We have developed a pixel clustering kernel and benchmarked it with an AMD Alveo U250 acceleration card. This kernel is implemented as RTL code. It processes input pixel hits from an AXI-stream interface, generates pixel clusters, and computes their centroids. Data transfer between the kernel and the host is handled through global memory. Compared to CPU-based clustering, the FPGA implementation offers faster processing and better power efficiency. The pixel clustering kernel has been validated through both hardware emulation and on the U250 cards. It has been integrated into a pipeline with other kernels for track reconstruction.
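A CPU reference sketch of the task the kernel performs: grouping 8-connected pixel hits into clusters and computing their centroids (toy hit list; the streaming FPGA implementation is organized very differently):

```python
import numpy as np

# Group 8-connected pixel hits via flood fill, then compute centroids.
def cluster_pixels(hits):
    hits = set(hits)
    clusters = []
    while hits:
        stack = [hits.pop()]          # seed a new cluster
        cluster = []
        while stack:
            x, y = stack.pop()
            cluster.append((x, y))
            for dx in (-1, 0, 1):     # visit the 8 neighbours
                for dy in (-1, 0, 1):
                    n = (x + dx, y + dy)
                    if n in hits:
                        hits.remove(n)
                        stack.append(n)
        clusters.append(cluster)
    return clusters

hits = [(0, 0), (0, 1), (1, 1), (5, 5), (5, 6)]   # two toy clusters
clusters = cluster_pixels(hits)
centroids = [tuple(np.mean(c, axis=0)) for c in clusters]
```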
Speaker: Julian Wollrath (CERN) -
22
The Tiny Triplet Finder - a low silicon resource track finding scheme and its test implementation in FPGA
In high-energy physics experiment trigger systems, track segment seeding is a resource-intensive function. The primary reason lies in the high computational complexity of the segment-finding process: O(n³) in software implementations using nested loops, and O(n) × O(N²) in typical FPGA implementations, where n is the number of hits per detector layer in an event, and N is the number of bins within the coincidence search range. As Moore's Law approaches its physical limits, simply piling up silicon resources is becoming less viable. Instead, reducing computational complexity, akin to how the FFT replaced the direct DFT, deserves serious consideration.
The Tiny Triplet Finder is a track segment recognition scheme that groups three or more hits satisfying a constraint, such as forming a straight line in the non-bend view or a circular arc in the barrel region of a solenoidal magnetic field (passing through the z-axis). It achieves the O(n³) segment-finding function in O(n) time (specifically, in 2n + 1 clock cycles), enabling applications that require real-time or online track finding. Key features of the Tiny Triplet Finder include:
-
Extremely low silicon resource usage:
In FPGA implementations, the logic element usage scales as O(N × log N), significantly reduced from the O(N²) of typical conventional track segment recognition designs. -
True triplet coincidence with a wider search range:
Due to its minimal resource consumption, the Tiny Triplet Finder eliminates the need for a preliminary "pairing" stage. This allows users to implement a relatively wide coincidence search range, unlike typical approaches, which must restrict the range to reduce the number of fake pairs. The wider search range enables useful capabilities such as finding low-pₜ tracks. -
Support for higher-order coincidences:
While the term "triplet" refers to the minimum number of hits required to confidently identify a track, the scheme is not limited to three hits. In practice, it can handle quadruplet, quintuplet, or higher-order coincidences when more detector layers are available, improving fake track rejection in high-luminosity conditions. -
Not restricted by detector geometries:
The Tiny Triplet Finder functions as a general-purpose coincidence search engine adaptable to various detector geometries. -
Suitable for FPGA implementation:
The core of the Tiny Triplet Finder is a wide-range, single-clock-cycle shifter that performs the necessary functions and consumes only O(N × log N) resources. This shifter can be implemented using standard FPGA components such as multipliers and block RAMs, reducing the need for logic elements and further conserving silicon area and power.
As a proof of concept, a 3D track segment seeding engine based on the Tiny Triplet Finder has been implemented and tested on a low-cost FPGA device. This engine preselects and groups detector hits (or stubs) for input into a subsequent track-fitting stage (e.g., a Kalman filter). The engine uses a Hough transform space in the r-z view and the Tiny Triplet Finder in the r-φ view to apply full 3D constraints. Organized as a pipeline, it processes each hit in a single clock cycle.
Moreover, the Tiny Triplet Finder serves as a general-purpose coincidence detection algorithm. To demonstrate its versatility, we tested its track-segment finding performance on two distinct detector geometries. In a collider-style barrel-layer geometry, we analyzed fake segment rates under 3D (r-φ + r-z) and 2D (r-φ or r-z only) configurations for high hit-multiplicity events (>4000 hits per layer in the barrel region). In a second geometry with strip plane layers that include timing information, we studied both real and fake coincidences, with and without timing ("3D" or "2D"), at various hit multiplicities. The seeding engine is capable of processing up to 112 hits (real and fake combined) per layer per event in 225 clock cycles.
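For contrast, the naive O(n³) triplet search that the scheme avoids can be sketched in a few lines (equally spaced layers, one transverse coordinate per hit, and an assumed straight-line tolerance):

```python
# Naive O(n^3) triplet search of the kind the Tiny Triplet Finder replaces:
# three nested loops over hits in three layers, keeping triplets that are
# consistent with a straight line. Tolerance is an assumed parameter.
def find_triplets(layer1, layer2, layer3, tol=0.1):
    triplets = []
    for a in layer1:
        for b in layer2:
            for c in layer3:
                # For equally spaced layers, a straight line means the
                # middle hit sits at the midpoint of the outer two.
                if abs((a + c) / 2 - b) < tol:
                    triplets.append((a, b, c))
    return triplets

found = find_triplets([1.0, 4.0], [2.0, 9.0], [3.0, 5.0])
```

The Tiny Triplet Finder achieves the same coincidence in O(n) clock cycles by replacing the nested loops with shifted bit-array matching.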
Speaker: Dr Jinyuan Wu (Fermi National Accelerator Lab. (US)) -
-
23
An ultrafast FPGA fitter for the LHCb Downstream Tracker
After an intensive R&D programme, LHCb approved the construction of the Downstream Tracker (DWT): a system based on the ''artificial retina'' architecture that pre-reconstructs tracks in the SciFi (the detector downstream of the magnet) at readout level during Run 4. Running before any trigger level, it has to process events at the average LHC crossing rate of 30 MHz.
The ''artificial retina'' is an extremely parallel tracking architecture that ensures low-latency and high-throughput when implemented on FPGAs. Pattern recognition is performed by a set of elemental processing units (cells) specialised in reconstructing tracks near predefined reference tracks. In the DWT implementation, each cell with a track candidate returns as output a fixed-length set of hits that potentially belong to the track.
However, in high-density applications, the output of this ultrafast parallel stage may not be sufficient for effective pattern recognition, or a more precise parameter evaluation may be necessary as input to further processing. The addition of a fitting stage is therefore necessary, but matching the speed of the fully parallel cell-based pattern-finding stage is a challenge: the fitter stage has to cope with a track candidate rate of $\mathcal{O}(30$ GHz$)$ while keeping the latency below 1 $\mu$s. In this talk we will discuss how we implemented on FPGA a fully pipelined $\chi^2$ fitter for the DWT that picks the right hit combination from each set, or entirely rejects the track candidate.
This approach keeps the combinatorial nature of the pattern recognition under control and achieves an $\mathcal{O}(n)$ time complexity, a feature extremely appealing for the LHCb ''Upgrade-II'' (Run 5), which will collect data at a luminosity of $\mathcal{L} = 1.5\times10^{34}\,\mathrm{cm}^{-2}~\mathrm{s}^{-1}$, a factor of 7.5 larger than in Run 4.
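A software sketch of the combination-picking $\chi^2$ fit described above (straight-line fit, toy hit sets, assumed cut value; the FPGA version is fully pipelined and far simpler arithmetically):

```python
import numpy as np
from itertools import product

# For each fixed-length candidate hit set returned by a retina cell, fit
# a straight line to every combination (one hit per layer) and keep the
# best one, or reject the candidate if even the best chi^2 fails the cut.
def best_combination(layers_x, layers_y, chi2_max=5.0):
    best = None
    A = np.vstack([layers_x, np.ones_like(layers_x)]).T   # line design matrix
    for combo in product(*layers_y):
        y = np.array(combo)
        coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
        chi2 = float(res[0]) if res.size else 0.0         # residual sum of squares
        if best is None or chi2 < best[0]:
            best = (chi2, combo, coef)
    return best if best is not None and best[0] < chi2_max else None

layers_x = np.array([1.0, 2.0, 3.0])                      # layer positions
cand = best_combination(layers_x, [[1.0, 5.0], [2.1], [3.0, 9.0]])
```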
Speaker: Federico Lazzari (Universita di Pisa & INFN Pisa (IT)) -
24
A Geometry Agnostic Heterogenous Framework for Track Reconstruction for HEP Experiments
The future development projects for the Large Hadron Collider towards the HL-LHC will bring steady increases in nominal luminosity, with the ultimate goal of reaching a peak luminosity of $5 \cdot 10^{34} cm^{−2} s^{−1}$ for the ATLAS and CMS experiments. This rise in luminosity will directly result in an increased number of simultaneous proton collisions (pileup), up to 200, which will pose new challenges for track reconstruction in such a dense environment.
In response to these challenges, many experiments have started rewriting an increasing fraction of their track reconstruction software to run on heterogeneous architectures. While very successful in some cases, most of the time these efforts have stayed confined to single experiment projects.
In this work we will show the potential of a single standalone software package, running on multiple backends (CPUs, NVIDIA GPUs and AMD GPUs), aimed at reconstructing the tracker detectors of multiple HEP experiments with a cylindrical geometry. We will discuss both the physics and the computational performance for different detectors.
This represents the first step towards a single standalone tool capable of carrying out the reconstruction of a model detector for the HL-LHC by leveraging heterogeneous resources: a detector defined solely by its constituent elements, namely a silicon tracker, at least one calorimeter and a muon detector.
Speaker: Adriano Di Florio (CC-IN2P3)
-
21
-
13:00
Lunch
-
SessionConvener: Markus Elsing (CERN)
-
25
Optimization and Outer Tracker Extension of the Cellular Automaton Algorithm for CMS Phase-2 Tracking
The Phase-2 Upgrade for the CMS experiment at the HL-LHC brings with it new and improved detectors, including a new tracker. It will provide the means to reconstruct higher-quality tracks in the pixel detector, especially in the endcaps (pseudorapidity |η| ≳ 2), by recording hits in many more layers than is currently possible. The Phase-2 CMS tracking at the HLT currently plans to take advantage of complementary methods to reconstruct tracks in the inner and outer tracker, with the former using a Cellular Automaton (CA) algorithm for pattern recognition, producing tracks with at least four hits, and the latter relying on Line Segment Tracking (LST) together with mkFit, a CMS-developed vectorizable and parallel alternative to the traditional Combinatorial Kalman Filter (CKF). This contribution focuses on the CA algorithm, its optimization for Phase 2 and, in particular, its extension to the first layers of the outer tracker. This extension aims to improve tracks in the barrel region of the pixel detector (pseudorapidity |η| ≲ 2), where more tracks are expected to have only three hits, while avoiding the significant drawback of increased fake and duplicate tracks produced by other attempts to recoup efficiency.
Speaker: Jan Schulz (Rheinisch Westfaelische Tech. Hoch. (DE)) -
26
TICL: A Reconstruction Framework for the CMS Phase-2 High-Granularity Calorimeter Endcap
The High-Luminosity LHC (HL-LHC) era presents new challenges for the CMS detector. To address them, the endcap calorimeters will be replaced with a High-Granularity Calorimeter (HGCAL) that provides exceptional spatial resolution and precise timing. A new reconstruction framework, The Iterative Clustering (TICL), is being developed in CMS Software (CMSSW) to exploit HGCAL’s features and integrate information from the Tracker and MIP Timing Detector. TICL's goal is to mitigate dense pile-up and deliver a comprehensive event interpretation.
TICL is designed for heterogeneous computing, using the Alpaka performance portability library to enable massive parallelism. It processes hundreds of thousands of energy deposits with specialized clustering algorithms that reduce complexity while preserving crucial physics information. Pattern recognition focuses on efficient 3D shower reconstruction, keeping pile-up contamination low. An additional linking step recovers fragmented clusters, with targeted algorithms for electrons, photons, and hadrons. Final charged candidates are formed by linking tracks and 3D clusters, leveraging timing from both HGCAL and MTD. This presentation will showcase TICL's design, physics performance, and computational strategies for the HL-LHC.
Speaker: Felice Pantaleo (CERN) -
27
Muon Tracking at the CMS High-Level Trigger for the HL-LHC Upgrade
The High-Luminosity Upgrade for the LHC is rapidly approaching and the CMS experiment is undergoing fundamental changes to take advantage of the new physics possibilities offered by the collider’s upgrade. In this context, the event reconstruction, both offline and at the High-Level Trigger (HLT), is undergoing significant changes to fully exploit the detectors’ upgrades while aiming to maintain remarkable performance under harsh data-taking conditions: roughly 200 proton-proton collisions take place at every HL-LHC bunch crossing. This contribution focuses on the tracking of muons at HLT. As one of the pillars of triggering at CMS, the online muon reconstruction exploits measurements in all sub-detectors, from the inner tracker to the dedicated muon system. The entire reconstruction workflow will be discussed, highlighting the changes put into effect for the Upgrade, with a particular focus on the usage of novel detectors and the possibilities offered by heterogeneous event reconstruction at the CMS HLT.
Speaker: Luca Ferragina (Universita Di Bologna (IT))
-
25
-
15:50
Coffee
-
SessionConvener: David Lange (Princeton University (US))
-
28
Track Reconstruction with the ATLAS Inner Tracker and ACTS at the High-Luminosity LHC
The High-Luminosity LHC (HL-LHC) will significantly increase the demands on charged particle track reconstruction, with proton–proton collisions expected to reach up to 200 simultaneous interactions per bunch crossing and heavy-ion collisions producing similarly high track densities. In preparation for Run 4, the ATLAS experiment is replacing its current Inner Detector with a new all-silicon Inner Tracker (ITk), featuring extended pseudorapidity coverage, increased granularity, and improved radiation hardness. These upgrades will enhance tracking performance, benefiting downstream domains such as flavor tagging and electron and photon reconstruction. To fully exploit the detector capabilities and cope with the challenging running conditions expected for HL-LHC operations, ATLAS is modernizing its reconstruction software, adopting the experiment-independent ACTS (A Common Tracking Software) toolkit as the core of its tracking algorithms. The ACTS integration involves a complete redesign of several elements of the ATLAS reconstruction software. This contribution presents the latest expected tracking performance with the ITk and ACTS, including studies of fake and mis-reconstructed track rates at high pileup, and highlights the most recent results of ACTS-based reconstruction for both proton–proton and heavy-ion physics at the HL-LHC.
Speaker: Noemi Calace (CERN)
-
28
-
Poster lightning talks
-
29
Challenges Deploying a Hybrid PV-finder Algorithm for Primary Vertex Reconstruction in LHCb’s GPU-Resident First Level Trigger
The PV-finder algorithm employs a hybrid deep neural network to reconstruct primary vertex positions (PVs) in proton-proton collisions at the LHC. The algorithm was originally developed for use in LHCb, but it has been adapted successfully for use in the much higher pile-up environment of ATLAS. PV-finder integrates fully connected layers that perform track-by-track calculations with a convolutional neural network to predict “target histograms” from which PV positions are extracted using a simple heuristic algorithm. The LHCb version of PV-finder has an efficiency greater than 97% with a false positive rate near 0.03 per event. LHCb uses a software-only trigger in Run 3. The first level trigger (Hlt1) has been implemented on GPUs in the Allen software framework. PV-finder was developed using PyTorch, and deploying its inference engine in Allen presents a number of challenges. Allen schedules its threads and has its own memory management. The LibTorch and CuDNN libraries schedule threads themselves and expect to allocate memory, so they cannot be used directly in Allen. Instead, a translational layer converts methods from CuDNN to equivalent methods that work inside Allen. The design of the full inference engine deployed in Allen and its performance will be discussed, including the design of the translational layer.
Speaker: Mohamed Elashri (University of Cincinnati) -
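The "simple heuristic algorithm" that extracts PV positions from a predicted target histogram can be illustrated with a minimal sketch; the threshold, bin width, and weighted-centre logic here are illustrative assumptions, not the actual PV-finder code.

```python
def find_pv_peaks(hist, bin_width=0.1, threshold=2.0):
    # Walk the predicted target histogram and turn each contiguous
    # above-threshold region into one PV candidate, placed at the
    # weighted centre of the region (converted to detector units
    # via bin_width).
    peaks, start = [], None
    for i, v in enumerate(list(hist) + [0.0]):  # sentinel flushes last region
        if v >= threshold and start is None:
            start = i
        elif v < threshold and start is not None:
            region = hist[start:i]
            total = sum(region)
            centre = sum((start + j) * w for j, w in enumerate(region)) / total
            peaks.append(centre * bin_width)
            start = None
    return peaks
```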
30
GNN-based Track Finding for a new multiple TPC detector
The Tagged Deep Inelastic Scattering (TDIS) experiment at Jefferson Lab studies nucleon mesonic content by detecting low-momentum recoil hadrons with a multiple Time Projection Chamber (mTPC) in coincidence with scattered electrons. The expected high rate, high occupancy environment poses significant challenges to traditional track finding algorithms. In this talk, I will present our development of a Graph Neural Network approach to partition detector hits into particle trajectories, which uses azimuthal symmetry and trajectory constraints for accurate edge classification and incorporates a dedicated track clustering algorithm to resolve ambiguity.
Speaker: Shujie Li (Berkeley Lab) -
31
Advances in Low-Energy Particle Track Reconstruction with Interaction Graph Networks at the PANDA experiment
The success of neural network based tracking algorithms for high energy colliders has prompted us to explore the merits of these methods for tracking in the lower energy regime of the PANDA experiment. In this talk, I will present the current state of a tracking pipeline that has been adapted from the Exa.TrkX group and has an interaction graph neural network at its core. It has an encoder-decoder structure with message passing steps in between to predict the probability that two detector hits are related to each other. This neural network was then trained and tested on events simulated in the straw tube tracker of the future PANDA experiment. A previous study using this pipeline has already yielded promising results for reconstructing low-momentum tracks and tracks from displaced vertices resulting from
decays. The present work aims at further refining this approach and applying it to an additional, more complex, hyperon channel. First, the structure and performance of the pipeline will be presented using a clean sample containing only events with uniformly distributed (anti-)muons. We then show how the network can be implemented and improved to track the decay products of
hyperons produced in proton antiproton annihilation. This is of particular scientific interest since hyperon processes are a promising probe of CP violation and electromagnetic form factors. However, they are technically challenging to study due to their long lifetimes and sequential decays resulting in multiple displaced vertices and tracks of low-energy pions.Speaker: Nikolai in der Wiesche (Institute of Nuclear Physics, University of Münster) -
32
Machine Learning implementation in Front-End Electronics of Belle II Central Drift Chamber for cross-talk noise reduction
The Central Drift Chamber (CDC) in the Belle II experiment is one of the charged-particle tracking devices for both the offline and real-time hardware trigger systems. The Belle II CDC uses a Front-End Electronics (FEE) device based on the Xilinx Virtex-5 FPGA to record the digitized waveforms of the anode wires and to deliver the data to both the central data acquisition system and the hardware trigger system. In the operation so far, we have observed an issue of background wire hits in the FEE, where bunches of wire hits not caused by charged particles occur in nearby regions. The track finding is based on the Hough transformation. Compared to the wire-based offline track finding, the FPGA implementation for hardware trigger purposes is based on track segments, where hits from multiple wire layers are combined. Due to the reduced dimension in track finding, the larger mesh size in the conformal plane, and the lack of association to wire drift time and signal waveform, the track trigger is more sensitive to cross-talk noise, increasing the fake trigger rate by a factor of 2 or more. We perform a study of implementing small-scale machine learning in the FPGA of the FEE to separate wire hit signals from background based on the difference in their waveforms. Since the Xilinx Virtex-5 FPGA has fewer resources than modern series, the challenges are not only the separation power but also the reduction of FPGA resource usage, in order to realize the implementation for each wire channel within a relatively small FPGA. We will report on the development of the model, the FPGA firmware, the real deployment, and the validation with the Belle II system.
Speaker: Yun-Tsung Lai -
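As a toy illustration of waveform-based hit classification with a model small enough for an FPGA, one could feed a couple of cheap waveform features into a single perceptron. The features, weights, and the assumption that cross-talk pulses are smaller and faster are purely hypothetical, not the Belle II model.

```python
def waveform_features(wave):
    # Two inexpensive features: peak amplitude and the 10%-to-90%
    # rise time in samples (cross-talk is assumed here to be smaller
    # and faster than a real hit; an illustrative assumption).
    peak = max(wave)
    t10 = next(i for i, v in enumerate(wave) if v >= 0.1 * peak)
    t90 = next(i for i, v in enumerate(wave) if v >= 0.9 * peak)
    return peak, t90 - t10

def is_signal(wave, weights=(0.02, 0.8), bias=-2.0):
    # A single perceptron: once quantised, cheap enough to replicate
    # per wire channel on a Virtex-5-class FPGA. Weights are toy values.
    peak, rise = waveform_features(wave)
    return weights[0] * peak + weights[1] * rise + bias > 0
```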
33
Graph Neural Network Acceleration on FPGAs for Real-Time Muon Triggering at the HL-LHC
The upcoming High Luminosity phase of the Large Hadron Collider requires significant advancements in real-time data processing to handle the increased event rates and maintain high-efficiency trigger decisions. In this work, we explore the acceleration of graph neural networks on field-programmable gate arrays for fast inference within future muon trigger pipelines with O(100) ns latencies. Graph-based architectures offer a natural way to represent and process detector hits while preserving spatial and topological information, making them particularly suitable for muon reconstruction in a noisy and sparse environment. This work contributes to the broader goal of integrating AI-driven solutions into high-energy physics trigger systems and represents a step forward in enabling hardware-optimised, graph-based inference for real-time event selection in experimental physics.
Speaker: Davide Fiacco (Sapienza Universita e INFN, Roma I (IT)) -
34
Application of Point Cloud Classification to Particle Identification in the SuperFGD neutrino detector
The T2K experiment is a long-baseline neutrino oscillation experiment in Japan whose major goal is to find hints of CP violation in the leptonic sector. To further improve its sensitivity, it is crucial to precisely measure the electron neutrino cross section at the near detector. This measurement has been challenging, as the T2K beam is composed mostly of muon neutrinos, requiring strong background suppression in electron neutrino event selections.
The near detector complex was equipped with new detectors in 2024 to advance its neutrino cross section measurements. The core of the upgrade is the SuperFGD detector, an active target and scintillation tracker consisting of 2 million 1 cm^3 plastic scintillator cubes.
The study presented in this talk aims to distinguish electromagnetic showers generated by electrons from background events. To fully exploit the potential of the SuperFGD, we develop a point-cloud-based machine learning technique to analyze the collections of 3D hits from electron candidates. In addition to individual 3D hits, we incorporate macroscopic features that characterize the shower shape, along with complementary information from detectors outside the SuperFGD.Speaker: Hikaru Tanigawa (KEK) -
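The "macroscopic features that characterize the shower shape" could, for example, be simple energy-weighted moments of the 3D hit cloud. The specific features below are illustrative assumptions, not necessarily the ones used in the analysis.

```python
def shower_features(hits):
    # hits: (x, y, z, energy) tuples from the 3D scintillator cubes.
    # Returns total energy, the energy-weighted centroid along z, and
    # the longitudinal RMS spread (a crude shower-shape descriptor).
    e_tot = sum(e for _, _, _, e in hits)
    z_bar = sum(z * e for _, _, z, e in hits) / e_tot
    z_rms = (sum(e * (z - z_bar) ** 2 for _, _, z, e in hits) / e_tot) ** 0.5
    return e_tot, z_bar, z_rms
```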
35
Development and performance of electron reconstruction with Acts and the OpenDataDetector (ODD)
Over the last years, a general-purpose track finding algorithm based on the combinatorial Kalman filter (CKF) has been developed for the Acts toolkit, a community-driven project that provides experiment-independent tracking algorithms written in modern C++. It has been validated and optimized with the OpenDataDetector (ODD) and the ATLAS Phase-2 Inner Tracker (ITk). The CKF shows good performance for muons and pions but is inefficient for electrons due to bremsstrahlung. Acts also provides a mature implementation of a Gaussian Sum Filter (GSF) to cope with the non-Gaussian energy loss during track fitting.
In this contribution we present efforts to tackle the specific challenges of electron reconstruction in Acts. For track finding, we present an algorithm that uses the CKF mechanism to discover new measurements, but leverages components of the GSF to adapt to the bremsstrahlung energy loss. For the electron re-fitting, we present the exploration of an ML-based regression as a potential replacement for the computationally expensive multi-component fit of the GSF.
The new algorithms will be compared to the currently available, mature algorithms in Acts and validated with the ODD using reference physics samples.
Speaker: Andreas Stefl (CERN) -
36
Patatune: A Framework for Multi-Objective Optimization of Track Reconstruction Algorithms
Accurate and efficient track reconstruction is critical for the results of high-energy physics experiments, particularly as upcoming high-luminosity environments become more complex with increased data rates and pile-up. Reconstruction algorithms depend on numerous parameters whose optimal configuration is crucial and non-trivial. Manual tuning in such high-dimensional solution spaces is often insufficient.
Here, we present The Optimizer, a flexible software framework that leverages heuristic optimization techniques to automate the tuning of track reconstruction parameters. The tool enables the exploration of a Pareto front of solutions that optimize key performance metrics such as tracking efficiency and fake rate. Designed to be agnostic to the reconstruction software, The Optimizer allows for integration with various pipelines and supports extensions to other optimization techniques.
The framework has also been tested with the standalone open-source version of the CMS pixel track reconstruction software, using both CMS open data and the TrackML simulated dataset. The test results and the validation with standard benchmark functions for multi-objective optimization demonstrate the tool's potential to improve the performance of tracking systems under increasingly demanding experimental conditions.
Speaker: Felice Pantaleo (CERN)
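The Pareto front the framework explores can be made concrete with a minimal dominance filter over (efficiency, fake rate) configurations; this is a generic sketch, not the framework's own implementation.

```python
def pareto_front(points):
    # points: (efficiency, fake_rate) pairs, one per parameter set.
    # Efficiency is maximised and fake rate minimised; a point is kept
    # unless some other point is at least as good in both metrics and
    # strictly better in one (i.e. it dominates).
    front = []
    for eff, fake in points:
        dominated = any(
            e >= eff and f <= fake and (e > eff or f < fake)
            for e, f in points
        )
        if not dominated:
            front.append((eff, fake))
    return sorted(front)
```

A tuning run would evaluate each parameter set with the reconstruction software, collect the metric pairs, and present the surviving front to the user as the set of best trade-offs.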
-
29
-
Poster session
-
18:00
Reception
-
16
-
-
SessionConvener: Yu Nakahama Higuchi (KEK High Energy Accelerator Research Organization (JP))
-
37
Graph-Based Multi-Modal Track Reconstruction in the Belle II Drift Chamber and Vertex Detectors
Large backgrounds and the degradation of detector gain impact the track finding in the Belle II central drift chamber, reducing both purity and efficiency in events. This necessitates the development of new track finding algorithms to mitigate the reduction in detector performance.
Our implementation of an end-to-end multi-track reconstruction algorithm for the Belle II experiment at the SuperKEKB collider (arXiv:2411.13596) increases the efficiency for displaced tracks from 52.2% to 85.4%. The algorithm has been further extended to incorporate inputs from both the drift chamber and the silicon vertex tracking detector, creating a multi-modal network. We employ graph neural networks to handle the irregular detector structure and object condensation to address the unknown, varying number of particles in each event. This approach simultaneously finds all tracks in an event and determines their respective parameters.
Utilizing a realistic full detector simulation, which includes beam-induced backgrounds and detector noise derived from actual collision data, we report the performance of our track-finding algorithm across various event topologies compared to the existing baseline algorithm used in Belle II.Speaker: Lea Reuter -
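The object-condensation step mentioned above can be sketched in a toy one-dimensional form (the thresholds and the 1-D latent space are illustrative assumptions, not the Belle II implementation): the highest-beta unassigned hit becomes a condensation point and collects all hits nearby in the learned clustering space.

```python
def condense(betas, coords, t_beta=0.5, t_dist=0.5):
    # betas: per-hit condensation scores; coords: per-hit positions in
    # a (toy, 1-D) learned clustering space. Repeatedly promote the
    # highest-beta unassigned hit and attach everything within t_dist.
    unused = set(range(len(betas)))
    tracks = []
    for i in sorted(range(len(betas)), key=lambda k: -betas[k]):
        if i not in unused or betas[i] < t_beta:
            continue
        members = sorted(j for j in unused
                         if abs(coords[j] - coords[i]) < t_dist)
        tracks.append(members)
        unused -= set(members)
    return tracks
```

Because tracks are extracted until no high-beta hit remains, the scheme naturally handles a varying, unknown number of particles per event.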
38
Exploring the potential of cooperative track building
Efficient and accurate charged particle tracking is one of the most computationally demanding challenges at the High-Luminosity Large Hadron Collider (HL-LHC). Graph neural nets (GNNs) have emerged as one of the more promising solutions as they exploit correlations between nearby hits and tracks rather than treating each track independently. However, tracking GNNs can be slow to train and tune, and achieving a sufficiently low inference latency is challenging. This work explores whether one could combine the advantages of traditional or simple approaches, like Kalman filtering, with the key GNN insight of cooperative track building. We present three complementary approaches: a modified Kalman filter, classical optimization strategies (metaheuristics), and reinforcement learning. Each incorporates cooperative elements and represents a novel way of addressing the tracking problem. While this research is in an early stage, the cooperative paradigms explored here suggest a promising path toward more efficient and scalable track reconstruction.
Speaker: Liv Helen Vage (Princeton University (US)) -
39
Expected physics and computing performance of the ATLAS ITk GNN-based Track Reconstruction Chain
The HL-LHC upgrade of the ATLAS inner detector (ITk) brings an unprecedented challenge, both in terms of the large number of silicon cluster readouts and the throughput required for budget-constrained track reconstruction. Applying Graph Neural Networks (GNNs) has been shown to be a promising solution to this problem with competitive physics performance at sub-second inference time.
In this contribution, the expected physics performance of the GNN4ITk track reconstruction chain will be presented, with emphasis on improvements in efficiency, fake rate, and track parameter resolution from recent developments in graph segmentation and treatment of track candidates. Results from first studies on not yet covered topics such as electron reconstruction and stability against detector defects will be shown.
Apart from that, the presentation will highlight recent improvements in the computational performance of the pipeline. This includes machine learning model optimizations with a focus on inference acceleration, ranging from mixed precision and model reduction to industry-grade compilation solutions, as well as refinement of the graph-building cuts that reduces the timing without significant loss in reconstruction performance. Furthermore, dedicated CUDA kernels to accelerate the graph-building and graph-segmentation timings have been implemented and optimized.
Finally, the recent progress in integrating the GNN pipeline into the ATLAS software infrastructure and first studies of throughput in this setup will be shown.Speaker: Benjamin Huth (CERN)
-
37
-
10:30
Coffee
-
SessionConvener: Salvador Marti I Garcia (IFIC-Valencia (UV/EG-CSIC))
-
40
Uncertainty Quantification in an ML Pattern Recognition Pipeline
Geometric learning pipelines have achieved state-of-the-art performance in High-Energy and Nuclear Physics reconstruction tasks like flavor tagging and particle tracking [1]. Starting from a point cloud of detector or particle-level measurements, a graph can be built where the measurements are nodes and the edges represent all possible physics relationships between the nodes. Depending on the size of the resulting input graph, a filtering stage may be needed to sparsify the graph connections. A Graph Neural Network will then build a latent representation of the input graph that can be used to predict, for example, whether two nodes (measurements) belong to the same particle, or to classify a node as noise. The graph may then be partitioned into particle-level subgraphs, and a regression task used to infer the particle properties. Evaluating the uncertainty of the overall pipeline is important to measure and increase the statistical significance of the final result. How do we measure the uncertainty of the predictions of a multistep pattern recognition pipeline? How do we know which step of the pipeline contributes the most to the prediction uncertainty, and how do we distinguish between irreducible uncertainties arising from the aleatoric nature of our input data (detector noise, multiple scattering, etc.) and epistemic uncertainties that we could reduce by using, for example, a larger model or more training data?
We have developed an Uncertainty Quantification process for multistep pipelines to study these questions and applied it to the ACORN particle tracking pipeline [2]. All our experiments are made using the TrackML open dataset [3]. Using the Monte Carlo Dropout method, we measure the data and model uncertainties of the pipeline steps, and study how they propagate down the pipeline and how they are impacted by the size of the training dataset and by the geometry and physical properties of the input data. We will show that for our case study, as the training dataset grows, the overall uncertainty becomes dominated by the aleatoric uncertainty, indicating that we had sufficient data to train the ACORN model we chose to its full potential. We show that the ACORN pipeline yields high confidence in the track reconstruction and does not suffer from miscalibration of the GNN model.
References:
[1] https://arxiv.org/abs/2203.12852
[2] https://gitlab.cern.ch/gnn4itkteam/acorn
[3] https://sites.google.com/site/trackmlparticle/datasetSpeaker: Lukas Peron (ENS Paris) -
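The Monte Carlo Dropout method used in the study can be illustrated on a toy model (a single linear layer with toy numbers, not the ACORN pipeline): dropout is kept active at inference, and the spread of repeated stochastic predictions estimates the model (epistemic) uncertainty.

```python
import random
import statistics

def mc_dropout_predict(x, weights, p=0.5, n_samples=200, seed=0):
    # Run the (toy) model n_samples times with dropout still enabled;
    # each weight is dropped with probability p and rescaled by
    # 1/(1-p) so the expected output matches the deterministic pass.
    rng = random.Random(seed)
    preds = []
    for _ in range(n_samples):
        preds.append(sum(w * xi * (rng.random() > p) / (1 - p)
                         for w, xi in zip(weights, x)))
    # The mean approximates the prediction; the standard deviation is
    # the Monte Carlo Dropout estimate of the epistemic uncertainty.
    return statistics.mean(preds), statistics.stdev(preds)
```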
41
Efficient Point Transformer for Charged Particle Track Reconstruction
Charged particle track reconstruction is the foundation of high-energy physics experiments and the backbone of many downstream reconstruction algorithms; it is also the most computationally expensive part of particle reconstruction. Innovations in track reconstruction with graph neural networks (GNNs) have shown promising capability to cope with the computing challenges posed by the High-Luminosity LHC (HL-LHC) using machine learning. However, GNNs face limitations from irregular computations and random memory access, which slow them down. This talk introduces a Locality-Sensitive Hashing-based Efficient Point Transformer (HEPT) with advanced attention methods as a superior alternative with near-linear complexity, achieving millisecond latency and low memory consumption. We present a comprehensive evaluation of the computational efficiency and physics performance of HEPT compared to other algorithms, such as GNN-based pipelines, highlighting its potential to revolutionize the full track reconstruction pipeline.
Speaker: Yuan-Tang Chou (University of Washington (US)) -
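The locality-sensitive hashing at the heart of HEPT can be illustrated with classic random-hyperplane LSH (a generic sketch; HEPT's actual hashing and attention kernels are more involved): points sharing a sign pattern land in the same bucket, and attention is computed only within buckets, avoiding the quadratic all-pairs cost.

```python
def lsh_buckets(points, planes):
    # Bucket points by the sign pattern of their projections onto the
    # given hyperplane normals; nearby points tend to share a pattern,
    # so restricting attention to each bucket avoids the O(N^2) cost.
    buckets = {}
    for idx, p in enumerate(points):
        key = tuple(sum(a * b for a, b in zip(p, n)) >= 0 for n in planes)
        buckets.setdefault(key, []).append(idx)
    return buckets
```

In practice the hyperplanes are drawn at random and several hash tables are combined to keep the probability of separating true neighbours low.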
42
Transformer for seed reconstruction in ACTS
Reconstructing particle trajectories is a significant challenge in most particle physics experiments and a major consumer of CPU resources. It can typically be divided into three steps: seeding, track finding, and track fitting. Seeding involves identifying potential trajectory candidates, while track finding entails associating detected hits with the corresponding particle. Finally, track fitting focuses on reconstructing the parameters of the trajectory.
Many deep learning-based methods for tracking aim to combine the first two steps into a single process, using a neural network to identify particle trajectories from detector hits. This approach has achieved promising results, nearing the performance of traditional algorithms. However, it still requires a substantial amount of computing power. In classical tracking, most of the intensive computational workload stems from the seed identification process, while the track finding is well understood. Therefore, fully emulating the track-finding process typically performed by a Kalman filter may not be efficient in terms of resources and physics performance.
Instead, we propose utilising a transformer-based network for the seeding step, focusing exclusively on the hits at the centre of the detector. This network will be used to project the hits onto the track parameter space; then a clustering algorithm is used to identify preferred trajectory directions within this space. These preferred directions can then be transformed into seeds for tracking. This process can be completed much faster than conventional seeding, resulting in fewer extraneous seeds and, consequently, a quicker track-finding process.
Afterwards, the seeds are passed to a standard tracking algorithm that can run on either a CPU or GPU to complete the reconstruction process. Our goal with this approach is to combine the speed of deep learning with the reliability of classical tracking techniques. This method will be implemented within the A Common Tracking Software framework and tested on the Open Data Detector to ensure a realistic testing environment.Speaker: Corentin Allaire (IJCLab, Université Paris-Saclay, CNRS/IN2P3) -
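The clustering of projected hits in track-parameter space can be sketched with a greedy one-dimensional grouping. The real parameter space is multi-dimensional and the abstract does not specify the clustering algorithm; the tolerance and multiplicity cuts below are illustrative assumptions.

```python
def seed_directions(params, tol=0.05, min_hits=3):
    # params: per-hit positions predicted in a (toy, 1-D) track
    # parameter space. Consecutive sorted values within tol are
    # grouped; groups with at least min_hits hits become the
    # preferred directions that are turned into seeds.
    clusters = []
    for p in sorted(params):
        if clusters and p - clusters[-1][-1] <= tol:
            clusters[-1].append(p)
        else:
            clusters.append([p])
    return [sum(c) / len(c) for c in clusters if len(c) >= min_hits]
```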
43
Tracking in Dense Environments with Transformers
Highly boosted jets represent some of the most challenging tracking conditions in modern collider experiments. At higher momentum, particles become increasingly collimated in the jet core, resulting in greater track density and a large fraction of clusters shared by multiple particles; traditional tracking approaches then suffer either decreased efficiency or an increased fake rate. Decreased tracking performance in jets can impact many areas, most notably b-tagging performance, which in turn can limit sensitivity in many downstream analyses.
We present the use of a mask-transformer-based machine learning model to perform specialised tracking in these dense environments, increasing tracking efficiency by 40% at 1 TeV in the jet core. Furthermore, we discuss how auxiliary tasks can be added to simultaneously perform cluster splitting and regress track state-on-surface parameters, which is simpler and able to leverage more information than existing cluster-splitting approaches.Speaker: Max Hart (University College London (GB))
-
40
-
12:50
Lunch
-
SessionConvener: Xiaocong Ai (Zhengzhou University)
-
44
Fast Track Fitting and Anomaly Detection with Machine Learning
Accurate and efficient particle tracking is a crucial component of precise measurements of the Standard Model and searches for new physics. This task consists of two main computational steps: track finding, the identification of the subset of hits that are due to a single particle; and track fitting, the extraction of crucial parameters such as direction and momentum. Novel solutions to track finding via machine learning have recently been developed. However, track fitting, which traditionally requires searching for the best global solution across a parameter volume plagued with local minima, has received comparatively little attention.
Here, we propose a novel machine learning solution to track fitting. The per-track optimization task of traditional fitting is transformed into a single learning task optimized in advance to provide constant-time track fitting via direct parameter regression. This approach allows us to optimize directly for the true targets, precise and unbiased estimates of the track parameters. This is in contrast to traditional fitting, which optimizes a proxy based on the distance between the track and the hits. In addition, our approach removes the requirement of making simplifying assumptions about the nature of the noise model. Most crucially, in the simulated setting described here, it provides more precise parameter estimates at a computational cost 100 times smaller. Finally, it provides a natural application for track anomaly detection.
This talk will discuss the motivation, design, and performance of the proposed approach, highlighting comparisons with traditional track fitting methods, the advantages in computational efficiency and precision, and the broader implications for real-time applications and anomaly detection in experiments.
Speaker: Makayla Vessella 🐏 (University of California Irvine (US)) -
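The idea of replacing per-track optimization with a regressor trained in advance can be shown on a toy setup (two detector layers at fixed positions and straight tracks; entirely illustrative, not the proposed model): a linear map from hit positions to the slope is solved once offline, and inference is then a constant-time dot product.

```python
def train_track_regressor(hits, slopes):
    # Offline "training": solve the 2x2 normal equations of a linear
    # least-squares fit mapping hits (x1, x2), measured at fixed
    # layers z=1 and z=2, to the true track slope.
    a11 = sum(x1 * x1 for x1, _ in hits)
    a12 = sum(x1 * x2 for x1, x2 in hits)
    a22 = sum(x2 * x2 for _, x2 in hits)
    b1 = sum(x1 * s for (x1, _), s in zip(hits, slopes))
    b2 = sum(x2 * s for (_, x2), s in zip(hits, slopes))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

def predict_slope(w, hit):
    # Constant-time inference: one dot product per track, with no
    # per-track iterative minimisation.
    return w[0] * hit[0] + w[1] * hit[1]
```

Training against true simulated parameters, rather than hit residuals, is what lets this style of regressor optimize directly for unbiased parameter estimates.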
45
Charged Particle Tracking with Reinforcement Learning for Drift Chambers
Charged particle tracking is a critical task in high-energy physics. In this work, we propose applying reinforcement learning (RL) to the reconstruction of charged particle trajectories in drift chambers. By framing the tracking problem as a sequential decision-making process constrained by physical interactions, RL enables the development of more efficient and adaptive tracking algorithms. This approach paves the way for further advancements in RL-based tracking, offering improved performance and flexibility in optimizing end-to-end tracking algorithms across various detector geometries and conditions.
Speaker: Yao Zhang -
46
GNN Track Reconstruction of Generalized Non-Helical Signatures
Standard tracking pipelines are only capable of finding particles with helical trajectories, yet many theories of new physics predict particles with non-helical trajectories. Graph-neural-network-based trackers have recently been shown to be able to find non-helical tracks when trained on specific examples, such as quirks. But unanticipated new physics may feature unexpected trajectories, which would require a model-agnostic track finder. Such a finder would require no predictions from theory and may present an opportunity to make background-free single-event discoveries. We define a generalized class of smooth physical tracks and train the GNN4ITk pipeline to reconstruct a broad class of tracks. We further study the pipeline’s ability to generalize its training to finding smooth tracks generated by a different process from those the network was trained to reconstruct. Our findings demonstrate a remarkable ability of the network to function as a generalized track finder with high performance.
Speaker: Levi Condren (University of California Irvine (US))
-
15:50
Coffee
-
SessionConvener: Alberto Annovi (INFN Sezione di Pisa)
-
47
New Approaches of End-to-end GNN Track Reconstruction Based on Spacepoint Doublet Embedding and Double Metric Learning for Building Directed Graphs with Chain Connections for the ATLAS ITk Detector
Graph construction is an essential step in Graph Neural Network (GNN) based tracking pipelines. Its goal is to construct a graph containing only true edge connections between nodes (detector spacepoints). A promising approach to graph construction is metric learning, in which a node embedding space is learned and nodes are connected according to their distance in that space. The loss function in this case is a contrastive loss that pulls true pairs of nodes together and pushes false pairs apart. This loss conflicts with hopping connections when the true connections are defined as chain connections along a particle track: neighbouring nodes in a chain are pulled together, which in turn pulls next-to-neighbouring nodes together and creates spurious hopping edges. To resolve this conflict, we propose learning two node embedding spaces. A directed graph can then be constructed based on the distance between a source node in the first embedding space and a target node in the second. We test this idea with the ATLAS ITk detector at the HL-LHC using the ATLAS ITk simulation and show better graph construction efficiency and purity compared to single-metric-learning graph construction.
Once the graphs are constructed, one can train a GNN either to classify edges as true or fake (edge classification) or to learn a node embedding space for node clustering (object condensation). A common problem with these approaches is that they do not handle spacepoints shared by multiple particle tracks. In this presentation, we propose a GNN model that learns an embedding space for spacepoint doublets. Clustering can then be performed in the doublet embedding space to extract track candidates. Alternatively, combined with the edge classification approach, the doublet embedding can be used to resolve connected components formed by multiple particle tracks sharing the same spacepoints. We take the ATLAS ITk detector at the HL-LHC as a realistic example and show promising tracking performance with the ATLAS ITk simulation. We also show that, with the learned edge embedding space, we are able to assign shared spacepoints to multiple track candidates.
Speaker: Jay Chan (Lawrence Berkeley National Lab. (US)) -
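The double-embedding construction described above can be illustrated schematically. In this editorial sketch (not the authors' code), hand-placed coordinates stand in for the output of two trained embedding networks; the point is only to show why two spaces avoid the hopping-edge conflict for a chain a → b → c.

```python
import math

# Directed graph building with two embedding spaces: node a connects to
# node b iff the source embedding of a is close to the target embedding
# of b. For a chain a -> b -> c this allows src(a)~tgt(b) and
# src(b)~tgt(c) without forcing the "hopping" edge a -> c, which a
# single contrastive embedding space would tend to create.

def build_directed_edges(src_emb, tgt_emb, radius=0.1):
    edges = []
    for a, ua in src_emb.items():
        for b, vb in tgt_emb.items():
            if a != b and math.dist(ua, vb) < radius:
                edges.append((a, b))
    return edges

# Hand-placed embeddings standing in for two trained MLP heads.
src = {"a": (0.0, 0.0), "b": (1.0, 0.0), "c": (2.0, 0.0)}
tgt = {"a": (9.0, 9.0), "b": (0.0, 0.0), "c": (1.0, 0.0)}

edges = build_directed_edges(src, tgt)
```

Here the construction yields exactly the chain edges a → b and b → c, with no hopping edge a → c and no reversed edges, which is the behaviour the double metric learning is designed to achieve.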
48
LHCb Tracking Reconstruction and Ghost Rejection at 30 MHz
The new fully software-based trigger of the LHCb experiment operates at a 30 MHz data rate and imposes tight constraints on GPU execution time. Tracking reconstruction algorithms in this first-level trigger must efficiently select detector hits, group them, build tracklets, account for the LHCb magnetic field, extrapolate and fit trajectories, and select the best track candidates to filter events, reducing the 4 TB/s data rate by a factor of 30. Optimized algorithms have been developed with this aim. One of the main challenges is the reduction of "ghost" tracks: fake combinations arising from detector noise or reconstruction ambiguities. A dedicated neural network architecture, designed to operate at the high LHC data rate, has been developed, achieving ghost rates below 20%. The techniques used in this work can be adapted to the reconstruction of other detector objects or to tracking reconstruction in other LHC experiments.
Speaker: Da Yu Tou (Tsinghua University (CN)) -
49
Application of ACTS to the CEPC Reference Detector
The Circular Electron Positron Collider (CEPC) is a proposed collider designed to investigate Higgs boson properties with high precision and to explore new physics. Its Technical Design Report (TDR) is currently under development, focusing on a reference detector. The tracking detector system of this reference detector includes a vertex detector (VTX), an inner silicon tracker (ITK), a time projection chamber (TPC), and an outer silicon tracker (OTK).
The software developed for the reference detector study, known as CEPCSW, is built on several foundational high-energy physics software packages, including the Gaudi framework for event processing, DD4hep for detector description, and EDM4hep for the event data model. One of the important R&D activities for the reference detector TDR is the application of Common Tracking Software (ACTS).
This contribution introduces a new implementation of tracking using the ACTS library within the CEPCSW software environment. The ACTS-based tracking process begins with seed finding in the VTX and subsequently extrapolates the identified seeds through the ITK, TPC, and finally the OTK, utilizing the ACTS CKF method. Key steps in this implementation include integrating ACTS with the Gaudi framework, converting geometries, and mapping materials between CEPCSW and ACTS. Both physics and computational performances will be presented, along with a discussion of the challenges faced when applying the CKF method to low-momentum particles and tracking within the TPC.
Speaker: Yizhou Zhang (Institute of High Energy Physics)
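The seed-then-extrapolate flow with the CKF described above can be caricatured as a branching track follower. This is an editorial sketch under strong simplifications (straight-line prediction instead of a Kalman filter, scalar hit coordinates, invented cut values), not the ACTS implementation:

```python
# Schematic combinatorial track following in the spirit of the CKF:
# at each layer every branch is extended with each compatible hit,
# and branches are ranked by accumulated chi2 and pruned.

def extend_branches(branches, hits, window=0.3, max_branches=4):
    new = []
    for track, chi2 in branches:
        # Straight-line prediction from the last two points of the branch
        # (a real CKF propagates a full track state through the field).
        pred = 2 * track[-1] - track[-2]
        for h in hits:
            r = h - pred
            if abs(r) < window:
                new.append((track + [h], chi2 + r * r))
    new.sort(key=lambda b: b[1])
    return new[:max_branches]  # keep only the best branches

def ckf_like(seed, layers):
    branches = [(list(seed), 0.0)]
    for hits in layers:
        branches = extend_branches(branches, hits)
        if not branches:
            break
    return branches[0] if branches else None

# Seed from two hits; each later layer holds the true hit plus noise.
seed = (0.0, 1.0)
layers = [[2.05, 0.5], [3.0, 1.2], [3.95, 2.0]]
best = ckf_like(seed, layers)
```

The branching-plus-pruning loop is the essential idea; the real algorithm additionally handles material effects, holes, and the low-momentum and TPC complications discussed in the abstract.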
-
SessionConvener: Kunihiro Nagano (KEK High Energy Accelerator Research Organization (JP))
-
50
Antihydrogen annihilation reconstruction in the ALPHA apparatus at CERN
The ALPHA collaboration at CERN operates two machines dedicated to testing fundamental symmetries using trapped antihydrogen atoms. The ALPHA-2 experiment was built in 2012 and is optimized to perform high-precision spectroscopy on antihydrogen [1]. The ALPHA-g apparatus, completed in 2021, is designed to measure its gravitational acceleration [2]. In both instances, the physics signal is derived from the three-dimensional position (plus time) of the antihydrogen annihilation, called the vertex. The ALPHA-2 trap is surrounded by a microstrip silicon detector, while ALPHA-g is equipped with a time projection chamber. Both position-sensitive detectors are used to reconstruct the trajectories of charged pions produced in the antiproton annihilation process. This talk explores the tracking methods and techniques employed in the ALPHA experiment to reconstruct the annihilation vertex.
[1] Baker, C. J., et al. "Precision spectroscopy of the hyperfine components of the 1S–2S transition in antihydrogen." Nature Physics (2025): 1-7.
[2] Anderson, E. K., et al. "Observation of the effect of gravity on the motion of antimatter." Nature 621.7980 (2023): 716-722.Speaker: Andrea Capra (TRIUMF (CA)) -
51
Tracking Cosmic-Ray Nuclei with the RadMap Telescope
The RadMap Telescope is a compact radiation monitor that can characterize the radiation environment aboard spacecraft and determine the biologically relevant dose received by astronauts. Its main sensor is a tracking calorimeter made from 1024 scintillating-plastic fibers of alternating orientation read out by silicon photomultipliers. It allows the three-dimensional tracking and identification of cosmic-ray nuclei by measurement of their energy-deposition profiles. A first prototype was deployed to the International Space Station (ISS) between April 2023 and January 2024 for an in-orbit demonstration of the instrument’s capabilities.
In this contribution, we will give an overview of the current status of the event-by-event track reconstruction. We describe the neural-network-based analysis framework that we trained and evaluated on simulated data and demonstrate that the expected performance of the instrument meets the requirements of radiation monitoring. In addition, we present the ongoing analysis of the data collected on the ISS, discussing the track-based selection of reconstructable events from the raw detector data and showing first results for the direction-dependent particle flux.Speaker: Luise Meyer-Hetling (Technical University of Munich) -
52
Ultra-displaced dimuon vertexing for LLP signatures in ATLAS
Many beyond the Standard Model (BSM) theories, such as hidden sector models, heavy neutral leptons, and neutral naturalness frameworks, predict invisible long-lived particles that travel macroscopic distances before decaying into visible Standard Model particles with a common spatial origin. Fast and accurate reconstruction of secondary vertices therefore plays a central role in ATLAS LLP searches. While significant advancements have been made in efficiently reconstructing Inner Detector tracks originating far from the primary Interaction Point (IP), traditional secondary vertexing algorithms remain limited by the ID track reconstruction acceptance, significantly reducing sensitivity to the LLP phase space where the bulk of expected decays lie beyond the Pixel detector. To address this limitation, we introduce a novel secondary vertexing technique that leverages StandAlone muon tracks reconstructed exclusively in the Muon Spectrometer, which demonstrates the ability to efficiently reconstruct ultra-displaced dimuon vertices up to 8 meters from the IP. This approach provides sensitivity to LLP decays occurring well beyond the Inner Detector, unlocking previously inaccessible phase space for ATLAS LLP searches. In this presentation, we will discuss the implementation of the algorithm and the performance of this technique on simulated LLP signatures, demonstrate its robustness against backgrounds, and explore its potential impact on future LLP searches in ATLAS.
Speaker: Makayla Vessella 🐏 (University of California Irvine (US)) -
53
Direct reconstruction of charged heavy flavour at LHCb
For Run 3 of the LHC, the LHCb experiment has introduced a novel reconstruction algorithm targeting the direct reconstruction of charged heavy-flavour particles using hits in the VELO sub-detector, located closest to the beamline. This technique is designed to enhance signal purity in challenging analyses that involve missing energy and vertex information, particularly those with neutrinos and tau leptons in the final state. In particular, it provides a more precise estimate of the heavy-flavour particle's initial flight direction. This contribution will present the algorithm's design and implementation, show its performance on both real and simulated Run 3 data, and highlight its impact on key physics analyses.
Speaker: Maarten Van Veghel (Nikhef National institute for subatomic physics (NL))
-
10:40
Coffee
-
SessionConvener: Tsunayuki Matsubara (KEK High Energy Accelerator Research Organization (JP))
-
54
CNN-based event separation using 2D and 3D images for the charged-current electron neutrino cross-section measurement with the SuperFGD of the T2K near detector
The T2K experiment is a long-baseline neutrino oscillation experiment whose primary aim is to search for CP violation in the neutrino sector through precision measurements of neutrino and antineutrino oscillations. The neutrino beam is generated at J-PARC and measured at Super-Kamiokande; it is also measured at near detectors to reduce systematic uncertainties. Currently, the electron-neutrino cross-section is estimated from the muon-neutrino one, and its uncertainty is one of the main sources of systematic uncertainty in the oscillation analysis. To reduce it, the cross-section is being measured directly at ND280, one of the near detectors.
The Super Fine-Grained Detector (SuperFGD) was installed in ND280 in October 2023 to reduce systematic errors in the measurements. The electron-neutrino flux at ND280 is small compared to the muon-neutrino flux, so background contamination is critical for the selection of electron-neutrino interaction events. Currently, 3D tracks in the SuperFGD are reconstructed and the background is rejected using boosted decision trees on the reconstructed tracks. However, some misidentification remains, and improving the selection accuracy is necessary.
The image classification method using convolutional neural networks (CNN) can be applied to distinguish the signal events from the backgrounds. It is especially effective when the event has a complicated topology of an electromagnetic shower. Using 2D raw data from SuperFGD before the reconstruction in addition to the 3D reconstructed image as input can further improve performance. This talk introduces the method using CNN to separate electron neutrino charged-current interaction events from background events.Speaker: Tomochika Arai (University of Tokyo (JP)) -
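The 2D hit maps mentioned above are exactly the kind of input a CNN convolves. As an editorial illustration only (one hand-written kernel; the actual classifier stacks many learned kernels and combines 2D views with the 3D reconstructed image), a single 2D convolution looks like this:

```python
# Minimal 2D convolution over a detector hit map (valid padding, stride 1).

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

# A diagonal "track" in a 5x5 charge map responds strongly to a
# diagonal kernel, and weakly elsewhere.
image = [[1.0 if i == j else 0.0 for j in range(5)] for i in range(5)]
kernel = [[1.0, 0.0, 0.0],
          [0.0, 1.0, 0.0],
          [0.0, 0.0, 1.0]]
fmap = conv2d(image, kernel)
```

Learned kernels of this kind are what lets the network pick out the diffuse topology of an electromagnetic shower versus a clean track.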
55
Improving positron tracking using machine learning in the MEG II experiment
In the MEG II experiment, which searches for $\mu\to e\gamma$, a cylindrical drift chamber measures positrons from muon decays. A key challenge arises from the declining positron reconstruction efficiency in the high-pileup environment, primarily due to algorithm limitations. To address this, a machine learning-based noise filtering technique has been developed. This presentation introduces the ML model architecture and its application, followed by a discussion on improvements in tracking performance.
Speaker: Atsushi Oya (The university of Tokyo) -
56
Track direction identification via track-fitting quality for cosmic-ray background suppression in the COMET experiment
The COMET experiment at J-PARC aims to search for the charged lepton flavor violating process of muon-to-electron conversion with unprecedented sensitivity. One of the most serious backgrounds originates from cosmic-ray muons. In particular, a track produced by a backward-going positive muon can mimic the 105 MeV/c signal electron in a cylindrical drift chamber. To address this, we developed a method to identify the track direction based on track-fitting quality metrics using the GENFIT framework. This approach has demonstrated a reduction of the positive muon background by an order of magnitude. In this presentation, we will report on an improved study incorporating machine learning techniques to enhance the performance of track direction identification.
Speaker: Manabu Moritsu (Kyushu University)
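The idea of deciding a track's direction by comparing fit quality under two hypotheses can be shown with a toy example. This is an editorial sketch (the COMET study uses full track fits in the GENFIT framework; the time-of-flight model and numbers here are invented): fit the same hits under a forward-going and a backward-going hypothesis and keep the one with the smaller chi2.

```python
# Direction identification via fit quality: a backward-going particle
# has hit times that decrease along the fitted trajectory, so the
# wrong-direction hypothesis accumulates a large chi2.

def tof_chi2(times, t0, step):
    # Chi2 of measured hit times against expected arrivals t0 + step*i.
    return sum((t - (t0 + step * i)) ** 2 for i, t in enumerate(times))

def direction(times, step=1.0):
    chi2_fwd = tof_chi2(times, times[0], step)    # times increase along track
    chi2_bwd = tof_chi2(times, times[0], -step)   # times decrease along track
    return "forward" if chi2_fwd < chi2_bwd else "backward"

hit_times = [0.0, 1.02, 1.98, 3.01]    # ns, ordered along the trajectory
d_signal = direction(hit_times)        # signal-like electron
d_cosmic = direction(hit_times[::-1])  # backward-going muon mimic
```

In the real detector the discriminating quantities are richer (residuals, drift-time consistency, fit convergence), which is why the abstract's machine-learning extension combines several such quality metrics.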
-
12:20
Lunch
-
14:00
Excursion (optional) Bus meeting point
Bus meeting point
https://maps.app.goo.gl/S2wkavHfSwkorC8fA
This tour is optional and separate payment is needed.
-
18:00
Banquet Azumabashi Pier
Azumabashi Pier
https://maps.app.goo.gl/jFYqL7SHXWFBpThaA
-
-
-
SessionConvener: Shima Shimizu (KEK High Energy Accelerator Research Organization (JP))
-
57
Real time learning on heterogeneous devices for detector calibration (Remote)
Modern beam telescopes play a crucial role in high-energy physics experiments to precisely track particle interactions. Accurate alignment of detector elements in real time is essential to maintain the integrity of reconstructed particle trajectories, especially in high-rate environments like the ATLAS experiment at the Large Hadron Collider (LHC). Any misalignment in the detector geometry can introduce systematic biases and potentially affect the accuracy of precision physics measurements. Current calibration systems that correct for these effects require substantial computational resources, incur high operational costs, and are often unable to handle rapidly changing conditions, leading to systematic inaccuracies and potential biases in physics measurements.
To address these challenges, we propose a calibration system that employs a lightweight neural network to predict detector misalignments in real time. Our approach utilizes a multilayer perceptron (MLP) with a hierarchical subset solver for deployment on heterogeneous computing platforms. The neural network predicts detector misalignments from the detector's current positional data and statistical characteristics of particle trajectories.
This approach leverages ML to predict control parameters in real time, allowing adaptation to complex nonlinear behaviors. However, it introduces a significant computational workload, as the optimization process involves frequent, dense matrix multiplications for gradient-based updates, making efficient hardware acceleration essential. By partitioning the application according to its computational characteristics (CPUs for sequential tasks, FPGAs for parallel workloads, and AI engine cores for fast, energy-efficient compute), we can balance performance and cost. Deploying the algorithm on a heterogeneous computing device with state-of-the-art ML-focused silicon could achieve a cost-efficient implementation of real-time detector geometry calibration. This work is a step towards AI-driven real-time compute for future high-energy physics experiments on the Versal ACAP architecture, offering significant improvements in computational speed, resource utilization, and cost per watt.
Speaker: Akshay Malige (Brookhaven National Laboratory (US)) -
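The core mapping in the proposal above, from track-residual statistics to a misalignment correction, can be sketched with a tiny MLP forward pass. This is an editorial illustration only: the weights are hand-set rather than trained, and the single feature (mean residual on a module) is an invented simplification of the statistical inputs the abstract describes.

```python
# A rigid shift of a module by dx produces a mean track residual of
# about dx, so even an identity-like network recovers the shift; a
# trained MLP learns the nonlinear version of this mapping, with
# inference offloaded to FPGA / AI-engine cores.

def relu(x):
    return max(0.0, x)

def mlp(features, w1, b1, w2, b2):
    hidden = [relu(sum(w * f for w, f in zip(row, features)) + b)
              for row, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

w1 = [[1.0], [-1.0]]   # two hidden units: positive and negative parts
b1 = [0.0, 0.0]
w2 = [1.0, -1.0]       # recombine into a signed correction
b2 = 0.0

predicted_shift = mlp([0.25], w1, b1, w2, b2)  # mean residual -> correction
```

The dense matrix multiplications of this forward pass (and of the gradient updates during online retraining) are exactly the workload the abstract proposes to map onto the heterogeneous device.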
58
The OpenDataDetector High-Luminosity Physics Benchmark Dataset
We present the first release of a large-scale, fully simulated benchmark dataset using the OpenDataDetector (ODD) under high-luminosity collider conditions (ColliderML). The ODD integrates several advanced next-generation detector technologies to realistically capture the complexity of collisions expected at future collider experiments, notably at the High-Luminosity Large Hadron Collider (HL-LHC) and the Future Circular Collider (FCC). This dataset comprises O(1 million) high-pileup collision events, realistically simulated and digitized, covering O(10) important Standard Model (SM) and Beyond Standard Model (BSM) physics channels. Additionally, a similar dataset of single-particle samples is included for track fitting and calorimeter response studies. The object content includes detailed energy depositions from tracker and calorimeter sensors, as well as reconstructed physics objects such as particle tracks and jets.
In this talk, we present the development of the simulation pipeline, using tooling from the ACTS and Key4HEP projects, and the digitization procedure, which uses best-practices borrowed from experimental collaborations. We present studies of reconstruction performance, in particular using ACTS track finding, fitting and vertexing, which forms a baseline and benchmark to compare alternative reconstruction approaches against. To facilitate widespread adoption, the ColliderML dataset is accompanied by an intuitive software library designed for efficient data access and processing.
Speaker: Paul Gessinger (CERN) -
59
Sustainability studies of big data processing in real time for HEP
The LHCb collaboration currently uses a pioneering data-filtering system in its trigger, based on real-time particle reconstruction using Graphics Processing Units (GPUs). This corresponds to processing 40 Tbit/s of data and has required substantial hardware and software development. Among the associated concerns, power consumption and sustainability are imperative in view of the next high-luminosity era of the LHC collider, which will greatly increase the output data rate. In the context of the High-Low project at IFIC in Valencia, and using tracking reconstruction algorithms, several studies have been performed to understand how to optimize energy usage in terms of the computing architectures and the efficiency of the algorithms running on them. In addition, a strategy is designed to evaluate the potential impact of quantum computing for tracking reconstruction, as it begins to enter the field.
Speaker: Arantza De Oyanguren Campos (Univ. of Valencia and CSIC (ES))
-
10:10
Coffee break
-
SessionConvener: Noemi Calace (CERN)
-
60
Performance of 4D tracking and vertexing with ACTS
Four-dimensional trackers are devices capable of simultaneously measuring spatial and temporal coordinates with extremely good resolution (O(10 μm) and O(10 ps)) and represent a promising avenue for charged-particle tracking. The ACTS library can include time information in track and vertex reconstruction algorithms thanks to its 6-parameter track representation. In this work we present the physics and computational performance of 4-dimensional tracking and vertexing algorithms using the Open Data Detector, as well as their application to more complex object reconstruction algorithms such as particle identification and heavy-flavour jet tagging.
Speaker: Pierfrancesco Butti (CERN) -
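The benefit of carrying time in the track state can be shown with a schematic track-to-vertex compatibility test. This is an editorial sketch with invented resolutions and cut values, not the ACTS API: the usual spatial chi2 gains a time term, so tracks from pile-up vertices that overlap in z but not in t are rejected.

```python
# 4D track-to-vertex compatibility: spatial chi2 plus a time term.

def chi2_4d(track, vertex, sigma_z=0.05, sigma_t=0.03):
    z, t = track        # longitudinal position [mm], time [ns]
    vz, vt = vertex
    return ((z - vz) / sigma_z) ** 2 + ((t - vt) / sigma_t) ** 2

vertex = (0.0, 0.0)      # (z, t) of the candidate vertex
prompt = (0.02, 0.01)    # compatible in both space and time
pileup = (0.03, 0.20)    # overlaps in z, but displaced in time

# A 3-sigma-like cut: the time term alone removes the pile-up track.
compatible = [chi2_4d(tr, vertex) < 9.0 for tr in (prompt, pileup)]
```

A purely spatial chi2 would accept both tracks here; the time term is what restores discrimination at pile-up 200, which is the point of the 4D vertexing studies above.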
61
Hypergraph Neural Network 4D track reconstruction pipeline for HL-LHC experiments
Particle track reconstruction is one of the most important and challenging tasks to be performed in the high-luminosity phase of the LHC experiments. Extensive research is being done to develop reconstruction methods that provide the same efficiency as the current adaptive filter methods but with enough throughput to suit the increased event information density of this new environment. A promising alternative is the Geometric Deep Learning (GDL) framework, in which the most successful approaches use edge-classifying Graph Neural Networks (GNNs) to decide whether segments between hits belong to a particle track. In this work, we explore the Hypergraph Neural Network (HGNN) tracking pipeline, in which a hyperedge represents a whole track candidate rather than just a segment. This change in design allows modeling relationships that go beyond pairwise connections between hits. The hypergraph convolution operator considers all hits in every trajectory a node is connected to, significantly increasing the message-passing range compared to the usual graph convolution. Our goal is to adapt this method to perform 4D tracking for detectors with timing layers, like the ones proposed by ATLAS and CMS for the HL-LHC phase. Using the ACTS (A Common Tracking Software) framework, we implemented a custom algorithm that follows the seeding step and builds a hypergraph of track candidates. Next, the HGNN scores the candidates by their likelihood of being a true track. This pipeline was evaluated on $pp\rightarrow t\bar{t}$ events with pileup $\langle \mu \rangle = 200$ (interactions per bunch crossing) on the Open Data Detector (ODD), and it shows high efficiency (94%) within the acceptance region defined by $|\eta| < 3$ and $p_T > 1$ GeV. Efforts to optimize the node and hyperedge feature spaces and to improve the network architecture are underway to make better use of the new design, with the expectation of further increasing performance.
We acknowledge support from FAPESP (2023/18484-6, 2022/14150-3, and 2020/04867-2) and MCTI/CNPq (INCT CERN Brasil 406672/2022-9). This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) – Finance Code 001.
Speaker: Rodrigo Estevam De Paula (Universidade de Sao Paulo (USP) (BR)) -
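The extended message-passing range of the hypergraph convolution described above can be seen in a toy example. This is an editorial sketch, not the authors' model: scalar node features, a hand-written mixing update, and two hand-picked track candidates as hyperedges.

```python
# Toy hypergraph convolution: each hyperedge is a whole track candidate,
# so a node aggregates features from every hit in every candidate it
# belongs to, a much longer range than pairwise graph edges.

def hypergraph_conv(node_feats, hyperedges):
    out = []
    for i in range(len(node_feats)):
        msgs = [node_feats[j]
                for e in hyperedges if i in e
                for j in e if j != i]
        agg = sum(msgs) / len(msgs) if msgs else 0.0
        out.append(0.5 * node_feats[i] + 0.5 * agg)  # simple mixing update
    return out

feats = [1.0, 2.0, 3.0, 10.0]
# Two track candidates as hyperedges over hit indices; hit 2 is shared,
# so it receives messages from both candidates in a single step.
candidates = [{0, 1, 2}, {2, 3}]
updated = hypergraph_conv(feats, candidates)
```

Note how hit 0 is influenced by hit 2 (same candidate) in one step even though no pairwise edge connects them, which is precisely the design advantage the abstract claims over edge-based GNNs.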
62
ACTS-based 4D tracking studies
At future colliders such as the High-Luminosity LHC (HL-LHC), the average number of simultaneous pp interactions per event, or pile-up (µ), will rise from the current 30-60 to as much as 200. Within the full event simulation and reconstruction chain, reconstruction of charged particles quickly becomes the most computationally intensive step because it scales combinatorially with the number of charged particles. Resolving the ambiguity using time measurements has already been investigated for the LHC Phase-II Upgrade, e.g. the ATLAS High Granularity Timing Detector (HGTD), which will be placed outside the ATLAS Phase-II Inner Tracker (ITk) endcap to remove pile-up vertices in the forward region 2.4 < |eta| < 4.0. Meanwhile, replacing the inner layers of the ATLAS Phase-II barrel pixel detector with a 4D pixel detector in a possible Phase-III upgrade is foreseen as well.
Based on the common tracking software ACTS, where particle flight time is inherently part of the track parameterization, the gain in tracking performance from including time measurements in low-level clustering, seeding, and track following has been explored on top of a dedicated 4D digitization. In this contribution, we will present the implementation of the ACTS-based 4D tracking chain for both the Open Data Detector and the ATLAS ITk. The improvement in both physics performance (e.g. efficiency and resolution) and computational performance with 4D measurements, compared to traditional 3D measurements, will be discussed.
Speaker: Yanqi Wang (University of Science and Technology of China (CN))
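The 6-parameter track state with time, and the kind of time cut a 4D seeder can add on top of the usual spatial selection, can be sketched as follows. The field names and the compatibility criterion are editorial assumptions for illustration, not actual ACTS types.

```python
from dataclasses import dataclass

# A bound track state in the spirit of ACTS's 6-parameter
# parameterization, with time as the sixth parameter.

@dataclass
class BoundTrackParams:
    loc0: float      # first local position
    loc1: float      # second local position
    phi: float       # azimuthal angle
    theta: float     # polar angle
    q_over_p: float  # signed inverse momentum
    time: float      # the sixth parameter [ns]

def seed_time_compatible(hits, beta=1.0, c=299.792458, tol=0.2):
    # Require the inner/outer hit times of a seed to be consistent with
    # flight at speed beta*c (c in mm/ns), within a tolerance window.
    (r1, t1), (r2, t2) = hits[0], hits[-1]
    expected_dt = (r2 - r1) / (beta * c)
    return abs((t2 - t1) - expected_dt) < tol

params = BoundTrackParams(0.0, 0.0, 0.1, 1.2, 0.001, 0.10)

hits = [(30.0, 0.10), (100.0, 0.33)]       # (radius [mm], time [ns])
ok = seed_time_compatible(hits)            # consistent with one particle
ok_bad = seed_time_compatible([(30.0, 0.10), (100.0, 1.50)])  # pile-up mix
```

Cutting combinatorial seed candidates this early is where most of the computational gain of 4D tracking comes from, before the expensive track-following stage.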
-
63
Closing
-