Recent work has demonstrated that graph neural networks (GNNs) can match the performance of traditional algorithms for charged particle tracking while improving scalability to meet the computing challenges posed by the HL-LHC. Most GNN tracking algorithms are based on edge classification and identify tracks as connected components from an initial graph containing spurious connections. In this...
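The edge-classification readout described above can be sketched as follows. This is a toy illustration, not any experiment's actual pipeline: the graph and edge scores are hypothetical placeholders for a GNN's output, and union-find is one common way to extract connected components.

```python
# Toy sketch: track candidates as connected components of a scored hit graph.
# Edge scores and the graph are hypothetical, not the output of a real GNN.

def find(parent, x):
    # Path-compressing find for union-find.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def connected_components(n_hits, edges, scores, threshold=0.5):
    parent = list(range(n_hits))
    for (a, b), s in zip(edges, scores):
        if s >= threshold:               # keep only confident edges
            ra, rb = find(parent, a), find(parent, b)
            if ra != rb:
                parent[ra] = rb          # union the two components
    comps = {}
    for h in range(n_hits):
        comps.setdefault(find(parent, h), []).append(h)
    return list(comps.values())

# Hits 0-2 form one candidate, 3-4 another; edge (2, 3) is spurious (low score).
edges  = [(0, 1), (1, 2), (2, 3), (3, 4)]
scores = [0.9, 0.8, 0.1, 0.95]
print(connected_components(5, edges, scores))  # two track candidates
```

Dropping low-score edges before the union pass is what removes the spurious connections present in the initial graph.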
ML-based track finding algorithms have emerged as competitive alternatives to traditional track reconstruction methods. However, a major challenge lies in simultaneously finding and fitting tracks within a single pass. These two tasks often require different architectures and loss functions, leading to potential misalignment. Consequently, achieving stable convergence becomes challenging when...
Recent studies have shown promising results for track finding in dense environments using Graph Neural Network (GNN)-based algorithms. These algorithms not only provide high track efficiency but also offer reasonable track resolutions. However, GNN-based track finding is computationally slow on CPUs, necessitating the use of coprocessors such as GPUs to accelerate inference....
Hybrid pixel detectors like Timepix3 and Timepix4 detect individual pixels hit by particles. For further analysis, individual hits from such sensors need to be grouped into spatially and temporally coinciding groups called clusters. While state-of-the-art Timepix3 detectors generate up to 80 million hits per second, the next generation, Timepix4, will provide data rates of up to 640 million hits (data...
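The grouping into spatially and temporally coinciding clusters can be sketched in two passes: split the hit stream at time gaps, then flood-fill each time group by pixel adjacency. The 100 ns window and 8-connectivity below are illustrative choices, not detector settings.

```python
# Minimal sketch of Timepix-style clustering: hits coinciding in time and
# adjacent in the pixel matrix are grouped into clusters. Toy parameters.

def cluster_hits(hits, time_window=100.0):
    # hits: list of (x, y, t) tuples, assumed sorted by time t (in ns)
    groups, current = [], []
    for hit in hits:
        if current and hit[2] - current[-1][2] > time_window:
            groups.append(current)       # a time gap closes the group
            current = []
        current.append(hit)
    if current:
        groups.append(current)
    # split each time group into spatially connected clusters (8-neighbour)
    out = []
    for group in groups:
        remaining = set(group)
        while remaining:
            seed = remaining.pop()
            comp, frontier = [seed], [seed]
            while frontier:
                x, y, t = frontier.pop()
                near = {h for h in remaining
                        if abs(h[0] - x) <= 1 and abs(h[1] - y) <= 1}
                remaining -= near
                comp.extend(near)
                frontier.extend(near)
            out.append(sorted(comp))
    return out

hits = [(10, 10, 0.0), (10, 11, 5.0), (50, 50, 7.0), (10, 10, 500.0)]
print(cluster_hits(hits))  # three clusters: two coincident, one late hit
```

A streaming version of the same idea is what makes keeping up with hundreds of millions of hits per second challenging.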
To prepare for the High Luminosity phase of the Large Hadron Collider at CERN (HL-LHC), the ATLAS experiment is replacing its innermost components with a full-silicon tracker (ITk), to improve the spatial resolution of the track measurements and increase the data readout rate. However, this upgrade alone will not be sufficient to cope with the tremendous increase of luminosity, and...
The Circular Electron Positron Collider (CEPC) is a physics program proposal with the goal of providing high-accuracy measurements of properties of the Higgs, W and Z bosons, and exploring new physics beyond the SM (BSM). The CEPC is also an excellent facility to perform precise tests of the theory of the strong interaction.
To deliver those physics programs, the CEPC detector concepts must...
CLUE is a fast and innovative density-based clustering algorithm that groups digitized energy deposits (hits) left by a particle traversing the active sensors of a high-granularity calorimeter into clusters with a well-defined seed hit. It was developed in the context of the new high-granularity sampling calorimeter (HGCAL) which will be installed in the forward region of the Compact Muon Solenoid...
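The core CLUE idea, local density plus a nearest-higher-density neighbour, can be sketched in a few lines. This is a deliberately simplified stand-in (outlier handling is omitted, and the parameters `dc`, `rho_c` and the toy hits are illustrative), not the actual CLUE implementation.

```python
# Minimal CLUE-like sketch: each hit gets a local density (energy within a
# critical distance dc); a hit with no denser neighbour seeds a cluster, and
# every other hit follows its nearest higher-density neighbour.
import math

def clue_like(hits, dc=2.0, rho_c=1.5):
    # hits: list of (x, y, energy)
    n = len(hits)
    dist = lambda i, j: math.hypot(hits[i][0] - hits[j][0],
                                   hits[i][1] - hits[j][1])
    rho = [sum(hits[j][2] for j in range(n) if dist(i, j) <= dc)
           for i in range(n)]
    # nearest higher-density neighbour (None for a local density maximum)
    nh = []
    for i in range(n):
        higher = [j for j in range(n) if rho[j] > rho[i]]
        nh.append(min(higher, key=lambda j: dist(i, j)) if higher else None)
    label = [None] * n
    def assign(i):
        if label[i] is not None:
            return label[i]
        if nh[i] is None or (rho[i] > rho_c and dist(i, nh[i]) > dc):
            label[i] = i                 # i is the seed of its own cluster
        else:
            label[i] = assign(nh[i])     # follow the density gradient
        return label[i]
    return [assign(i) for i in range(n)]

# Two well-separated groups of three hits each; the middle hit of each group
# is densest and becomes the seed.
hits = [(0.0, 0, 1.0), (0.5, 0, 1.0), (2.4, 0, 1.0),
        (10.0, 10, 1.0), (10.5, 10, 1.0), (12.4, 10, 1.0)]
print(clue_like(hits))
```

The decisive property, visible even in this sketch, is that density and nearest-higher computations are independent per hit, which is what makes the real algorithm fast and parallelizable.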
With its increased number of proton-proton collisions per bunch crossing, track reconstruction at the High-Luminosity Large Hadron Collider (HL-LHC) is a complex endeavor. The Inner Tracker (ITk) is a silicon-only replacement of the current ATLAS Inner Detector as part of its Phase-II upgrade.
It is specifically designed to handle the challenging conditions at the HL-LHC, resulting from...
MkFit is a Kalman filter-based track reconstruction algorithm that uses both thread- and data-level parallelism. It has been deployed in the Run-3 offline workflow of the CMS experiment. The CMS tracking performs a series of iterations to reconstruct tracks of increasing difficulty. MkFit has been adopted for several of these iterations, which contribute to the majority of reconstructed...
We present an end-to-end particle-flow reconstruction algorithm for highly granular calorimeters. Starting from calorimeter hits and reconstructed tracks, the algorithm filters noise, separates showers, regresses their energy, provides an energy uncertainty estimate, and predicts the type of particle. The algorithm is trained on data from a simulated detector that matches the complexity of the...
In view of the HL-LHC, the Phase-2 CMS upgrade will replace the entire trigger and data acquisition system. The detector readout electronics will be upgraded to allow a maximum L1 accept rate of 750 kHz, and a latency of 12.5 µs. The muon trigger is a multi-layer system that is designed to reconstruct muon stubs on each muon station and then to measure the momentum of the muon by correlating...
LHCb is optimised to study particles decaying a few millimetres from the primary vertex using tracks that traverse the length of the detector. Recently, extensive efforts have been undertaken to enable the study of long-lived particles decaying within the magnet region, up to 7.5 m from the interaction point. This approach presents several challenges, particularly when considering real-time...
Dense hadronic environments encountered, for example, in the core of high-transverse-momentum jets, present specific challenges for the reconstruction of charged-particle trajectories (tracks) in the ATLAS tracking detectors, as they are characterised by a high density of ionising particles. The charge clusters left by these particles in the silicon sensors are more likely to merge with...
Over the next decade, increases in instantaneous luminosity and detector granularity will increase the amount of data that has to be analyzed by high-energy physics experiments, whether in real time or offline, by an order of magnitude. The reconstruction of charged
particles, which has always been a crucial element of offline data processing pipelines, must increasingly be deployed from the...
In particle physics experiments, hybrid pixel detectors are an integral part of the tracking systems closest to the interaction points. Utilising excellent spatial resolution and high radiation resilience, they are used for particle tracking via the “connecting the dots” method seen in layers of an onion-like structure. In the context of the Medipix Collaborations, a novel, complementary...
The Exa.TrkX Graph Neural Network (GNN) for reconstruction of liquid argon time projection chamber (LArTPC) data is a message-passing attention network over a heterogeneous graph structure, with separate subgraphs of 2D nodes (hits in each plane) connected across planes via 3D nodes (space points). The model provides a consistent description of the neutrino interaction across all...
The upcoming High Luminosity phase of the Large Hadron Collider (HL-LHC) represents a steep increase in pileup rate ($\left\langle\mu \right\rangle = 200$) and computing resources for offline reconstruction of the ATLAS Inner Tracker (ITk), for which graph neural networks (GNNs) have been demonstrated as a promising solution. The GNN4ITk pipeline has successfully employed a GNN architecture...
ATLAS Run 3 will conclude as planned in late 2025 and will be followed by the so-called Long Shutdown 3. During this period, all activities dedicated exclusively to Run 4 will converge on closing the prototyping development and starting production and integration, in order to reach data collection in 2029. These upgrades are principally driven by the increase of the peak of...
The High-Luminosity LHC shall be able to provide a maximum peak luminosity of 5 × $10^{34}$ cm$^{-2}$s$^{-1}$, corresponding to an average of 140 simultaneous p-p interactions per bunch crossing (pile-up), at the start of Run 4, around 2028. The ATLAS experiment will go through major changes to adapt to the high-luminosity environment, in particular in the DAQ architecture and in the trigger...
Since 2022, the LHCb detector has been taking data with a full software trigger at the LHC proton-proton collision rate, implemented on GPUs in the first stage and CPUs in the second stage. This setup makes it possible to perform the alignment and calibration online and to run physics analyses directly on the output of the online reconstruction, following the real-time analysis paradigm.
This talk will...
Long-lived particles (LLPs) are present in the SM and in many new physics scenarios beyond it, but they are very challenging to reconstruct at the LHC due to their very displaced vertices. A new algorithm, called "Downstream", has been developed at LHCb which is able to reconstruct and select LLPs in real time at the first level of the trigger (HLT1). It is executed on GPUs inside the Allen...
The performance of the Inner Detector tracking trigger of the ATLAS experiment at
the Large Hadron Collider (LHC) is evaluated for the data taken for LHC Run-3 during 2022.
Included are results from the evolved standard trigger track reconstruction, and from new
unconventional tracking strategies used in the trigger for the first time in Run-3.
From Run-3, the application of Inner...
Kalman Filter (KF)-based tracking algorithms are used by many collider experiments to reconstruct charged-particle trajectories with great performance. The input to such algorithms are usually point estimates of a particle's crossing on a detector's sensitive elements, known as measurements. For instance, in a pixel detector, connected component analysis is typically used to yield...
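The connected-component-analysis step mentioned above can be sketched as a flood fill over fired pixels, with each cluster reduced to a single point estimate. The charge-weighted centroid, 4-connectivity, and toy data below are illustrative choices, not any experiment's actual measurement model.

```python
# Sketch of connected-component analysis on a pixel matrix: adjacent fired
# pixels are grouped into clusters, and each cluster is reduced to one
# measurement (here a charge-weighted centroid). Toy data, 4-connectivity.
from collections import deque

def pixel_clusters(charge):
    # charge: dict {(row, col): deposited charge} for fired pixels only
    unvisited = set(charge)
    measurements = []
    while unvisited:
        start = unvisited.pop()
        comp, queue = [start], deque([start])
        while queue:
            r, c = queue.popleft()
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in unvisited:
                    unvisited.discard(nb)
                    comp.append(nb)
                    queue.append(nb)
        q = sum(charge[p] for p in comp)
        # charge-weighted centroid as the cluster's point estimate
        row = sum(p[0] * charge[p] for p in comp) / q
        col = sum(p[1] * charge[p] for p in comp) / q
        measurements.append((row, col, q))
    return sorted(measurements)

fired = {(0, 0): 1.0, (0, 1): 3.0, (5, 5): 2.0}
print(pixel_clusters(fired))
```

Each resulting (row, col, charge) tuple is the kind of point estimate that a KF-based algorithm would then consume as a measurement.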
Over the last years, the ACTS software has matured in functionality and performance while at the same time the Open Data Detector (ODD), a revision and evolution of the TrackML detector, has been established. Together they form a foundation for algorithmic research and performance evaluation also for detectors with time measurements, like the ODD. In this contribution we present the...
The application of deep learning models in particle tracking is pervasive. Graph neural networks are applied in track finding, deep learning models in resolving merged tracks, transformers in jet flavor tagging, and GravNet or its variations in one-shot track finding. The current practice is to design one deep learning model for one task. However, these tasks are so deeply intertwined that...
The Mu2e experiment plans to search for neutrinoless muon to electron conversion in the field of a nucleus. Such a process violates lepton flavor conservation. To perform this search, a muon beam is focused on an aluminum target, the muons are stopped in the field of the aluminum nucleus, and electrons emitted from subsequent muon decays in orbit are measured. The endpoint energy for this...
Seed finding is an important and computationally expensive problem in the reconstruction of charged particle tracks; finding solutions to this problem involves forming triples (seeds) of discrete points at which particles were detected (spacepoints) in the detector volume. This combinatorial process scales cubically with the number of spacepoints, which in turn is expected to increase in...
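The cubic scaling described above becomes concrete in a brute-force sketch: every triple across three layers is tested against a compatibility cut. The straight-line extrapolation (which assumes equally spaced layers) and the cut value are toy choices; real seed finders use curvature-aware cuts and spatial indexing to prune the loops.

```python
# Illustrative triplet seed finder: every triple of spacepoints across three
# detector layers is tested with a crude collinearity cut. The explicit
# O(n^3) loop shows the cubic combinatorial scaling; cuts are toy values.

def find_seeds(layer1, layer2, layer3, max_dev=0.2):
    seeds = []
    for a in layer1:
        for b in layer2:
            for c in layer3:
                # linear extrapolation a -> b, compared with c
                # (assumes equally spaced layers; toy straight-line cut)
                pred = (2 * b[0] - a[0], 2 * b[1] - a[1])
                dev = abs(pred[0] - c[0]) + abs(pred[1] - c[1])
                if dev < max_dev:
                    seeds.append((a, b, c))
    return seeds

layer1 = [(0.0, 0.0)]
layer2 = [(1.0, 1.0), (1.0, -1.0)]
layer3 = [(2.0, 2.05)]
print(find_seeds(layer1, layer2, layer3))  # one compatible triplet survives
```

With n spacepoints per layer the triple loop performs n³ compatibility tests, which is why the expected growth in spacepoint counts makes this step so expensive.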
For the tracker systems used in experiments like the large LHC experiments, a track based alignment with offline software is performed. The standard approach involves minimising the residuals between the measured and track-predicted hits using the $\chi^2$ method. However, this minimisation process involves solving a complex and computationally expensive linearised matrix equation. A new...
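In standard notation (assumed here: residuals $r_i$, measurement covariances $V_i$, alignment parameters $a$, linearisation point $a_0$), the $\chi^2$ minimisation and the linearised matrix equation referred to above take the form

```latex
\chi^2(a) = \sum_i r_i^{T}(a)\, V_i^{-1}\, r_i(a),
\qquad
\Big( \sum_i J_i^{T} V_i^{-1} J_i \Big)\, \delta a
  = -\sum_i J_i^{T} V_i^{-1}\, r_i(a_0),
\qquad
J_i = \left. \frac{\partial r_i}{\partial a} \right|_{a_0}.
```

The left-hand matrix has one row and column per alignment parameter, which for a large tracker can reach tens of thousands; solving this system is the computational bottleneck the abstract refers to.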
Particle physics experiments often require the simultaneous reconstruction of many interaction vertices. This task is complicated by track reconstruction errors, which are frequently larger than the typical vertex-vertex distances in physics problems. Usually, the vertex finding problem is solved by ad hoc heuristic algorithms. We propose a universal approach to address the multiple vertex...
Applying graph-based techniques, and graph neural networks (GNNs) in particular, has been shown to be a promising solution [1-3] to the high-occupancy track reconstruction problems posed by the upcoming HL-LHC era. Simulations of this environment present noisy, heterogeneous and ambiguous data, which previous GNN-based algorithms for ATLAS ITk track reconstruction could not handle natively. We...
Detailed knowledge of the radiation environment in space is an indispensable prerequisite of any space mission in low Earth orbit or beyond. The RadMap Telescope is a compact multi-purpose radiation detector that provides near real-time monitoring of the radiation aboard crewed and uncrewed spacecraft. A first prototype is currently deployed on the International Space Station for an in-orbit...
FASER, the ForwArd Search ExpeRiment, is an LHC experiment located 480 m downstream of the ATLAS interaction point along the beam collision axis. FASER is designed to detect TeV-energy neutrinos and search for new light weakly-interacting particles produced in pp collisions at the LHC. FASER has been taking collision data since the start of LHC Run 3 in July 2022. The first physics results...
The SuperKEKB accelerator and the Belle II experiment constitute the second-generation asymmetric energy B-factory. SuperKEKB has recently set a new world record in instantaneous luminosity, which is anticipated to further increase during the upcoming run periods up to $6 \times 10^{35}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}$. An increase in luminosity is challenging for the track finding as it comes at the cost of a...
In this work, we present a study on ways that tracking algorithms can be improved with machine learning (ML). We base this study on a line-segment-based tracking (LST) algorithm that we have designed to be naturally parallelized and vectorized in order to efficiently run on modern processors. LST has been developed specifically for the Compact Muon Solenoid (CMS) Experiment at the LHC, towards...
Track reconstruction is a vital aspect of High-Energy Physics (HEP) and plays a critical role in major experiments. In this study, we delve into unexplored avenues for particle track reconstruction and hit clustering. Firstly, we enhance the algorithmic design by utilizing a "simplified simulator" (REDVID) to generate training data that is specifically designed for simplicity. We demonstrate...
The application of graph neural networks (GNN) in track reconstruction is a promising approach to cope with the challenges that will come with the HL-LHC. They show both good track-finding performance in high pile-up scenarios and are naturally parallelizable on heterogeneous compute architectures.
Typical HEP detectors have a high resolution in the innermost layers in order to support...
Graph Neural Network (GNN) models proved to perform well on the particle track finding problem, where traditional algorithms become computationally complex as the number of particles increases, limiting the overall performance. GNNs can capture complex relationships in event data represented as graphs. However, training on large graphs is challenging due to computation and GPU memory...
Accurate track reconstruction is essential for high sensitivity to beyond Standard Model (BSM) signatures. However, many BSM particles undergo interactions that produce non-helical trajectories, which are difficult to incorporate into traditional tracking techniques. One such signature is produced by "quirks", pairs of particles bound by a new, long-range confining force with a confinement...
At the High Luminosity phase of the LHC (HL-LHC), experiments will be exposed to numerous (approx. 140) simultaneous proton-proton collisions. To cope with such harsh environments, the CMS Collaboration is designing a new endcap calorimeter, referred to as the High-Granularity Calorimeter (HGCAL).
As part of the detector upgrade, a novel reconstruction framework (TICL: The Iterative...
We present a project proposal aimed at improving the efficiency and accuracy of Primary Vertex (PV) identification within the ‘A Common Tracking Software’ (ACTS) framework using deep learning techniques. Our objective is to establish a primary vertex finding algorithm with enhanced performance for LHC-like environments. This work is focused on finding PVs in simulated ACTS data using a...
Reconstructing the trajectories of charged particles from the collection of hits they leave in the detectors of collider experiments like those at the Large Hadron Collider (LHC) is a challenging and computationally intensive combinatorics problem. The ten-fold increase in the delivered luminosity at the upgraded High Luminosity LHC will result in a very densely populated detector environment....
Quantum computing techniques have recently gained significant attention in the field. Compared to traditional computing techniques, quantum computing could offer potential advantages for high-energy physics experiments. Particularly in the era of HL-LHC, effectively handling large amounts of data with modest resources is a primary concern. Particle tracking is one of the tasks predicted to be...
The upgraded LHCb detector started its Run 3 data taking in 2022, with a completely overhauled DAQ system, reading out and processing the full detector data at every LHC bunch crossing (30 MHz average rate). At the same time, an intense R&D activity is taking place, with the aim of further improving the real-time data processing performance of LHCb, in view of a further luminosity upgrade of...
Graph neural networks have emerged as a powerful tool in various physics studies, particularly in the analysis of sparse and heterogeneous data. However, as the field of particle physics advances towards utilizing graphs in high-luminosity scenarios, a new challenge has emerged: efficient graph creation. While GNN inference is highly optimized, graph creation has not received the same level of...
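A common graph-creation strategy in this context is k-nearest-neighbour search in some embedding space; the brute-force sketch below shows the principle on toy 2D points. Production pipelines replace the O(n²) loop with accelerated neighbour search (KD-trees or GPU libraries), which is exactly the optimization gap the abstract points at.

```python
# Brute-force sketch of graph creation by k-nearest-neighbour search in an
# embedding space. Toy 2D "embeddings"; real pipelines use accelerated
# neighbour-search structures instead of this O(n^2) scan.
import math

def knn_edges(points, k=2):
    edges = set()
    for i, p in enumerate(points):
        dists = sorted(
            (math.dist(p, q), j) for j, q in enumerate(points) if j != i)
        for _, j in dists[:k]:
            edges.add((min(i, j), max(i, j)))   # undirected, deduplicated
    return sorted(edges)

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
print(knn_edges(pts, k=2))
```

The resulting edge list is what a GNN would then consume, so any cost in building it sits directly on the inference critical path.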
Finding track segments downstream of the magnet is one of the most important and computationally expensive tasks of the first stage of the new GPU-based software trigger of the LHCb Upgrade I, which started operation in Run 3. These segments are essential to form all good physics tracks with a very high-precision momentum measurement, when combined with those reconstructed in the vertex...
The success of the CMS physics program at the HL-LHC requires maintaining sufficiently low trigger thresholds to select processes at the electroweak scale. With an average of 200 expected pileup interactions, the inclusion of tracking in the L1 trigger is critical to achieving this goal while maintaining manageable trigger rates. A 40 MHz silicon-tracker based track trigger on the scale of the...
Detecting the signals of very low-pT muons with traditional track reconstruction algorithms, such as Kalman filters, is very challenging. In the case of a tau lepton decaying into three muons, the signature includes three very low-pT muons in the forward region of the CMS detector. Some or all of these muons might not carry enough pT to reach all stations of the CMS muon system. Even...
The LHCb Upgrade in Run 3 has changed its trigger scheme to a full software selection in two steps. The first step, HLT1, will be entirely implemented on GPUs and run a fast selection aiming at reducing the visible collision rate from 30 MHz to 1 MHz.
This selection relies on a partial reconstruction of the event. A version of this reconstruction starts with two monolithic tracking...
Track finding in high-density environments is a key challenge for experiments at modern accelerators. In this presentation we describe the performance obtained running machine learning models studied for the ATLAS Muon High Level Trigger. These models are designed for hit position reconstruction and track pattern recognition with a tracking detector, on a commercially available Xilinx FPGA:...
The High-Luminosity LHC (HL-LHC) will provide an order of magnitude increase in integrated luminosity and enhance the discovery reach for new phenomena. The increased pile-up foreseen during the HL-LHC necessitates major upgrades to the ATLAS detector and trigger. The Phase-II trigger will consist of two levels, a hardware-based Level-0 trigger and an Event Filter (EF) with tracking...
The high-luminosity upgrade of the LHC aims to better probe the Higgs potential and self-coupling. The Event Filter task force has been charged with exploring novel approaches to charged-particle tracking to be employed in the upgraded ATLAS trigger system, capable of analyzing high-luminosity events in real time. We present a neural network (NN) based approach to predicting and identifying...