Speaker
Description
As the High-Luminosity LHC (HL-LHC) era approaches, significant improvements in
reconstruction software are required to keep pace with the increased data rates and
detector complexity. A persistent challenge for high-throughput event reconstruction is
the estimation of track parameters, which is traditionally performed using iterative
Kalman Filter-based algorithms. While GPU-based track finding is progressing rapidly, the
fitting stage remains a bottleneck. The main slowdown is coming from data movement
between CPU and GPU which reduce the benefits of acceleration.
This work investigates a deep learning-based alternative that uses Transformer
architectures to predict track parameters. We evaluate the approach in a realistic
setting using the ACTS software framework with the Open Data Detector (ODD) geometry on
full simulation, with a Kalman Filter fit as the baseline, and observe promising results.
Significance
This work addresses a key bottleneck in adapting track reconstruction workflows to GPU-
accelerated computing environments: the lack of a GPU-native, high-performance
alternative to Kalman Filter-based track fitting. This study presents a Transformer-based
regression model applied to samples produced with the ACTS framework under realistic
detector conditions. It represents the first application of such a model with the Open Data
Detector and a full reconstruction pipeline.
Experiment context, if any: HL-LHC, ATLAS or CMS, ACTS Open Data Detector