Track reconstruction is a crucial part of High Energy Physics (HEP) experiments. Traditional methods for the task scale poorly, making machine learning and deep learning appealing alternatives. Following the success of Transformers in language processing, we investigate the feasibility of training a Transformer to translate detector signals into track parameters. We study and compare three architectures: first, an autoregressive Transformer with the original encoder-decoder architecture, which reconstructs a particle's trajectory given a few initial hits; second, an encoder-only model used as a classifier, which assigns each hit in an event a class label corresponding to pre-defined bins in track parameter space; and third, an encoder-only model that regresses track parameter values for each hit in an event, followed by clustering.
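To make the second architecture concrete, the following is a minimal illustrative sketch (not the authors' code) of an encoder-only attention block acting as a per-hit classifier. It assumes hits are already embedded as d-dimensional vectors and that track parameters are discretised into `n_bins` classes; all weights and dimensions here are hypothetical toy values.

```python
# Illustrative sketch of an encoder-only, per-hit classifier:
# a single self-attention block followed by a feed-forward layer
# and a softmax over pre-defined track-parameter bins. NumPy only,
# single head, toy dimensions -- for exposition, not the real model.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (n_hits, d) -- all hits in one event attend to each other
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

def encoder_classifier(X, params):
    # One attention block with residual connection, a ReLU
    # feed-forward block, then per-hit class probabilities.
    Wq, Wk, Wv, W1, W2, Wout = params
    H = X + self_attention(X, Wq, Wk, Wv)
    H = H + np.maximum(H @ W1, 0.0) @ W2
    return softmax(H @ Wout)            # shape: (n_hits, n_bins)

d, n_bins, n_hits = 16, 8, 10           # toy sizes (assumptions)
params = [rng.normal(0.0, 0.1, s) for s in
          [(d, d), (d, d), (d, d), (d, 4 * d), (4 * d, d), (d, n_bins)]]
hits = rng.normal(size=(n_hits, d))     # stand-in for embedded hits
probs = encoder_classifier(hits, params)
print(probs.shape)                      # (10, 8)
```

Because the attention is all-to-all within an event, each hit's class prediction can use the full spatial context of the other hits, which is what distinguishes this from a hit-by-hit classifier.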
The Transformer models are benchmarked on simplified datasets generated by the recently developed simulation framework REDuced VIrtual Detector (REDVID), as well as on a subset of the TrackML dataset. The preliminary results of the proposed models show promise for applying these deep learning techniques to more realistic data for particle reconstruction.
This work has been previously presented at the following conferences: Connecting The Dots 2023 (https://indico.cern.ch/event/1252748/contributions/5521505/), NNV 2023 (https://indico.nikhef.nl/event/4510/contributions/18909/), and ML4Jets2023 (https://indico.cern.ch/event/1253794/contributions/5588602/).