Description
Track reconstruction, also known as tracking, is a crucial part of High Energy Physics experiments. Traditional methods for this task rely on Kalman filters and scale poorly with detector occupancy. In the context of the upcoming High-Luminosity LHC (HL-LHC), solutions based on Machine Learning (ML) and deep learning are therefore very appealing. We investigate the feasibility of training multiple ML architectures to infer track-defining parameters from detector signals, targeting offline reconstruction. We study and compare three Transformer designs as well as a U-Net design. First, we consider an autoregressive Transformer with the original encoder-decoder architecture, which reconstructs a particle's trajectory given a few initial hits. Second, we employ an encoder-only model that regresses track-parameter values for each hit in an event, followed by a clustering step. Third, we consider an encoder-only model acting as a classifier, which assigns each hit in an event a class label corresponding to pre-defined bins in the track-parameter space. Lastly, similar to the third Transformer design, we evaluate a U-Net model that classifies pixels into pre-defined classes.
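The classification-based designs above require turning continuous track parameters into discrete class labels. The abstract does not specify which parameters or bin counts are used; as a minimal sketch, assuming two angular track parameters (theta, phi) and a hypothetical uniform binning, the label construction could look like:

```python
import numpy as np

def track_params_to_class(theta, phi, n_theta_bins=16, n_phi_bins=32):
    """Map continuous track parameters to a discrete class label.

    Hypothetical binning, for illustration only: theta is assumed in
    [0, pi) and phi in [-pi, pi); each (theta, phi) pair is assigned
    the flat index of its 2D bin in the track-parameter space.
    """
    # Uniformly bin each parameter, clipping boundary values into range.
    t_idx = np.clip((theta / np.pi * n_theta_bins).astype(int),
                    0, n_theta_bins - 1)
    p_idx = np.clip(((phi + np.pi) / (2 * np.pi) * n_phi_bins).astype(int),
                    0, n_phi_bins - 1)
    # Flatten the 2D bin index into a single class label.
    return t_idx * n_phi_bins + p_idx
```

With this scheme a model predicts one of n_theta_bins * n_phi_bins classes per hit (or per pixel, for the U-Net variant); the actual parameterisation and granularity in the study may differ.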
The models are benchmarked for physics performance and inference speed on methodically simplified datasets generated by the recently developed simulation framework REDuced VIrtual Detector (REDVID). A second batch of simplified datasets is derived from the TrackML dataset. Our preliminary results show promise for applying such deep learning techniques to more realistic tracking data, as well as for the efficient elimination of candidate solutions.
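The inference-speed benchmark mentioned above can be illustrated with a generic timing harness; the model interface and warm-up policy here are assumptions for the sketch, not details from the study:

```python
import time

def benchmark_inference(model_fn, events, warmup=2):
    """Measure mean per-event inference latency in seconds.

    model_fn is any callable mapping one event's hits to predictions;
    this interface is hypothetical and not specific to REDVID or TrackML.
    """
    # Warm-up runs are excluded from timing (caches, JIT, lazy init).
    for ev in events[:warmup]:
        model_fn(ev)
    start = time.perf_counter()
    for ev in events:
        model_fn(ev)
    return (time.perf_counter() - start) / len(events)
```

Comparing such per-event latencies across the four designs, on datasets of matched size, is one simple way to realise the speed comparison described in the abstract.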