Description
The LHCb experiment at CERN operates a high-precision and robust tracking system to reach its physics goals, including precise measurements of CP-violation phenomena in the heavy-flavour quark sector and searches for New Physics beyond the Standard Model. Since Run 2, the experiment has employed a new trigger strategy with real-time reconstruction, alignment and calibration, imposing strong constraints on the execution of the track reconstruction in terms of timing and throughput. To meet these constraints, fast machine-learning techniques are used in the reconstruction algorithms: they reject fake tracks at an early stage, speeding up execution and additionally providing tracks of better quality. The latest example in this direction is the so-called downstream algorithm, which reconstructs tracks from long-lived particles, i.e. particles decaying after the vertex detector. Adopted in Run 2 data taking since 2015, its performance, which depends on the purity of the reconstructed track samples, was improved in 2016 with two filters based on a binned Boosted Decision Tree (bBDT) and a neural network.
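To illustrate the binned-BDT idea mentioned above, the sketch below trains a small gradient-boosted classifier on synthetic track features and then precomputes its response on a fixed grid of feature bins, so that online evaluation reduces to a constant-time table lookup. This is a minimal, hypothetical example: the feature names, distributions and binning are invented for illustration and are not the actual LHCb variables or implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic track features (illustrative only, not LHCb data):
# track-fit chi2/ndf and transverse momentum pT [GeV].
n = 2000
real = np.column_stack([rng.gamma(2.0, 1.0, n), rng.exponential(2.0, n)])
fake = np.column_stack([rng.gamma(4.0, 1.5, n), rng.exponential(0.8, n)])
X = np.vstack([real, fake])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = real track, 0 = fake

clf = GradientBoostingClassifier(n_estimators=50, max_depth=3).fit(X, y)

# "Binned" BDT: discretise each feature into a fixed grid and evaluate the
# classifier once per grid cell offline, so the trigger never walks the
# trees per candidate track.
chi2_edges = np.linspace(0.0, 15.0, 31)
pt_edges = np.linspace(0.0, 10.0, 21)
chi2_centres = 0.5 * (chi2_edges[:-1] + chi2_edges[1:])
pt_centres = 0.5 * (pt_edges[:-1] + pt_edges[1:])
grid = np.array([[c, p] for c in chi2_centres for p in pt_centres])
table = clf.predict_proba(grid)[:, 1].reshape(len(chi2_centres), len(pt_centres))

def bbdt_response(chi2, pt):
    """O(1) lookup of the precomputed BDT response for one track."""
    i = np.clip(np.searchsorted(chi2_edges, chi2) - 1, 0, len(chi2_centres) - 1)
    j = np.clip(np.searchsorted(pt_edges, pt) - 1, 0, len(pt_centres) - 1)
    return table[i, j]

# A low-chi2, high-pT candidate should score more "real" than a
# high-chi2, low-pT one under these synthetic distributions.
good = bbdt_response(1.0, 4.0)
bad = bbdt_response(12.0, 0.2)
```

The design trade-off this captures is the one the abstract alludes to: binning the input features costs some discrimination power but makes the per-track cost constant and predictable, which is what a real-time trigger needs.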
In this talk, the computational-intelligence aspects of track reconstruction in LHCb will be discussed, with a focus on adapting the machine-learning techniques employed to the real-time high-level-trigger environment.
Intended contribution length: 20 minutes