Speaker
Description
The PV-finder algorithm employs a hybrid deep neural network to reconstruct the positions of primary vertices (PVs) in proton-proton collisions at the LHC. The algorithm was originally developed for LHCb, but it has been adapted successfully to the much higher pile-up environment of ATLAS. PV-finder combines fully connected layers that perform track-by-track calculations with a convolutional neural network to predict “target histograms” from which PV positions are extracted using a simple heuristic algorithm. The LHCb version of PV-finder achieves an efficiency greater than 97% with a false positive rate near 0.03 per event. LHCb uses a software-only trigger in Run 3, whose first-level trigger (Hlt1) has been implemented on GPUs in the Allen software framework. PV-finder was developed using PyTorch, and deploying its inference engine in Allen presents a number of challenges: Allen schedules its own threads and has its own memory management, while the LibTorch and cuDNN libraries schedule threads themselves and expect to allocate memory, so they cannot be used directly in Allen. Instead, a translational layer converts cuDNN methods into equivalent methods that work inside Allen. The design of the full inference engine deployed in Allen, including the design of the translational layer, and its performance will be discussed.
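The hybrid structure described above — per-track fully connected layers whose outputs are combined and refined by a convolutional network into a target histogram — can be sketched in PyTorch. This is a minimal illustrative model, not the published PV-finder network: the layer sizes, the 6-parameter track representation, and the 100-bin histogram are all assumptions chosen for brevity.

```python
# Hypothetical sketch of a hybrid PV-finder-style network in PyTorch.
# Per-track MLP -> summed histogram contributions -> 1-D CNN refinement.
# All dimensions here are illustrative, not those of the real model.
import torch
import torch.nn as nn

N_BINS = 100  # coarse stand-in for the real target-histogram binning


class HybridPVFinder(nn.Module):
    def __init__(self):
        super().__init__()
        # Track-by-track fully connected layers: map each track's
        # parameters to a non-negative histogram contribution.
        self.track_mlp = nn.Sequential(
            nn.Linear(6, 32), nn.ReLU(),
            nn.Linear(32, N_BINS), nn.Softplus(),
        )
        # 1-D CNN: refines the summed contributions into the final
        # target histogram from which PV peaks are later extracted.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(8, 1, kernel_size=5, padding=2), nn.Softplus(),
        )

    def forward(self, tracks):
        # tracks: (n_tracks, 6) tensor of per-track parameters
        per_track = self.track_mlp(tracks)       # (n_tracks, N_BINS)
        summed = per_track.sum(dim=0)            # (N_BINS,)
        hist = self.cnn(summed.view(1, 1, -1))   # (1, 1, N_BINS)
        return hist.squeeze()                    # (N_BINS,)


model = HybridPVFinder()
target_hist = model(torch.randn(20, 6))  # 20 toy tracks in, one histogram out
```

The per-track computation is embarrassingly parallel, which is what makes the track-by-track stage a natural fit for GPU execution inside a framework like Allen.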
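The "simple heuristic algorithm" for reading PV positions off a target histogram can be illustrated as follows. This is a hedged sketch of one plausible heuristic — cluster contiguous above-threshold bins and take each cluster's weighted centroid — not the production implementation; the function name, threshold, and toy data are invented for the example.

```python
# Illustrative peak extraction from a predicted target histogram:
# group contiguous bins above a threshold into clusters and return
# the intensity-weighted centroid of each cluster as a PV position.
def extract_pvs(hist, bin_edges, threshold=0.1):
    """Return one z position per above-threshold cluster in `hist`."""
    centers = [(bin_edges[i] + bin_edges[i + 1]) / 2 for i in range(len(hist))]
    pvs, cluster = [], []
    for h, z in zip(hist, centers):
        if h > threshold:
            cluster.append((h, z))
        elif cluster:
            weight = sum(h for h, _ in cluster)
            pvs.append(sum(h * z for h, z in cluster) / weight)
            cluster = []
    if cluster:  # flush a cluster that reaches the histogram edge
        weight = sum(h for h, _ in cluster)
        pvs.append(sum(h * z for h, z in cluster) / weight)
    return pvs


# Toy histogram with two well-separated peaks
hist = [0.0, 0.2, 0.8, 0.2, 0.0, 0.0, 0.5, 0.5, 0.0, 0.0]
edges = [float(i) for i in range(11)]
print(extract_pvs(hist, edges))  # two PV candidates, near z = 2.5 and z = 7.0
```

A cheap, deterministic post-processing step like this keeps the expensive learned part of the pipeline separate from the final vertex-position readout.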