In 2026 the Phase-II Upgrade will transform the LHC into the High-Luminosity LHC, raising the luminosity to up to seven times the nominal LHC value. This increases the number of interesting events and may open the door to the detection of new physics. However, it also leads to a major increase in proton-proton collisions producing mostly low-energy hadronic particles, called pile-up: up to 200 simultaneous collisions per LHC bunch crossing are expected. This puts higher demands on the ATLAS detector electronics and its real-time data processing capabilities. The Liquid Argon calorimeter measures the energies of particles produced in LHC collisions. These energies are used by the trigger to decide whether an event might be interesting and is therefore worth saving for further investigation. The deposited energy is computed in real time on FPGAs, which are chosen for their capacity to process large amounts of data at very low latency. Currently, the energy is calculated by an optimal filtering algorithm. This filter was adapted to LHC conditions with low pile-up, but studies under High-Luminosity LHC conditions showed a significant decrease in performance. In particular, a new trigger scheme that allows trigger-accept signals in successive LHC bunch crossings will challenge the energy readout. Neither further extensions nor tuning of the optimal filter could recover the performance, which is why more sophisticated algorithms such as artificial neural networks came into focus. Convolutional neural networks have proven to be a promising alternative. However, the computational power available on the FPGA is tightly limited, so these networks need to have a low resource consumption. We developed networks that not only fulfill these requirements but also show performance improvements under various signal conditions.
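As a rough illustration of the legacy approach, an optimal filter estimates the deposited energy as a fixed weighted sum of consecutive ADC samples of the shaped calorimeter pulse. The sketch below is a minimal NumPy version under that assumption; the coefficients, pedestal, and sample values are illustrative placeholders, not the actual ATLAS calibration constants.

```python
import numpy as np

def optimal_filter_energy(samples, coeffs, pedestal=0.0):
    """Weighted sum of pedestal-subtracted ADC samples.

    In an optimal filter the coefficients are chosen offline to
    minimize the noise and pile-up contribution to the estimate;
    here they are just example numbers.
    """
    samples = np.asarray(samples, dtype=float)
    return float(np.dot(coeffs, samples - pedestal))

# Illustrative: four samples around the pulse peak, made-up weights.
coeffs = np.array([-0.1, 0.4, 0.8, -0.3])
samples = [100.0, 180.0, 240.0, 150.0]
energy = optimal_filter_energy(samples, coeffs, pedestal=90.0)  # 137.0
```

Because the estimate is a single dot product per bunch crossing, it maps naturally onto FPGA multiply-accumulate resources, which is part of why this filter has been the baseline.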
For overlapping signals in particular, the convolutional neural networks outperform the legacy filter algorithm. Two types of network architecture will be discussed. The first uses dilation to enlarge its field of view, allowing the network to draw on more information from past signal occurrences while keeping the total number of network parameters low. The second uses a so-called tagging layer to first detect signal overlaps and then compute the energy with this additional information. Their performance with respect to different measures will be compared to the legacy system. Furthermore, their semi-automated implementation in firmware will be presented. Calculations on the FPGA use fixed-point arithmetic, which is why quantization-aware training is applied. Performance enhancements utilize time-division multiplexing as well as bit-width optimization. We show that the stringent latency requirement (of the order of 100 ns) can be met. Implementation results on Intel Agilex FPGAs will be shown, including resource usage and operating frequency.
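To make the dilation idea concrete, the sketch below shows a causal dilated 1-D convolution in plain NumPy: with dilation d, each output depends on inputs d samples apart, so stacked layers see far into the past without extra parameters. This is a generic illustration of the technique, not the actual network used in the firmware, and all names and values are assumptions.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """Causal 1-D convolution: out[t] = sum_i kernel[i] * x[t - i*d].

    Increasing the dilation d stretches the filter's reach into the
    past while the parameter count (len(kernel)) stays unchanged.
    """
    out = np.zeros(len(x))
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(kernel):
            idx = t - i * dilation
            if idx >= 0:  # samples before the start are treated as zero
                acc += w * x[idx]
        out[t] = acc
    return out

def receptive_field(kernel_size, dilations):
    """Receptive field of stacked causal layers with given dilations."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# Three stacked layers of kernel size 3 with dilations 1, 2, 4
# already cover 15 past samples with only 9 weights in total.
span = receptive_field(3, [1, 2, 4])  # 15
```

The same trade-off matters on the FPGA: a larger receptive field via dilation costs extra delay registers, not extra multipliers.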
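The fixed-point constraint can be sketched as follows: quantization-aware training exposes the network, already during training, to values rounded onto a fixed-point grid and clipped to the representable range, so that the trained weights tolerate the FPGA arithmetic. The helper below is a minimal, generic model of that rounding step; the bit widths are illustrative, not the ones chosen in the actual firmware.

```python
def to_fixed_point(x, int_bits=4, frac_bits=8):
    """Round x to a signed fixed-point grid with 2**-frac_bits steps.

    The representable range for a signed (int_bits + frac_bits)-bit
    value is [-2**(int_bits-1), 2**(int_bits-1) - 2**-frac_bits];
    values outside it saturate. Bit widths here are placeholders.
    """
    scale = 2 ** frac_bits
    lo = -(2 ** (int_bits - 1))
    hi = 2 ** (int_bits - 1) - 1.0 / scale
    q = round(x * scale) / scale          # snap to the grid
    return min(max(q, lo), hi)            # saturate at the range limits

value = to_fixed_point(0.123)   # 0.12109375, the nearest grid point
```

Shrinking `frac_bits` trades precision for FPGA resources, which is the bit-width optimization mentioned above; quantization-aware training keeps the accuracy loss from that trade-off small.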