Description
For the High-Luminosity Large Hadron Collider era, the trigger and data acquisition system of the Compact Muon Solenoid experiment will be entirely replaced. Novel design choices have been explored, including ATCA prototyping platforms with SoC controllers and newly available interconnect technologies featuring serial optical links with data rates of up to 28 Gb/s. Trigger data analysis will be performed by sophisticated algorithms, including widespread use of Machine Learning, in large FPGAs such as the Xilinx UltraScale family. The system will process over 50 Tb/s of detector data at an event rate of 750 kHz. The talk will discuss the technological and algorithmic aspects of the upgrade of the CMS trigger system, emphasizing the use of low-latency Machine Learning and AI algorithms, illustrated with several examples.
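As a rough back-of-the-envelope illustration of these figures (not part of the talk itself), the short Python sketch below estimates how many 28 Gb/s serial links would be needed to carry 50 Tb/s and how much trigger data arrives per LHC bunch crossing. Only the 50 Tb/s throughput and the 28 Gb/s link rate come from the abstract; the 40 MHz bunch-crossing rate and the 80% link-utilization factor are assumptions.

# Back-of-envelope sketch: rough scale of the optical-link plane implied
# by the quoted figures. Utilization and bunch-crossing rate are assumed.

DETECTOR_THROUGHPUT_TBPS = 50   # detector data into the trigger, Tb/s (from abstract)
LINK_RATE_GBPS = 28             # serial optical link speed, Gb/s (from abstract)
LINK_UTILIZATION = 0.80         # assumed usable fraction after protocol overhead
BUNCH_CROSSING_RATE_MHZ = 40    # nominal LHC bunch-crossing rate (assumed)

links_needed = DETECTOR_THROUGHPUT_TBPS * 1e3 / (LINK_RATE_GBPS * LINK_UTILIZATION)
kb_per_crossing = DETECTOR_THROUGHPUT_TBPS * 1e12 / (BUNCH_CROSSING_RATE_MHZ * 1e6) / 8e3

print(f"~{links_needed:.0f} optical links at {LINK_RATE_GBPS} Gb/s "
      f"(assuming {LINK_UTILIZATION:.0%} utilization)")
print(f"~{kb_per_crossing:.0f} kB of trigger data per {BUNCH_CROSSING_RATE_MHZ} MHz bunch crossing")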
Since the beginning of LHC Run 3, the upgraded LHCb experiment has been using a triggerless readout system that collects data at an event rate of 30 MHz and a data rate of 4 TB/s. The trigger system is split into two high-level trigger (HLT) stages. In the first stage (HLT1), implemented on GPGPUs, track reconstruction and vertex fitting for charged particles are performed to reduce the event rate to 1 MHz, at which point the events are buffered to disk. In the second stage (HLT2), deployed on a CPU server farm, a full offline-quality reconstruction and selection of charged and neutral particles is performed, aided by detector alignment and calibration run in quasi-real time on the buffered events. This allows the output of the trigger to be used directly for offline analysis. In this talk we will review the implementation and challenges of the heterogeneous LHCb trigger system, discuss the operational experience and first results of Run 3, and present the prospects for the High-Luminosity LHC era.
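To put the quoted rates in perspective (again, a hedged sketch rather than material from the talk), the Python snippet below follows the readout figures through the two HLT stages. The average event size and the HLT1-to-disk bandwidth are derived directly from the abstract's 4 TB/s at 30 MHz and 1 MHz HLT1 output rate, under the simplifying assumption that buffered events keep roughly their raw size.

# Back-of-envelope sketch of the LHCb Run 3 data-reduction chain,
# using only the rates quoted in the abstract.

INPUT_RATE_MHZ = 30          # triggerless readout event rate, MHz
INPUT_BANDWIDTH_TBPS = 4     # triggerless readout data rate, TB/s
HLT1_OUTPUT_RATE_MHZ = 1     # event rate after GPU-based HLT1, MHz

# Average event size in kB: (TB/s -> kB/s) divided by (MHz -> events/s).
avg_event_size_kb = (INPUT_BANDWIDTH_TBPS * 1e9) / (INPUT_RATE_MHZ * 1e6)

# Rate reduction achieved by HLT1 and the implied bandwidth to the disk buffer,
# assuming buffered events are still roughly raw-sized.
hlt1_reduction = INPUT_RATE_MHZ / HLT1_OUTPUT_RATE_MHZ
buffer_bandwidth_gbs = avg_event_size_kb * HLT1_OUTPUT_RATE_MHZ * 1e6 / 1e6

print(f"average event size            ~{avg_event_size_kb:.0f} kB")
print(f"HLT1 rate reduction           x{hlt1_reduction:.0f}")
print(f"disk-buffer bandwidth (HLT1)  ~{buffer_bandwidth_gbs:.0f} GB/s")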