The ATLAS and CMS experiments at CERN are planning a second phase of upgrades to prepare for the "High Luminosity LHC", with collisions due to start in 2026. In order to deliver an order of magnitude more data than in previous runs, protons will collide at a center-of-mass energy of 14 TeV with an instantaneous luminosity of 7.5 x 10^34 cm^-2 s^-1, resulting in much higher pileup and data rates than the current experiments were designed to handle. While this is essential to realise the physics programme, it is a huge challenge for the detectors, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the trigger and data acquisition systems. With the ATLAS Technical Design Report and the CMS DAQ and L1 interim Technical Design Reports now written, the ATLAS baseline and CMS preliminary designs of the TDAQ upgrades will be described. The ATLAS system comprises a hardware-based, low-latency, real-time Trigger; a Data Acquisition system that combines custom readout with commodity hardware and networking; and an Event Filter that runs offline-like algorithms on a large commodity compute service augmented by fast hardware tracking. The CMS trigger system is designed in two levels, with the first level including tracking information from the Outer Tracker for the first time. Throughout both systems, precision algorithms running on FPGAs or commodity hardware are pushed to lower latencies and higher rates than before. Precision calorimeter reconstruction with offline-style clustering and jet-finding in FPGAs, together with track reconstruction in Associative Memory and FPGAs, is used to combat pileup in the Trigger. The physics motivation and expected performance will be shown for key physics processes.
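
As a rough cross-check of the pileup scale implied by the quoted luminosity, the mean number of inelastic proton-proton interactions per bunch crossing can be estimated as follows; the inelastic cross section of about 80 mb and a colliding-bunch rate of about 30 MHz are typical assumed values, not figures taken from the abstract:

\[
\langle\mu\rangle \;=\; \frac{\mathcal{L}\,\sigma_{\mathrm{inel}}}{f_{\mathrm{bc}}}
\;\approx\; \frac{7.5\times10^{34}\,\mathrm{cm^{-2}\,s^{-1}} \;\times\; 8\times10^{-26}\,\mathrm{cm^{2}}}{3\times10^{7}\,\mathrm{s^{-1}}}
\;\approx\; 200 ,
\]

i.e. of order 200 simultaneous interactions per crossing, which is the regime the upgraded trigger and data acquisition systems must cope with.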
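
To give a flavour of what "offline-style clustering" of calorimeter towers means, the following is a minimal software sketch of seeded clustering over a tower grid. It is purely illustrative: the grid size, seed threshold and 3x3 summation window are assumed toy parameters, and the code models the behaviour of such an algorithm rather than the actual FPGA firmware.

#include <cstdio>
#include <vector>

// Illustrative seeded clustering over a calorimeter tower grid (eta x phi).
// Seed threshold and 3x3 summation window are assumed example parameters.
struct Cluster { int ieta, iphi; double et; };

std::vector<Cluster> cluster_towers(const std::vector<std::vector<double>>& tower_et,
                                    double seed_threshold) {
    std::vector<Cluster> clusters;
    const int neta = static_cast<int>(tower_et.size());
    const int nphi = neta ? static_cast<int>(tower_et[0].size()) : 0;
    for (int ie = 0; ie < neta; ++ie) {
        for (int ip = 0; ip < nphi; ++ip) {
            const double seed = tower_et[ie][ip];
            if (seed < seed_threshold) continue;
            // Require the seed to be a local maximum of its 3x3 neighbourhood
            // (phi wraps around; eta edges are simply truncated).
            bool is_max = true;
            double sum = 0.0;
            for (int de = -1; de <= 1 && is_max; ++de) {
                for (int dp = -1; dp <= 1; ++dp) {
                    const int je = ie + de;
                    const int jp = (ip + dp + nphi) % nphi;
                    if (je < 0 || je >= neta) continue;
                    const double et = tower_et[je][jp];
                    if (et > seed) { is_max = false; break; }
                    sum += et;
                }
            }
            if (is_max) clusters.push_back({ie, ip, sum});
        }
    }
    return clusters;
}

int main() {
    // Toy 8x8 grid with two localised energy deposits.
    std::vector<std::vector<double>> grid(8, std::vector<double>(8, 0.1));
    grid[2][3] = 5.0; grid[2][4] = 1.0;
    grid[6][6] = 3.0; grid[5][6] = 0.8;
    for (const auto& c : cluster_towers(grid, 2.0))
        std::printf("cluster at (eta=%d, phi=%d) with ET sum %.2f\n", c.ieta, c.iphi, c.et);
    return 0;
}

In the real systems the equivalent logic is unrolled across the tower grid in FPGA firmware so that all seeds are evaluated in parallel within the trigger latency budget.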
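
The Associative Memory approach to track reconstruction amounts to comparing coarse-resolution hit patterns ("roads") against a large precomputed pattern bank, with the hardware performing all comparisons in parallel. The sketch below mimics only the matching step in software, using a hash-set lookup over one superstrip ID per layer; the layer count and bank contents are invented toy values, not the real pattern bank.

#include <array>
#include <cstdio>
#include <unordered_set>
#include <vector>

// Toy model of Associative-Memory-style pattern matching: each "road" is a
// fixed-size tuple of coarse superstrip IDs, one per detector layer. Real AM
// chips compare every stored pattern against incoming hits in parallel; here
// a hash set over encoded roads stands in for that lookup.
constexpr int kLayers = 4;            // assumed toy layer count
using Road = std::array<int, kLayers>;

// Pack a road into a single integer key (assumes superstrip IDs < 1000).
long long encode(const Road& r) {
    long long key = 0;
    for (int ss : r) key = key * 1000 + ss;
    return key;
}

int main() {
    // Assumed toy pattern bank: roads produced offline from simulated tracks.
    std::vector<Road> bank = { {{12, 45, 77, 90}}, {{13, 46, 78, 91}}, {{40, 41, 42, 43}} };
    std::unordered_set<long long> bank_index;
    for (const auto& road : bank) bank_index.insert(encode(road));

    // Hits from one event, already reduced to one candidate superstrip per layer.
    std::vector<Road> candidates = { {{12, 45, 77, 90}}, {{12, 45, 77, 99}} };
    for (const auto& road : candidates) {
        const bool matched = bank_index.count(encode(road)) > 0;
        std::printf("road {%d,%d,%d,%d} %s\n", road[0], road[1], road[2], road[3],
                    matched ? "matches the pattern bank" : "has no match");
    }
    return 0;
}

Matched roads are then passed to a precision track fit, which in the upgraded trigger designs is carried out in FPGAs.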