Nicoletta Garelli (SLAC)
The ATLAS experiment, which records the results of LHC proton-proton collisions, is upgrading its Trigger and Data Acquisition (TDAQ) system during the current first LHC long shutdown. The purpose of this upgrade is to add robustness and flexibility to the selection and conveyance of the physics data, to simplify the maintenance of the infrastructure, to exploit new technologies and, overall, to make ATLAS data-taking capable of dealing with increasing event rates.

The TDAQ system operated to date is organised in a three-level selection scheme, comprising a hardware-based first-level trigger and second- and third-level triggers implemented as separate software systems distributed over commodity hardware nodes. The second-level trigger operates on limited regions of the detector, the so-called Regions-of-Interest (RoI), while the third-level trigger deals with complete events. Although this architecture was successfully operated well beyond its original design goals, the accumulated experience stimulated interest in exploring possible evolutions.

With higher luminosities, the required number and complexity of Level-1 triggers will increase in order to satisfy the physics goals of ATLAS, while keeping the total Level-1 rate at or below 100 kHz. The Central Trigger Processor will be upgraded to increase the number of manageable inputs and to accommodate additional hardware for improved performance, and a new Topological Processor will be included in the trigger slice. The latter will apply selections based either on geometrical information, such as angles between jets or leptons, or on more complex observables, to further optimise the selection at this trigger stage. Concerning the high-level trigger (HLT), the main step in the current plan is to deploy a single homogeneous system, which merges the execution of the second and third trigger levels, still logically separated, on a single hardware node.
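As an illustration of the kind of geometrical selection a Topological Processor could apply, the following minimal Python sketch accepts events whose two leading jets are nearly back-to-back in azimuth. The function names and the threshold value are hypothetical and chosen only for illustration; they do not reproduce the actual ATLAS firmware logic.

```python
import math

def delta_phi(phi1, phi2):
    """Wrap the azimuthal-angle difference into the interval [-pi, pi)."""
    return (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi

def topo_back_to_back(jet1_phi, jet2_phi, min_dphi=2.7):
    """Illustrative topological cut (hypothetical threshold): accept an
    event if its two leading jets are separated by at least min_dphi
    radians in azimuth, i.e. nearly back-to-back."""
    return abs(delta_phi(jet1_phi, jet2_phi)) >= min_dphi

# Two jets separated by ~pi in phi pass; nearby jets fail.
print(topo_back_to_back(0.1, 3.1))  # True
print(topo_back_to_back(0.0, 0.5))  # False
```

Implementing such cuts in dedicated hardware allows them to run at the full Level-1 input rate, which is the motivation for adding the Topological Processor to the trigger slice.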
This design has many advantages, among them the radical simplification of the architecture, a flexible and automatically balanced distribution of the computing resources, and the sharing of code and services among nodes. Furthermore, the full treatment of the HLT selection on a single node enables both further optimisations, e.g. the caching of event fragments already collected for RoI-based processing, and new approaches that better balance the selection steps before and after event building. Prototyping efforts have already demonstrated many of these benefits. In this paper, we report on the design and the development status of the upgraded trigger system, with particular attention to the ongoing tests aimed at verifying the required performance and identifying possible limitations.
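The fragment-caching optimisation enabled by running the full HLT selection on a single node can be sketched as follows. The `FragmentCache` class and its interface are hypothetical illustrations of the idea, not the actual ATLAS dataflow code: fragments already fetched for RoI-based processing are reused during full event building instead of being requested again over the network.

```python
class FragmentCache:
    """Sketch of node-local event-fragment caching (hypothetical
    interface): a fragment fetched once for an event is served from
    the cache on any later request for the same event."""

    def __init__(self, fetch):
        self._fetch = fetch   # callable (event_id, rob_id) -> bytes
        self._cache = {}      # (event_id, rob_id) -> fragment
        self.requests = 0     # network requests actually issued

    def get(self, event_id, rob_id):
        key = (event_id, rob_id)
        if key not in self._cache:
            self._cache[key] = self._fetch(event_id, rob_id)
            self.requests += 1
        return self._cache[key]

    def build_event(self, event_id, all_rob_ids):
        """Full event building: only fragments not already cached for
        this event trigger a network request."""
        return {r: self.get(event_id, r) for r in all_rob_ids}


# Usage: RoI processing fetches fragments 1 and 2; the subsequent full
# event build over fragments 1-4 issues only two additional requests.
cache = FragmentCache(lambda event_id, rob_id: b"fragment")
cache.get(42, 1)
cache.get(42, 2)
event = cache.build_event(42, [1, 2, 3, 4])
print(cache.requests)  # 4, not 6
```

On a system where the second and third trigger levels run on separate nodes, the same fragments would have to be transferred twice; merging the levels on one node makes this reuse possible by construction.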