4–8 Aug 2015 (America/Detroit timezone)

The upgraded ATLAS Trigger and DAQ system for the second LHC run

5 Aug 2015, 14:00
23m
Room D (Michigan League)

LHC Run-2 Detector Performance

Speaker

Kevin Black (Boston University)

Description

After its first long shutdown, the LHC will provide pp collisions at increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The trigger system consists of a hardware Level-1 (L1) trigger and a software-based high-level trigger (HLT), which together reduce the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. Due to the increased LHC performance, the ATLAS trigger has to cope with roughly five times higher trigger rates. To maintain high efficiency in selecting the relevant physics processes, the trigger system has been improved both at L1, with upgraded calorimeter and muon selections and a new topological processor, and in the HLT, with finely optimized algorithms able to identify leptons, hadrons and global event quantities such as missing transverse energy.

The Data Flow (DF) element of the TDAQ system is a distributed hardware and software system responsible for buffering and transporting event data from the Readout system to the HLT and to event storage. The DF has been reshaped to profit from technological progress and to maximize the flexibility and efficiency of the data-selection process. The updated DF is radically different from the previous implementation in both architecture and expected performance. The pre-existing two-level software filtering, consisting of L2 and the Event Filter, and the Event Building have been merged into a single process that performs incremental data collection and analysis. This design has many advantages, among them a radical simplification of the architecture, a flexible and automatically balanced distribution of the computing resources, and the sharing of code and services across nodes.

The network that connects the HLT processing nodes to the Readout and storage systems has also evolved, with higher aggregate throughput and port density and enhanced fault tolerance and redundancy, to provide the connectivity required by the new architecture. We will discuss the design choices, the strategies employed to minimize the data-collection and filtering latency, the results of scaling tests performed during the commissioning phase, and the operational performance after the first months of data taking.
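As a rough illustration of the merged filtering described above, the sketch below shows one way a single HLT process could perform incremental data collection with early rejection, which is how the overall reduction from 40 MHz at the bunch crossing to a few hundred Hz on disk (a rejection factor of order 10^5) can be achieved without building every event. This is a hypothetical toy, not the ATLAS TDAQ software: the names (MergedHLTNode, ReadoutSystem, EventBuffer) and the selection steps are illustrative assumptions; only the overall pattern, fetching region-of-interest fragments first, rejecting cheaply, and building the full event only for surviving candidates, follows the abstract.

```python
# Minimal, hypothetical sketch of a merged HLT process performing incremental
# data collection: fragments are requested from the Readout system only as the
# selection algorithms need them, and the full event is built only for events
# that survive the fast rejection steps. All names are illustrative.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class L1Result:
    event_id: int
    regions_of_interest: List[str]       # e.g. ["em_barrel_12", "mu_endcap_3"]


@dataclass
class EventBuffer:
    event_id: int
    fragments: Dict[str, bytes] = field(default_factory=dict)


class ReadoutSystem:
    """Stand-in for the Readout system: serves detector fragments on demand."""

    def fetch(self, event_id: int, region: str) -> bytes:
        return f"data({event_id},{region})".encode()


class MergedHLTNode:
    """Single process combining the former L2, Event Building and Event Filter steps."""

    def __init__(self, readout: ReadoutSystem,
                 fast_steps: List[Callable[[EventBuffer], bool]],
                 precise_step: Callable[[EventBuffer], bool]):
        self.readout = readout
        self.fast_steps = fast_steps      # cheap algorithms on partial data
        self.precise_step = precise_step  # full selection on the built event

    def process(self, l1: L1Result) -> bool:
        buf = EventBuffer(l1.event_id)

        # Incremental collection: pull only the region-of-interest fragments first.
        for region in l1.regions_of_interest:
            buf.fragments[region] = self.readout.fetch(l1.event_id, region)

        # Early rejection on partial data avoids building most events.
        for step in self.fast_steps:
            if not step(buf):
                return False              # rejected: no full event build needed

        # Build the full event only for surviving candidates, then run the
        # precise selection that decides whether the event is recorded.
        buf.fragments["full_event"] = self.readout.fetch(l1.event_id, "all")
        return self.precise_step(buf)


if __name__ == "__main__":
    node = MergedHLTNode(
        ReadoutSystem(),
        fast_steps=[lambda buf: len(buf.fragments) > 0],    # placeholder cut
        precise_step=lambda buf: buf.event_id % 100 == 0,   # placeholder ~1% accept
    )
    decision = node.process(L1Result(event_id=100, regions_of_interest=["em_barrel_12"]))
    print("recorded" if decision else "rejected")
```

In such a single-process design, the computing resources are shared automatically between the fast and precise steps on each node, which is one of the simplifications the abstract attributes to merging L2, Event Building and the Event Filter.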
Oral or Poster Presentation: Oral

Primary author

Kevin Black (Boston University)
