The ATLAS Trigger Core Configuration and Execution System in Light of the ATLAS Upgrade for LHC Run 2

Not scheduled
15m
OIST

1919-1 Tancha, Onna-son, Kunigami-gun Okinawa, Japan 904-0495
Poster presentation, Track 1: Online computing

Speaker

Lukas Alexander Heinrich (New York University (US))

Description

During the 2013/14 shutdown of the Large Hadron Collider (LHC), the ATLAS first level trigger (L1T) and the data acquisition system (DAQ) were substantially upgraded to cope with the increase in luminosity and collision multiplicity expected to be delivered by the LHC in 2015. To name a few of these changes, the L1T was extended on the calorimeter side (L1Calo) to better cope with pile-up and to apply better-tuned isolation criteria to electron, photon, and jet candidates. The central trigger (CT) was widened to analyze twice as many inputs, provide more trigger lines, and serve multiple sub-detectors in parallel during calibration periods. A new FPGA-based trigger (L1Topo), capable of analyzing event topologies at 40 MHz, was added to provide further input to the level 1 trigger decision. On the DAQ side the dataflow was completely remodeled, merging the two previously existing stages of the software-based high level trigger into one.

Partly because of these changes, and partly because of the new trigger paradigm of performing more full-event analysis, the high level trigger (HLT) execution framework and the trigger configuration system had to be upgraded, and tools and data content had to be adapted to the new ATLAS analysis model. In this paper we describe this work. The algorithm execution framework was changed to work seamlessly within the merged HLT, and the data access providers were adapted to the new dataflow. Event building, at which point all data are retrieved from the readout system, can now be scheduled dynamically as event feature extraction progresses, allowing a more flexible adjustment to dataflow constraints. The cost monitoring framework, which analyzes data access and CPU consumption even prior to data taking, was extended to work within the merged system, and several other improvements followed. The HLT execution was moved to a memory-saving multi-process application, in which many event processors are forked after the system configuration. They thus share common data such as geometry and conditions information, leading to a dramatic reduction in the overall memory consumption; compared to Run 1, many more event processors can run on each machine.

Upon request from the ATLAS physics groups a new kind of data stream was implemented, in which only a small subset of the reconstructed trigger objects, and no detector data, is stored. Since the trigger reconstruction in Run 2 nearly matches the offline reconstruction in resolution, these data make search analyses that require high statistics feasible. As a consequence of these changes, the new ATLAS data model, and the new dual-environment analysis approach, the tools provided for trigger-aware analysis had to be completely restructured. In particular, the reduction and specialization of data content in derived data sets posed a challenge for the trigger, and a new trigger data slimming procedure was devised.

The database-driven trigger configuration system, which describes the physics selection implemented at L1 and in the HLT, needs to reflect all changes in the L1 and HLT systems. It now incorporates the configuration of the new L1Topo trigger, extends the configuration capabilities of L1Calo and the CT, and describes the merged HLT. A new system for automatically adjusting the trigger prescale factors to the falling luminosity during a run was devised and implemented. We also present measurements of the trigger execution on first data with the new ATLAS trigger algorithms and selections.
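To illustrate the memory-saving multi-process execution mentioned above, the following minimal Python sketch shows the general fork-after-configuration technique: expensive, read-only data (here a placeholder dictionary standing in for geometry and conditions information) is loaded once in the parent process, and the forked workers then share those pages copy-on-write. All names and numbers are illustrative assumptions; this is not the actual ATLAS HLT code.

    # Minimal sketch of fork-after-configuration memory sharing (illustrative only).
    import os

    def configure():
        # Stand-in for the expensive configuration step (geometry, conditions, ...).
        return {"geometry": list(range(1_000_000)), "conditions": {"run": 2}}

    def process_events(worker_id, shared_data, n_events):
        # Workers only read the shared data, so the physical pages stay shared.
        for _ in range(n_events):
            _ = shared_data["conditions"]["run"]
        print(f"worker {worker_id} processed {n_events} events")

    if __name__ == "__main__":
        shared = configure()          # done once, before forking
        pids = []
        for w in range(4):
            pid = os.fork()
            if pid == 0:              # child: becomes an event processor
                process_events(w, shared, n_events=100)
                os._exit(0)
            pids.append(pid)          # parent keeps track of workers
        for pid in pids:
            os.waitpid(pid, 0)        # parent waits for all workers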
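The automatic prescale adjustment rests on a simple proportionality: the raw rate of a trigger chain scales roughly with the instantaneous luminosity, so keeping the recorded rate approximately constant requires a prescale that shrinks as the luminosity falls. The sketch below uses hypothetical reference values and is not the ATLAS implementation.

    # Sketch of luminosity-driven prescale adjustment (hypothetical numbers).
    def prescale_for(lumi, lumi_ref, prescale_ref):
        """Scale the reference prescale linearly with luminosity.

        Recorded rate ~ raw rate / prescale ~ lumi / prescale, so a prescale
        proportional to the luminosity keeps the recorded rate roughly flat.
        """
        return max(1.0, prescale_ref * lumi / lumi_ref)

    # Example: a chain prescaled by 20 at a start-of-fill luminosity.
    lumi_ref = 5.0e33        # cm^-2 s^-1, hypothetical reference value
    prescale_ref = 20.0
    for lumi in (5.0e33, 4.0e33, 3.0e33, 2.0e33):
        p = prescale_for(lumi, lumi_ref, prescale_ref)
        print(f"L = {lumi:.1e}  ->  prescale = {p:.1f}")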

Primary author

Lukas Alexander Heinrich (New York University (US))

Presentation materials