Speaker
Ondrej Penc
(Acad. of Sciences of the Czech Rep. (CZ))
Description
The performance of the ATLAS Inner Detector (ID) Trigger algorithms
being developed for running on the ATLAS High Level Trigger (HLT)
processor farm during Run 2 of the LHC is presented. During the
2013-14 LHC long shutdown, modifications are being carried out to the
LHC accelerator to increase both the beam energy and the luminosity.
These modifications will pose significant challenges for the ID
Trigger algorithms, both in terms of execution time and physics
performance. To meet these challenges, the ATLAS HLT software is
being restructured to run as a more flexible single-stage HLT,
instead of the two separate stages (Level 2 and Event Filter) used in
Run 1. This will reduce the overall data volume that needs to be
requested by the HLT system, since data will no longer need to be
requested separately for each of the two processing stages.
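
As a rough illustration of why a single stage reduces data requests
(this is not ATLAS code; every name below is hypothetical), a merged
HLT can cache the data retrieved for each region of interest, so that
a later processing step reuses what an earlier step has already
fetched, whereas two independent stages would each issue their own
readout request:

    #include <map>
    #include <vector>
    #include <cstdio>

    // Hypothetical stand-ins for real readout structures.
    using RoiId   = int;
    using RawData = std::vector<float>;

    // Pretend network fetch from the detector readout system.
    RawData fetchFromReadout(RoiId roi) {
        std::printf("readout request for RoI %d\n", roi);
        return RawData(128, 0.0f);  // dummy payload
    }

    // Single-stage HLT: one cache shared by all algorithm steps,
    // so each region of interest is requested at most once.
    class EventDataCache {
    public:
        const RawData& get(RoiId roi) {
            auto it = cache_.find(roi);
            if (it == cache_.end())
                it = cache_.emplace(roi, fetchFromReadout(roi)).first;
            return it->second;
        }
    private:
        std::map<RoiId, RawData> cache_;
    };

    int main() {
        EventDataCache cache;
        cache.get(42);  // fast tracking step: triggers the readout request
        cache.get(42);  // precision tracking step: served from the cache
    }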
Development of the ID Trigger algorithms for Run 2, currently
expected to be ready for detector commissioning near the end of 2014,
is progressing well, and the current efforts towards optimising the
operational performance of these algorithms are discussed. The new
tracking strategy employed for Run 2 will use a Fast Track Finder
(FTF) algorithm to seed subsequent precision tracking, and will
result in improved track parameter resolution and faster execution
times than achieved during Run 1. This will be achieved without
compromising the robustness of the algorithms with respect to the
expected increase in the multiplicity of separate proton-proton
interactions (pileup) per LHC bunch crossing.
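
The abstract gives no implementation details of the seeding; purely
as a sketch of the pattern it describes (all types and functions
below are invented), a fast first pass produces coarse candidates
that an expensive precision fit then refines, so the costly step runs
only where the cheap step has found something:

    #include <vector>

    // Hypothetical track representation: parameters plus a quality flag.
    struct Track {
        double pt, eta, phi;
        bool   precise = false;
    };

    // Fast Track Finder stand-in: cheap pattern recognition over hits
    // (here reduced to returning a fixed set of coarse candidates).
    std::vector<Track> fastTrackFinder() {
        return { {5.0, 0.1, 1.2}, {12.0, -0.8, 2.9} };
    }

    // Precision tracking stand-in: an expensive fit applied only to
    // the candidates the fast stage has already found.
    Track precisionFit(const Track& seed) {
        Track refined = seed;
        refined.precise = true;  // pretend improved parameter resolution
        return refined;
    }

    int main() {
        std::vector<Track> tracks;
        for (const Track& seed : fastTrackFinder())
            tracks.push_back(precisionFit(seed));  // FTF seeds precision step
    }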
The performance of the new algorithms has been evaluated using an
extensive suite of profiling tools to identify those aspects where
code optimisation would be most beneficial. The methods used to
extract accurate timing information for each execution step are
described, as well as the analysis of per-call profiling data and the
sampling of hardware counters to study the efficiency of CPU
utilisation. In addition, a summary of the effective optimisation
steps that have been applied to the new algorithms is presented. The
profiling infrastructure, constructed to provide prompt feedback
during optimisation, is described, including the methods used to
monitor the relative performance improvements as the code evolves.
The aim is to understand how the profiling and optimisation testing
methods might be extended to other areas of ATLAS software
development.
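
The specific profiling tools are not named in the abstract; as a
minimal sketch of the kind of per-step timing measurement described,
using only standard C++ facilities (the step names and workloads are
placeholders), one might wrap each execution step with a monotonic
clock:

    #include <chrono>
    #include <cstdio>

    // Time a single execution step with a steady (monotonic) clock,
    // as one might when accumulating per-call timing statistics.
    template <typename Step>
    double timeStepMs(const char* name, Step&& step) {
        using clock = std::chrono::steady_clock;
        const auto t0 = clock::now();
        step();
        const auto t1 = clock::now();
        const double ms =
            std::chrono::duration<double, std::milli>(t1 - t0).count();
        std::printf("%-20s %8.3f ms\n", name, ms);
        return ms;
    }

    int main() {
        // Placeholder workloads standing in for trigger algorithm steps.
        timeStepMs("seed finding",  []{ volatile long s = 0;
                                        for (long i = 0; i < 1000000; ++i) s += i; });
        timeStepMs("precision fit", []{ volatile long s = 0;
                                        for (long i = 0; i < 2000000; ++i) s += i; });
    }

In practice, per-call profiles and hardware-counter sampling of the
kind mentioned above are typically gathered with dedicated tools (for
example, Linux perf samples CPU performance counters) rather than
hand-written timers.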
The increased use of parallelism for HLT algorithm processing has
also been explored. Possible new opportunities arising from explicit
code vectorisation, and from the potential inclusion of co-processors
to accelerate key sections of the online tracking algorithms, are
discussed.
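
Which kernels are vectorised is not specified; as a generic
illustration of vectorisation-friendly code (the function below is
invented), laying data out in contiguous arrays and writing
dependency-free loops allows the compiler to emit SIMD instructions,
e.g. when built with -O3 and an appropriate -march flag:

    #include <vector>
    #include <cstddef>

    // A parameter update written as a simple, dependency-free loop
    // over contiguous arrays: the form compilers can turn into SIMD
    // instructions.
    void scaleAndAdd(const std::vector<float>& x,
                     const std::vector<float>& y,
                     std::vector<float>& out, float a) {
        const std::size_t n = out.size();
        for (std::size_t i = 0; i < n; ++i)
            out[i] = a * x[i] + y[i];  // one fused multiply-add per element
    }

    int main() {
        std::vector<float> x(1024, 1.0f), y(1024, 2.0f), out(1024);
        scaleAndAdd(x, y, out, 0.5f);
    }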
Primary author
Dr
Fabrizio Salvatore
(University of Sussex (GB))