Distributed analysis in ATLAS

Apr 13, 2015, 5:45 PM
B503



Oral presentation — Track 5: Computing Activities and Computing Models


Federica Legger (Ludwig-Maximilians-Univ. Muenchen (DE))


The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. Analysing such a volume of data for a worldwide physics community is a challenging task. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations: about half a million user jobs run daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first LHC run and the subsequent shutdown period has been high, thanks to continuous automatic validation of the distributed analysis sites and to the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model underwent several changes to improve the analysis workflows, including a redesign of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We report on the impact of these changes on the DA infrastructure, describe the new DA components, and present first measurements of DA performance in the first months of the second LHC run, which started in early 2015.

Primary authors

Alastair Dewhurst (STFC - Rutherford Appleton Lab. (GB))


Federica Legger (Ludwig-Maximilians-Univ. Muenchen (DE))
