Speaker
Alessandro De Salvo
(Universita e INFN, Roma I (IT))
Description
In the ATLAS experiment, the calibration of the precision tracking chambers of the muon detector is very demanding, since the rate of muon tracks required to obtain a complete calibration in homogeneous conditions and to feed prompt reconstruction with fresh constants is very high (several hundred Hz for 8-10 hour runs). The calculation of the calibration constants is highly CPU-intensive. In order to complete the cycle and have the final constants available within 24 hours, distributed resources at Tier-2 centers have been allocated.
The best place to obtain muon tracks suitable for detector calibration is the second-level trigger, where the pre-selection of data in a limited region, performed by the first-level trigger via the Region-of-Interest mechanism, allows all the hits from a single track to be selected within a small portion of the detector. Online data extraction allows calibration data to be collected without performing dedicated runs. Small event pseudo-fragments (about 0.5 kB), built at the muon level-1 rate (2-3 kHz at the beginning of the 2012 run, rising to 10-12 kHz at maximum LHC luminosity), are collected in parallel by a dedicated system, without affecting the main data taking, and sent to the Tier-0 computing center at CERN.
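As a rough illustration of the scale involved (not part of the system itself, and using only the fragment size and rates quoted above), the per-run bandwidth and data volume of the calibration stream can be estimated with a short calculation:

    # Back-of-envelope estimate of the calibration-stream rate and per-run volume,
    # using only the figures quoted in the text; all names are illustrative.
    FRAGMENT_SIZE_KB = 0.5  # pseudo-fragment size

    def stream_volume(rate_khz, run_hours):
        """Return (bandwidth in MB/s, total volume in GB) for one run."""
        bandwidth_mb_s = FRAGMENT_SIZE_KB * rate_khz          # kB x kHz = MB/s
        volume_gb = bandwidth_mb_s * run_hours * 3600 / 1024  # MB -> GB
        return bandwidth_mb_s, volume_gb

    print(stream_volume(3.0, 10.0))   # early 2012: ~1.5 MB/s, ~53 GB per run
    print(stream_volume(12.0, 10.0))  # max luminosity: ~6 MB/s, ~211 GB per run

Even at the highest rate the stream amounts to a few MB/s, consistent with the remark below that WAN bandwidth is not the limiting factor for this task.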
The computing resources needed to calculate the calibration constants are distributed across three calibration centers (Rome, Munich, Ann Arbor) for the tracking chambers and one (Napoli) for the trigger chambers. From Tier-0, files are sent directly to the calibration centers through the ATLAS Data Distribution Manager.
At the calibration centers, data is split per trigger tower and distributed to computing nodes for concurrent processing (~250 cores are currently used at each center). Processing is performed in two stages: the first reconstructs tracks and creates ntuples, the second calculates the constants. The calibration parameters are then stored in the local calibration database and replicated to the main conditions database at CERN, which makes them available for data analysis within 24 hours of data extraction.
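Purely as an illustration, the sketch below mirrors the per-tower splitting and two-stage processing just described; the helper functions are placeholders rather than the actual ATLAS calibration software, and the ~250-core farm is represented by a small local worker pool.

    # Minimal sketch, assuming hypothetical helpers; only the structure
    # (split per trigger tower, two processing stages) reflects the text.
    from multiprocessing import Pool

    def split_by_trigger_tower(run_files):
        # Placeholder: group input files by trigger tower (trivially, one per file).
        return [[f] for f in run_files]

    def first_stage(tower_files):
        # Stage 1: reconstruct tracks for one tower and write an ntuple (stubbed).
        return "ntuple_" + tower_files[0]

    def second_stage(ntuple):
        # Stage 2: derive calibration constants from the stage-1 ntuple (stubbed).
        return {"source": ntuple, "constants": [0.0]}

    def calibrate(run_files, n_workers=4):
        towers = split_by_trigger_tower(run_files)       # one work unit per tower
        with Pool(n_workers) as pool:
            ntuples = pool.map(first_stage, towers)      # concurrent stage 1
            constants = pool.map(second_stage, ntuples)  # concurrent stage 2
        return constants  # to be written to the local calibration database

    if __name__ == "__main__":
        print(calibrate(["run1_tower01.raw", "run1_tower02.raw"]))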
The architecture and performance of this system during the 2011-2012 data taking
will be presented.
This system will evolve in the near future to comply with the stringent requirements of the LHC and ATLAS upgrades. While for the WAN distribution the available bandwidth is already much larger than needed for this task, and the CPU power can be increased according to need, the online part will follow the evolution of the ATLAS TDAQ architecture. In particular, the current model foresees the merging of the level-2 and event-filter processes on the same nodes, allowing a simplification of the system and a more flexible and dynamic distribution of resources. Two architectures can comply with this model; their possible implementation will be discussed.
Primary author
Dr
Enrico Pasqualucci
(INFN Roma)