Summary
ATLAS is a multi-purpose experiment at the LHC at CERN,
which will start taking data in the second half of 2007.
Handling and processing the unprecedented data rates expected
at the LHC (e.g., at nominal operation, ATLAS will record about
10 PB of raw data per year) poses a huge challenge to the
computing infrastructure.
The ATLAS Computing Model foresees a multi-tier hierarchical
model to perform this task, with CERN hosting the Tier-0
centre and associated Tier-1, Tier-2, ... centres distributed
around the world.
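As a rough illustration (a minimal sketch in Python; the site and role
summaries are our own simplifying assumptions, not official ATLAS
Computing Model parameters), the hierarchy can be pictured as follows:

# Minimal sketch of the multi-tier hierarchy described above.
# Site and role descriptions are illustrative assumptions only.
computing_model = {
    "Tier-0": {"site": "CERN",             "role": "prompt reconstruction, archiving and export"},
    "Tier-1": {"site": "national centres", "role": "custodial storage and reprocessing (assumed)"},
    "Tier-2": {"site": "regional centres", "role": "simulation and user analysis (assumed)"},
}

for tier, info in computing_model.items():
    print(f"{tier} ({info['site']}): {info['role']}")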
The role of the Tier-0 centre is to perform prompt
reconstruction of the raw data coming from the trigger farm
(i.e., the so-called Event Filter or level-3 trigger), and
to distribute raw and reconstructed data to the associated
Tier-1 centres. At the Tier-0 centre, raw data will arrive
at a rate of 320 MB/s, data will have to be written to tape
at a rate of 440 MB/s and distributed to the Tier-1
centres at about 1000 MB/s. About 3 MSI2k of computing power
will be needed to accomplish this task.
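As a back-of-envelope check (our own illustrative arithmetic, not figures
from the tests themselves), these nominal rates translate into the
following approximate daily and yearly data volumes:

# Illustrative arithmetic only: convert the nominal Tier-0 rates quoted
# above into approximate daily and yearly data volumes.
MB = 1e6                    # bytes
SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 3.15e7   # approx. seconds in a calendar year

rates_mb_per_s = {          # MB/s, from the text above
    "raw data in":         320,
    "written to tape":     440,
    "exported to Tier-1s": 1000,
}

for name, rate in rates_mb_per_s.items():
    per_day_tb = rate * MB * SECONDS_PER_DAY / 1e12
    per_year_pb = rate * MB * SECONDS_PER_YEAR / 1e15
    print(f"{name:20s} {rate:5d} MB/s  ~{per_day_tb:5.1f} TB/day  ~{per_year_pb:5.1f} PB/year")

# 320 MB/s sustained is roughly 28 TB/day, i.e. of the order of 10 PB
# over a full year, consistent with the figure quoted above.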
In this paper we will report on the ATLAS Tier-0 scaling
tests carried out in Q4 of 2005, whose goals were to evaluate
the ATLAS Tier-0 work- and dataflow model, to test the
infrastructure at CERN (CPU resources, mass storage, internal
and outgoing bandwidths, etc.), and to exercise Tier-0 operations
at up to their nominal rates.