Speaker
Alessandro Di Girolamo
(CERN)
Description
The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run 2. The increased data rate, the computing demands of Monte Carlo simulation, and new approaches to ATLAS analysis called for a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core of the migration toward a more flexible computing model. Flexible use of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; data access mechanisms have been enhanced with remote access; and network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been introduced to better manage the data lifecycle. In this note, an overview of the operational experience with the new system and its evolution is presented.
Co-author
Yuji Yamazaki
(Kobe University (JP))