Design, Results, Evolution and Status of the ATLAS Simulation in Point1 Project

Not scheduled
15m
OIST

1919-1 Tancha, Onna-son, Kunigami-gun Okinawa, Japan 904-0495
Poster presentation
Track 7: Clouds and virtualization

Speaker

Franco Brasolin (Universita e INFN (IT))

Description

During the LHC Long Shutdown 1 (LS1) period, which started in 2013, the Simulation at Point1 (Sim@P1) project has made opportunistic use of the trigger and data acquisition (TDAQ) farm of the ATLAS experiment. The farm provides more than 1500 compute nodes, which are particularly suitable for running event generation and Monte Carlo production jobs that are mostly CPU-bound rather than I/O-bound. It can host up to 2500 virtual machines (VMs) with 8 CPU cores each, for a total of up to 20000 jobs running in parallel. This contribution gives a thorough review of the design, results and evolution of the Sim@P1 project, which operates a large-scale OpenStack-based virtualized platform deployed on top of the ATLAS TDAQ farm computing resources. During LS1, Sim@P1 was one of the most productive Grid sites: it delivered more than 50 million CPU-hours and generated more than 1.7 billion Monte Carlo events for various analysis communities within the ATLAS collaboration. The key design aspects are presented: the virtualization platform used by Sim@P1 avoids interference with TDAQ operations and, more importantly, guarantees the security and usability of the ATLAS private network. The cloud infrastructure decouples the required support at the infrastructural level (hardware, virtualization layer) from that at the logical level (Grid site support and job lifecycle handling). In this note we focus in particular on the operational aspects of such a large system for the upcoming LHC Run 2 period: customized, simple, reliable and efficient tools are needed to switch quickly between Sim@P1 and TDAQ modes, so that the TDAQ resources can be exploited whenever they are not used for data acquisition, even for short periods. We also describe the evolution of the central OpenStack infrastructure as it was upgraded from the Folsom to the Icehouse release, and the scalability issues addressed in the process. The success of the Sim@P1 project is due to the continuous combined efforts of the ATLAS TDAQ SysAdmin and NetAdmin teams, CERN IT and the RHIC & ATLAS Computing Facility (RACF) at BNL.
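As a purely illustrative sketch of the mode-switching idea described above (the project's actual tooling is not included in this abstract), the snippet below shows how a fleet of worker VMs could be brought up or torn down through the legacy python-novaclient API of the Icehouse era. All specifics here are assumptions: the credentials read from OS_* environment variables, the image and flavor names, the "simp1-" VM naming scheme and the VM count are hypothetical.

    #!/usr/bin/env python
    # Illustrative sketch only: starts or stops Sim@P1-style worker VMs
    # via the legacy python-novaclient API. Image/flavor names and the
    # "simp1-" naming scheme are hypothetical, not the project's tooling.
    import os
    from novaclient import client

    # Authenticate against the (hypothetical) OpenStack endpoint.
    nova = client.Client(
        "2",
        os.environ["OS_USERNAME"],
        os.environ["OS_PASSWORD"],
        os.environ["OS_TENANT_NAME"],
        os.environ["OS_AUTH_URL"],
    )

    def enter_simp1_mode(count, image_name="simp1-worker", flavor_name="m1.xlarge"):
        """Boot 'count' worker VMs so the farm can serve Grid jobs."""
        image = nova.images.find(name=image_name)
        flavor = nova.flavors.find(name=flavor_name)
        for i in range(count):
            nova.servers.create("simp1-%04d" % i, image, flavor)

    def enter_tdaq_mode():
        """Delete every Sim@P1 VM, returning the nodes to data taking."""
        for server in nova.servers.list():
            if server.name.startswith("simp1-"):
                server.delete()

In practice such a switch would also need to drain running jobs gracefully and coordinate with TDAQ operations before reclaiming the nodes, which this sketch omits.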

Primary authors

Alexandre Zaytsev (Institute for High Energy Physics (RU))
Franco Brasolin (Universita e INFN (IT))

Co-authors

Alessandro Di Girolamo (CERN)
Alexey Sedov (Universitat Autònoma de Barcelona)
Christopher Jon Lee (University of Johannesburg (ZA))
Cristian Contescu (Polytechnic University of Bucharest (RO))
Daniel Fazio (CERN)
Diana Scannicchio (University of California Irvine (US))
Fuqiang Wang (Lawrence Berkeley National Lab. (US))
Matthew Shaun Twomey (University of Washington (US))
Mikel Eukeni Pozo Astigarraga (CERN)
Sergio Ballestrero (University of Johannesburg (ZA))
Dr Silvia-Maria Fressard-Batraneanu (CERN)

Presentation materials