Franco Brasolin (Università e INFN (IT))
With the LHC collider at CERN currently in the period of Long Shutdown 1 (LS1), there is a remarkable opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is particularly suitable for running Monte Carlo production jobs, which are mostly CPU-bound rather than I/O-bound. This contribution gives a thorough review of all stages of the "Sim@P1" project, dedicated to the design and deployment of a virtualized platform running on the ATLAS TDAQ computing resources and to using it to run large groups of CernVM-based virtual machines operating as a single "CERN-P1" Grid site. The platform has been designed to avoid interference with TDAQ usage of the farm and to guarantee the security and usability of the ATLAS private network; OpenStack has been chosen to provide the cloud management layer. The approaches to organizing support for sustained operation of the system, on both the infrastructural (hardware, virtualization platform) and logical (site support and job execution) levels, are also discussed. The project is the result of a combined effort of the ATLAS TDAQ SysAdmin and NetAdmin teams, the CERN IT-ES Department, and the RHIC & ATLAS Computing Facility at BNL.
Alessandro Di Girolamo (CERN), Cristian Contescu (Polytechnic University of Bucharest (RO)), Diana Scannicchio (University of California Irvine (US)), Mikel Eukeni Pozo Astigarraga (University of California Irvine (US)), Sergio Ballestrero (University of Johannesburg (ZA)), Dr Silvia Maria Batraneanu (University of California Irvine (US))