9–13 Jul 2018
Sofia, Bulgaria

Challenges of processing growing volumes of data for the CMS experiment during the LHC Run2

11 Jul 2018, 12:30
15m
Hall 7 (National Palace of Culture)

Presentation · Track 3 – Distributed computing

Speaker

Matteo Cremonesi (Fermi National Accelerator Lab. (US))

Description

In recent years the LHC has delivered record-breaking luminosity to the CMS experiment, making it a challenge to handle all of the demands for efficient data and Monte Carlo processing. In this presentation we review the major issues in managing such requests and how we were able to address them. Our main strategy relies on increased automation and on dynamic workload and data distribution. We maximize the sharing of CPU resources using an HTCondor-based global pool, which was recently expanded with dedicated Tier-0 resources. To avoid underutilization of Tier-2 sites, we rely heavily on remote data access (AAA). Multicore resizable jobs reduce the load on the workflow management tools and improve efficiency across all types of resources. A wide range of opportunistic resources, such as the CMS trigger farm, supercomputing centers and cloud resources, has been integrated into the global pool, which now reaches more than 250k CPU cores.
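
As a rough illustration of the submission model described above, the sketch below shows how a multicore job request could be sent to an HTCondor pool using the HTCondor Python bindings. It is a minimal example, not CMS production code: the executable name, resource requests and queue count are hypothetical placeholders, and the CMS-specific pilot, resizing and global-pool machinery is omitted.

    # Minimal sketch (assumed setup, not from the talk): submitting multicore
    # jobs to an HTCondor pool with the htcondor Python bindings.
    import htcondor

    sub = htcondor.Submit({
        "executable": "run_payload.sh",   # hypothetical wrapper around the processing step
        "arguments": "$(ProcId)",
        "request_cpus": "8",              # ask for a multicore slot
        "request_memory": "16000",        # MB, placeholder value
        "output": "job_$(ProcId).out",
        "error": "job_$(ProcId).err",
        "log": "jobs.log",
    })

    schedd = htcondor.Schedd()            # talk to the local schedd
    with schedd.transaction() as txn:     # classic (8.x-era) submission API
        sub.queue(txn, count=10)          # queue ten identical multicore jobs

In the actual system such requests are brokered through the HTCondor-based global pool rather than a single local schedd, so this snippet only conveys the shape of a multicore resource request.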

Primary authors

Dmytro Kovalskyi (Massachusetts Inst. of Technology (US))
Christoph Paus (Massachusetts Inst. of Technology (US))
Christoph Wissing (Deutsches Elektronen-Synchrotron (DE))
Matteo Cremonesi (Fermi National Accelerator Lab. (US))
