Oct 10 – 14, 2016
San Francisco Marriott Marquis
America/Los_Angeles timezone

Managing the CMS Data and Monte Carlo Processing during LHC Run 2

Oct 10, 2016, 3:15 PM
GG C2 (San Francisco Marriott Marquis)



Oral, Track 3: Distributed Computing


The CMS Computing and Offline group has introduced a number of enhancements into the main software packages and tools used for centrally managed processing and data transfers in order to cope with the challenges expected during LHC Run 2. In this presentation we highlight the improvements that allow CMS to deal with the increased trigger output rate and the increased collision pileup in the context of the evolution in computing technology. The overall system aims for higher usage efficiency through increased automation and enhanced operational flexibility in terms of dynamic data transfers and workflow handling. The tight coupling of workflow classes to types of sites has been drastically reduced. Reliable and high-performing networking between most of the computing sites and the successful deployment of a data federation allow the execution of workflows using remote data access. Another step towards flexibility has been the introduction of one large global HTCondor pool for all types of processing workflows and analysis jobs, implementing the 'late binding' principle. In addition to classical Grid resources, opportunistic resources as well as cloud resources have been integrated into that pool, which gives reach to more than 200k CPU cores.
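Under the late-binding principle mentioned above, a submitted job does not name a concrete execution site; a pilot (glidein) first claims a worker-node slot, and only then pulls a matching job from the global pool. A minimal sketch of such an HTCondor submit description is shown below; the wrapper script, memory figure, site names, and the `+DESIRED_Sites` attribute are illustrative assumptions, not the actual CMS production configuration.

```
# Illustrative HTCondor submit description (sketch, not CMS production config)
universe       = vanilla
executable     = cmsRun_wrapper.sh     # hypothetical wrapper script
arguments      = config.py
request_cpus   = 1
request_memory = 2000

# The job states requirements instead of binding to a site up front;
# the negotiator matches it to whichever pilot slot satisfies them.
requirements   = (TARGET.Arch == "X86_64")

# Optional site preference passed to the pilot infrastructure
# (attribute name and site list are assumptions for illustration).
+DESIRED_Sites = "T1_DE_KIT,T2_DE_DESY"

queue
```

Because the job-to-resource match happens only when a pilot slot is available, the same pool can transparently mix Grid, opportunistic, and cloud resources.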

Primary Keyword (Mandatory): Distributed workload management
Secondary Keyword (Optional): Distributed data handling

Primary author

Christoph Wissing (Deutsches Elektronen-Synchrotron (DE))

Presentation materials