Improvements in the CMS Computing System for Run2

13 Apr 2015, 14:15
15m
B503

Oral presentation | Track 5: Computing activities and Computing models | Track 5 Session

Speaker

Ian Fisk (Fermi National Accelerator Lab. (US))

Description

Beginning in 2015, CMS will collect and produce data and simulation samples adding up to 10B new events a year. In order to realize the physics potential of the experiment, these events need to be stored, processed, and delivered to analysis users on a global scale. CMS has 150k processor cores and 80 PB of disk storage, and there is constant pressure to reduce the resources needed and to increase the efficiency of their usage. In this presentation we will give a comprehensive overview of the improvements made by CMS to the computing system for Run2 in the areas of data and simulation processing, data distribution, data management and data access. The system has been examined end to end, and we will discuss the improvements across the entire data and workflow systems: the CMS processing and analysis workflow tools, the development and deployment of the dynamic data placement infrastructure, and progress toward operating a global data federation. We will describe the concepts and approaches used to utilize the variety of CMS CPU resources, ranging from established Grid sites to HPC centers, Cloud resources and CMS' own High Level Trigger farm. We will explain the strategy for improving how effectively the storage is used, and we will present the commissioning, validation and challenge activities.
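As a concrete illustration of the data-federation access model mentioned above, the minimal sketch below opens a CMS file through the global XRootD redirector instead of reading it from local site storage. The redirector hostname cms-xrd-global.cern.ch is the publicly documented AAA entry point; the file path and the use of PyROOT here are illustrative assumptions, not part of the presentation itself.

```python
# Minimal sketch: read a CMS file via the global XRootD data federation (AAA)
# rather than from local site storage. Requires ROOT with PyROOT, network
# access and a valid grid proxy; the /store path below is a hypothetical example.
import ROOT

# The global redirector forwards the request to any federated site hosting the file.
url = "root://cms-xrd-global.cern.ch//store/mc/example/dataset/file.root"

f = ROOT.TFile.Open(url)          # open the file remotely over XRootD
if f and not f.IsZombie():
    print("Opened remote file, size (bytes):", f.GetSize())
    f.ls()                        # list the top-level objects (e.g. the Events tree)
    f.Close()
else:
    print("Could not open file via the federation (check proxy and path).")
```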

Primary authors

Ian Fisk (Fermi National Accelerator Lab. (US))
Dr Maria Girone (CERN)

Presentation materials