13–17 Feb 2006
Tata Institute of Fundamental Research

The CMS Computing Model

15 Feb 2006, 17:40
20m
AG 80 (Tata Institute of Fundamental Research)
Homi Bhabha Road, Mumbai 400005, India
Oral presentation
Distributed Event Production and Processing

Speaker

Dr Jose Hernandez (CIEMAT)

Description

(For the CMS Collaboration) Since CHEP04 in Interlaken, the CMS experiment has developed a baseline Computing Model and a Technical Design for the computing system it expects to need in the first years of LHC running. Particular attention was paid to a data model with heavy streaming at the RAW-data level, based on trigger-driven physics selections; we expect this to allow maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model comprises a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres. The workflows involving these centres have been identified, along with baseline architectures for data management. This presentation describes the computing and data models, gives an overview of the technical design, and reports the current status of the CMS computing system.
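
As an illustration of the streaming idea, here is a minimal sketch (the trigger-path names, the TRIGGER_TO_DATASET mapping, and the event structure are hypothetical, not CMS software): RAW events are routed into primary-dataset streams according to which trigger selections fired, with overlaps allowed.

```python
# Illustrative sketch only: trigger-based streaming of RAW events into
# primary datasets. All names here are hypothetical, not CMS software.
from collections import defaultdict

# Hypothetical mapping from trigger-path prefixes to primary datasets.
TRIGGER_TO_DATASET = {
    "HLT_Mu": "Muons",
    "HLT_Ele": "Electrons",
    "HLT_Jet": "JetMET",
}

def stream_events(events):
    """Group events into primary-dataset streams by fired trigger paths.

    An event that fires triggers from several selections is written to
    every matching stream; accepting this overlap is what buys the
    flexibility in where each stream is stored and processed.
    """
    streams = defaultdict(list)
    for event in events:
        for prefix, dataset in TRIGGER_TO_DATASET.items():
            if any(t.startswith(prefix) for t in event["triggers"]):
                streams[dataset].append(event)
    return streams

if __name__ == "__main__":
    events = [
        {"id": 1, "triggers": ["HLT_Mu9"]},
        {"id": 2, "triggers": ["HLT_Ele15", "HLT_Jet30"]},
    ]
    for dataset, evs in stream_events(events).items():
        print(dataset, [e["id"] for e in evs])
```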

Primary author

Dr Peter Elmer (PRINCETON UNIVERSITY)
