Speaker
Kenneth Bloom
(University of Nebraska-Lincoln)
Description
The CMS computing model relies heavily on the use of "Tier-2"
computing centers. At LHC startup, the typical Tier-2 center will have
1 MSpecInt2K of CPU resources, 200 TB of disk for data storage,
and a WAN connection of at least 1 Gbit/s. These centers will be the
primary sites for the production of large-scale simulation samples
and for the hosting of experiment data for user analysis --
an interesting mix of experiment-controlled and user-controlled tasks.
As a result, a wide range of services must be deployed
and commissioned at these centers, which are responsible for tasks
such as dataset transfer and management, hosting of jobs
submitted through Grid interfaces, and several varieties of monitoring.
We discuss the development of the seven CMS Tier-2 computing centers
in the United States, with a focus on recent operational performance
and preparations for the start of data-taking at the end of 2007.
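
As a rough illustration of the scale implied by the resource figures above, the short Python sketch below estimates how long it would take to repopulate the full 200 TB of Tier-2 disk over the 1 Gbit/s WAN link. The 50% sustained-utilisation figure is an assumption made for the example, not a number from the abstract.

```python
# Back-of-envelope check of the Tier-2 resource figures quoted above.
# Assumption (not from the abstract): bulk dataset transfers sustain
# about 50% of the link's nominal 1 Gbit/s rating on average.

DISK_TB = 200             # disk for data storage, from the abstract
WAN_GBPS = 1.0            # nominal WAN bandwidth, from the abstract
SUSTAINED_FRACTION = 0.5  # assumed average utilisation for transfers

disk_bits = DISK_TB * 1e12 * 8                    # 200 TB in bits
rate_bps = WAN_GBPS * 1e9 * SUSTAINED_FRACTION    # effective bit rate

seconds = disk_bits / rate_bps
print(f"Filling {DISK_TB} TB at {SUSTAINED_FRACTION:.0%} of "
      f"{WAN_GBPS} Gbit/s takes about {seconds / 86400:.1f} days")
# -> roughly 37 days: repopulating the full disk is a multi-week
#    operation at startup-era bandwidth, which is why dataset
#    transfer and management services matter at these centers.
```
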
Submitted on behalf of the CMS Collaboration.
Author
Kenneth Bloom
(University of Nebraska-Lincoln)