Speaker
M. Ernst
(DESY)
Description
The LHC experiments need reliable, high-performance access to widely distributed storage resources across the
network. USCMS has worked with Fermilab-CD and DESY-IT on a storage service that has been deployed at
several sites. It provides Grid access to heterogeneous mass storage systems and synchronization between
them. It increases resiliency by insulating clients from storage and network failures, and it facilitates file
sharing and network traffic shaping.
This new storage service is implemented as a Grid Storage Element (SE). It combines dCache as the core
storage system with an implementation of the Storage Resource Manager (SRM), which together allow both local
and Grid-based access to the mass storage facilities. It provides advanced functionality for managing,
accessing, and distributing collaboration data.
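As a rough illustration of the Grid-facing path into such an SE, the Python sketch below copies a local file
into the dCache-based SE through its SRM interface. The hostnames, ports, and paths are hypothetical, and it
assumes the srmcp SRM client is installed and a valid Grid proxy is available.

    # Illustrative sketch only: hostnames, ports, and paths are hypothetical.
    # Assumes the srmcp SRM client is installed and a valid Grid proxy exists;
    # the exact file:// URL form may differ between client versions.
    import subprocess

    local_file = "file:///data/cms/run1234/events.root"  # local source (hypothetical)
    srm_url = ("srm://se.example.edu:8443"
               "/pnfs/example.edu/data/cms/run1234/events.root")  # SE target (hypothetical)

    # Copy the file into the SE via its SRM interface; SRM negotiates the
    # actual transfer protocol with the dCache doors.
    subprocess.run(["srmcp", local_file, srm_url], check=True)

The SRM layer hides from the client which transfer protocol and which underlying storage system actually
serves the file.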
USCMS is using this system both as a Disk Resource Manager at Tier-1 and Tier-2 sites, and as a Hierarchical
Resource Manager with Enstore as the tape back-end at the Fermilab Tier-1. It is used to provide shared,
managed disk pools at sites and to stream data between the CERN Tier-0, the Fermilab Tier-1, and U.S.
Tier-2 centers.
Applications can reserve space for a period of time, ensuring that space is available when the application runs.
Worker nodes without a WAN connection can trigger data replication to the SE and then access the data via the
LAN. Moving the SE functionality off the worker nodes significantly reduces the load on the compute farm
elements and improves their reliability.
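A minimal sketch of this worker-node workflow, again with hypothetical endpoints and paths: the replica is
first pulled into the site SE over the WAN via SRM, and a worker node then reads it over the LAN with the
dCache dccp client. The space reservation step is omitted, since its exact form depends on the SRM version
in use.

    # Illustrative sketch only: endpoints and paths are hypothetical.
    # Assumes the srmcp and dccp clients are available on the submitting host
    # and on the worker node, respectively.
    import subprocess

    tier1_copy = ("srm://t1.example.gov:8443"
                  "/pnfs/example.gov/cms/dst/file.root")         # remote source (hypothetical)
    site_se_copy = ("srm://se.site.example.edu:8443"
                    "/pnfs/site.example.edu/cms/dst/file.root")  # local SE target (hypothetical)
    lan_door = ("dcap://door.site.example.edu:22125"
                "/pnfs/site.example.edu/cms/dst/file.root")      # LAN access path (hypothetical)

    # 1. Trigger replication into the site SE over the WAN (SRM-to-SRM copy).
    subprocess.run(["srmcp", tier1_copy, site_se_copy], check=True)

    # 2. On a worker node without WAN connectivity, read the replica over the LAN.
    subprocess.run(["dccp", lan_door, "/scratch/file.root"], check=True)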
We describe the architecture and components of the system, as well as the experience gained in CMS production
and the DC04 Data Challenge.
Primary authors
D. Petravick
(FERMILAB)
I. Fisk
(FERMILAB)
J. Bakken
(FERMILAB)
M. Ernst
(DESY)
P. Fuhrmann
(DESY)
T. Mkrtchyan
(DESY)
T. Perelmutov
(FERMILAB)