Speaker
Mr Pavel JAKL (Nuclear Physics Inst., Academy of Sciences - Czech Republic)
Description
With its increasing data samples, the RHIC/STAR experiment has faced a challenging
data management dilemma: solutions using cheap disks attached to processing nodes
have rapidly become more economical than standard centralized storage. At the cost
of greater data management complexity, the STAR experiment moved to a
multi-component, locally distributed data model made viable by the introduction of
a scalable replica catalog and a home-grown data replication and data management
system. Access to the data was then provided via rootd through the integrated ROOT
TNetFile API.
However, the reliability of that system had its flaws, and STAR has moved to an
Xrootd-based infrastructure. Initially used by the BaBar experiment for its data
management, Xrootd is now deployed at STAR/BNL on 650 nodes, the largest
deployment of Xrootd data servers in the world. In this paper we report our model
and configuration, explain the previous and current approaches and the reasons for
the migration, and present our experience in deploying and testing Xrootd.
Finally, we will introduce our plan toward a full grid solution and the future
incarnation of our approach: the merging of two technologies and the best of both
worlds, Xrootd and SRM. This will enable dynamic management of disk storage at the
Xrootd nodes, as well as provide transparent access to remote storage nodes,
including mass storage systems.
Primary authors
Mr Alex SIM (Lawrence Berkeley Laboratory)
Dr Andrew HANUSHEVSKY (Stanford Linear Accelerator Center)
Mr Arie SHOSHANI (Lawrence Berkeley Laboratory)
Dr Jerome LAURET (Brookhaven National Laboratory)
Mr Pavel JAKL (Nuclear Physics Inst., Academy of Sciences - Czech Republic)