Speaker
Lassi Tuura
(Northeastern University)
Description
The CMS experiment will need to sustain uninterrupted, highly reliable, high-throughput and very diverse data
transfer activities as LHC operations start. PhEDEx, the CMS data transfer system, will be responsible for the
full range of the experiment's transfer needs. Covering the entire spectrum is a demanding task: from the
critical high-throughput transfers between CERN and the Tier-1 centres, to large-scale production transfers among
the Tier-1 and Tier-2 centres, to managing the 24/7 transfers among all 170 institutions in CMS, to
providing straightforward access to a handful of files for individual physicists.
To produce a system with confirmed capability to meet these objectives, the PhEDEx data transfer
system has undergone rigorous development and numerous demanding scale tests. We have sustained
production transfers exceeding 1 PB/month for several months and have demonstrated core system capacity
several orders of magnitude above expected LHC levels.
We describe the level of scalability reached, and how we got there, with a focus on the main insights into
developing a robust, lock-free and scalable distributed database application, the validation stress-test methods
we have used, and the development and testing tools we found practically useful.
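The abstract does not spell out the lock-free design, but a common pattern for coordinating many independent agents against a shared database without explicit row locks is optimistic claiming via a conditional UPDATE: an agent only "owns" a task if its update actually matched. The sketch below is illustrative only (the `task` table, states and agent names are hypothetical, and SQLite stands in for the production database purely to keep the example self-contained); it is not PhEDEx's actual schema or code.

```python
import sqlite3

# Hypothetical schema: a shared transfer-task table that many agents poll.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE task (
    id    INTEGER PRIMARY KEY,
    state TEXT NOT NULL DEFAULT 'pending',
    owner TEXT)""")
conn.executemany("INSERT INTO task (state) VALUES (?)", [("pending",)] * 3)
conn.commit()

def claim_task(conn, agent):
    """Claim one pending task without taking an explicit lock.

    The conditional UPDATE is atomic: if two agents race for the same
    row, only one UPDATE matches (state is still 'pending'); the loser
    sees rowcount == 0 and simply retries with another candidate row.
    """
    row = conn.execute(
        "SELECT id FROM task WHERE state = 'pending' LIMIT 1").fetchone()
    if row is None:
        return None  # nothing to do
    cur = conn.execute(
        "UPDATE task SET state = 'active', owner = ? "
        "WHERE id = ? AND state = 'pending'", (agent, row[0]))
    conn.commit()
    return row[0] if cur.rowcount == 1 else None

print(claim_task(conn, "agent-1"))  # claims task 1
print(claim_task(conn, "agent-2"))  # claims task 2
```

The key design point is that correctness lives in the atomicity of a single SQL statement rather than in held locks, so agents can crash, restart or scale out without leaving anything locked.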
Submitted on behalf of the CMS Collaboration
Author
Lassi Tuura
(Northeastern University)
Co-authors
Brian Bockelman
(University of Nebraska–Lincoln)
Daniele Bonacorsi
(INFN-CNAF, Bologna, Italy)
Derek Feichtinger
(PSI)
Jens Rehn
(CERN)
Ricky Egeland
(University of Minnesota – Twin Cities)
Simon Metson
(University of Bristol)