Sep 2 – 9, 2007
Victoria, Canada
Please book accommodation as soon as possible.

BNL dCache Status and Plan

Sep 6, 2007, 5:50 PM
20m
Carson Hall B (Victoria, Canada)


Oral presentation
Track: Computer facilities, production grids and networking

Speaker

Ms Zhenping Liu (BROOKHAVEN NATIONAL LABORATORY)

Description

The BNL ATLAS Computing Facility must provide a Grid-based storage system meeting these requirements: an aggregate one gigabyte per second of incoming and outgoing data between BNL and the ATLAS T0, T1, and T2 sites; thousands of reconstruction/analysis jobs accessing locally stored data objects; three petabytes of disk/tape storage in 2007, scaling to 25 petabytes by 2011; and a cost-effective storage solution.

BNL's dCache implementation uses three types of storage media: disk storage directly attached to pool nodes via Fibre Channel to preserve precious data, strategically placed local disks on worker nodes to provide cost-effective petascale online storage, and HPSS for archiving and redundancy. Dual-homed GridFTP door nodes bypassing the BNL firewall allow any internal pool node to send and receive data to and from remote users without exposing those nodes to the Internet.

We upgraded critical components such as PNFS and SRM, distributed load across separate machines, and fine-tuned various parameters, achieving 300 MB/s between CERN and the Tier 1 during the ATLAS data export exercise.

We have begun exercising two critical data transfer tasks to validate the readiness of the BNL USATLAS data storage in terms of stability and performance: 1) for WAN data transfer, running basic transfers (e.g. srmcp with and without FTS) and data replication based on ATLAS DDM between BNL and the USATLAS Tier 2 sites; and 2) exercising the LAN-based dCache POSIX I/O functionality (e.g. dcap, TDCacheFile, and TFile) and measuring the performance of concurrent read access to the same data set by a large number of analysis jobs. We will also test SRM v2 when it becomes available and implement its storage classes to support ATLAS reconstruction jobs that require RAW data pre-staged from tape to disk.
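The concurrent-read exercise in item 2 can be sketched as a small measurement harness. This is an illustrative sketch only: plain local file I/O stands in for the dcap POSIX layer, threads stand in for analysis jobs, and the file size and job count are invented for the example.

```python
import os
import tempfile
import threading
import time

CHUNK = 1 << 20  # read in 1 MiB chunks

def reader(path, results, idx):
    """One simulated analysis job: stream the whole file, count bytes read."""
    total = 0
    with open(path, "rb") as f:
        while True:
            buf = f.read(CHUNK)
            if not buf:
                break
            total += len(buf)
    results[idx] = total

def concurrent_read_benchmark(path, n_jobs=8):
    """Have n_jobs threads read the same data set concurrently.

    Returns (aggregate_bytes, elapsed_seconds, MB_per_second)."""
    results = [0] * n_jobs
    threads = [threading.Thread(target=reader, args=(path, results, i))
               for i in range(n_jobs)]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.monotonic() - start
    total = sum(results)
    return total, elapsed, total / elapsed / 1e6

if __name__ == "__main__":
    # Create a small stand-in "data set" (16 MiB); a real run would point
    # at a file served through a dCache dcap door instead.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(b"\0" * (16 * CHUNK))
        path = tmp.name
    try:
        total, elapsed, rate = concurrent_read_benchmark(path, n_jobs=8)
        print(f"{total} bytes read by 8 jobs in {elapsed:.2f}s ({rate:.0f} MB/s)")
    finally:
        os.unlink(path)
```

In a production measurement the same pattern applies, but each job would open the file through dcap (or ROOT's TDCacheFile), and the jobs would be separate processes on worker nodes rather than threads.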
Submitted on behalf of: Brookhaven National Lab, ATLAS Computing Facility

Primary author

Dr Dantong Yu (BROOKHAVEN NATIONAL LABORATORY)

Co-authors

Mr Carlos Gamboa (BROOKHAVEN NATIONAL LABORATORY)
Mr Christopher Hollowell (BROOKHAVEN NATIONAL LABORATORY)
Dr Hiro Ito (BROOKHAVEN NATIONAL LABORATORY)
Dr Jason Smith (BROOKHAVEN NATIONAL LABORATORY)
Mr Jay Packard (BROOKHAVEN NATIONAL LABORATORY)
John Hover (BROOKHAVEN NATIONAL LABORATORY)
Dr Michael Ernst (BROOKHAVEN NATIONAL LABORATORY)
Dr Ofer Rind (BROOKHAVEN NATIONAL LABORATORY)
Dr Razvan Popescu (BROOKHAVEN NATIONAL LABORATORY)
Mr Robert Petkus (BROOKHAVEN NATIONAL LABORATORY)
Mrs Wu Yingzi (BROOKHAVEN NATIONAL LABORATORY)
Ms Zhenping Liu (BROOKHAVEN NATIONAL LABORATORY)

Presentation materials