Sep 2 – 9, 2007
Victoria, Canada

Managing ATLAS data on a petabyte-scale with DQ2

Sep 3, 2007, 5:30 PM
Carson Hall C (Victoria, Canada)

Oral presentation: Grid middleware and tools


Mr Mario Lassnig (CERN & University of Innsbruck, Austria)


The ATLAS detector at CERN's Large Hadron Collider presents data handling requirements on an unprecedented scale. From 2008 on, the ATLAS distributed data management system (DQ2) must manage tens of petabytes of event data per year, distributed globally across the LCG, OSG and NDGF computing grids, now collectively known as the WLCG. Since its inception in 2005, DQ2 has continuously managed all datasets for the ATLAS collaboration, which now comprises over 3000 scientists from more than 150 universities and laboratories in more than 30 countries. Fulfilling its primary requirement of a highly distributed, fault-tolerant and scalable architecture, DQ2 has now been successfully upgraded from managing data on a terabyte scale to a petabyte scale. We present improvements and enhancements to DQ2 driven by the increasing demands of ATLAS data management. We describe performance issues, architectural changes and implementation decisions, the current state of deployment in test and production, as well as anticipated future improvements. The test results presented here show that DQ2 is capable of handling data up to and beyond the requirements of full-scale data-taking.
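DQ2's central abstractions are the dataset, a named collection of files, and the subscription, by which a site requests a complete replica of a dataset and the system transfers whatever files are missing. The sketch below illustrates that bookkeeping model only; all class, method and dataset names are hypothetical and do not reflect the actual DQ2 API.

```python
# Illustrative sketch of a dataset/subscription catalog in the style of
# DQ2's data model. Names are invented for this example.

class DatasetCatalog:
    """Tracks dataset contents, per-site replicas, and subscriptions."""

    def __init__(self):
        self.datasets = {}       # dataset name -> set of file GUIDs
        self.replicas = {}       # (dataset, site) -> GUIDs already at site
        self.subscriptions = []  # (dataset, site) pairs to be completed

    def register_dataset(self, name, files):
        """Register a named, immutable collection of files."""
        self.datasets[name] = set(files)

    def subscribe(self, name, site):
        """Request that `site` hold a complete replica of `name`."""
        self.subscriptions.append((name, site))
        self.replicas.setdefault((name, site), set())

    def record_transfer(self, name, site, guid):
        """Mark one file of the dataset as successfully copied to the site."""
        self.replicas[(name, site)].add(guid)

    def pending_transfers(self):
        """Files still missing at each subscribed site."""
        return {
            (name, site): self.datasets[name] - self.replicas[(name, site)]
            for name, site in self.subscriptions
        }


catalog = DatasetCatalog()
catalog.register_dataset("mc.pythia_minbias.evgen", ["guid-1", "guid-2"])
catalog.subscribe("mc.pythia_minbias.evgen", "TRIUMF-LCG2")
catalog.record_transfer("mc.pythia_minbias.evgen", "TRIUMF-LCG2", "guid-1")
print(catalog.pending_transfers())
# {('mc.pythia_minbias.evgen', 'TRIUMF-LCG2'): {'guid-2'}}
```

Treating the dataset, rather than the individual file, as the unit of replication is what lets such a system scale to petabytes: transfer agents work through the pending set per subscription instead of handling millions of files one by one.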

Primary authors

Dr Benjamin Gaidioz (CERN)
Dr Birger Koblitz (CERN)
Mr Mario Lassnig (CERN & University of Innsbruck, Austria)
Dr Massimo Lamanna (CERN)
Mr Miguel Branco (CERN)
Mr Pedro Salgado (University of Texas at Arlington, USA)
Mr Ricardo Rocha (CERN)
Dr Vincent Garonne (CERN)
