Speaker
D. Malon (ANL)
Description
As ATLAS begins validation of its computing model in 2004, the requirements
imposed upon ATLAS data management software move well beyond simple persistence,
and beyond the "read a file, write a file" operational model that has sufficed for
most simulation production. New functionality is required to support the
ATLAS Tier 0 model, and to support deployment in a globally distributed environment
in which the preponderance of computing resources, not only CPU cycles but
data services as well, resides outside the host laboratory.
This paper takes an architectural perspective in describing new developments in ATLAS
data management software, including the ATLAS event-level metadata system and related
infrastructure, and the mediation services that allow one to distinguish writing from
registration and selection from retrieval, in a manner that is consistent both for
event data and for time-varying conditions. The ever-broader role of databases and
catalogs, and issues related to the distributed deployment thereof, are also
addressed.
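
As a rough illustration of the separation the abstract describes, the hypothetical C++ interfaces below decouple writing from registration and selection from retrieval. The names (Token, Writer, Registrar, Selector, Retriever) are invented for this sketch and do not correspond to the actual ATLAS or POOL APIs; they only show how the same pattern can apply to event data and to time-varying conditions alike.

    // Illustrative sketch only: hypothetical interfaces, not the ATLAS/POOL API.
    #include <string>
    #include <vector>

    // Opaque reference returned by a write; the object is persistent but
    // not yet visible to any catalog or metadata system.
    struct Token { std::string value; };

    // Writing produces tokens; registration publishes them (with metadata)
    // to a catalog or collection as a separate, later step.
    class Writer {
    public:
        virtual ~Writer() = default;
        virtual Token write(const void* object, const std::string& type) = 0;
    };

    class Registrar {
    public:
        virtual ~Registrar() = default;
        virtual void registerToken(const Token& t,
                                   const std::string& collection,
                                   const std::string& metadata) = 0;
    };

    // Selection is a catalog/metadata query that yields tokens only;
    // retrieval dereferences a token and can be deferred until the data
    // are actually needed, possibly at a different site.
    class Selector {
    public:
        virtual ~Selector() = default;
        virtual std::vector<Token> select(const std::string& query) = 0;
    };

    class Retriever {
    public:
        virtual ~Retriever() = default;
        virtual const void* retrieve(const Token& t) = 0;
    };

In such a scheme, data written at one site can be registered, selected, and retrieved elsewhere, which is one way to read the abstract's emphasis on a globally distributed deployment; the actual mediation services are described in the paper itself.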
Primary authors
A. Schaffer (LAL Orsay)
D. Malon (ANL)