Dr Alexandre Vaniachine (Argonne)
The ATLAS detector is in the second year of the LHC long run. A starting point for ATLAS physics analysis is data reconstruction. Following prompt reconstruction, the ATLAS data are reprocessed. Reprocessing reconstructs the data with improved software and updated calibrations, which improves the coherence and physics quality of the reconstructed data. Large-scale reprocessing campaigns are conducted on the Grid: computing centers around the world participate, contributing tens of thousands of CPU cores for faster throughput. Reprocessing relies upon underlying ATLAS tools and technologies that assure reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, and petascale data integrity control. The same tools empower ATLAS physics groups in further data processing steps on the Grid, and were adopted for the trigger reprocessing required to validate new trigger menus and other trigger changes critical during 2011 LHC operations. To facilitate physics discoveries we must minimize unrecoverable event losses. Losses from transient failures are excluded by automated resubmission of failed data processing jobs. Events that could not be reconstructed during the reprocessing campaign are recovered shortly afterwards in a dedicated post-processing step using an updated software release and/or updated conditions. We describe the ATLAS technologies developed to eliminate event losses in petascale data processing and present our experience with large-scale data reprocessing campaigns and group data processing on the Grid.
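The automated-resubmission step can be sketched as a retry policy that distinguishes transient from permanent failures: transient failures are retried, while permanently failing jobs are deferred to the post-processing recovery step. The failure categories, function names, and retry limit below are illustrative assumptions, not the ATLAS production implementation.

```python
# Hypothetical transient failure categories; the real classification
# used in ATLAS production is more detailed.
TRANSIENT = {"worker_node_crash", "storage_timeout", "network_glitch"}

def resubmit_until_done(job, run_job, max_attempts=3):
    """Resubmit a job while its failures look transient.

    run_job(job) returns ("done", None) on success or
    ("failed", reason) on failure.  Returns the final disposition
    and the number of attempts made.
    """
    for attempt in range(1, max_attempts + 1):
        status, reason = run_job(job)
        if status == "done":
            return ("done", attempt)
        if reason not in TRANSIENT:
            # Permanent failure: defer the events to the dedicated
            # post-processing step with updated software/conditions.
            return ("post-processing", attempt)
    # Retry budget exhausted: also defer to post-processing.
    return ("post-processing", max_attempts)
```

For example, a job that fails once with a storage timeout and then succeeds would be reported as done after two attempts, whereas a job failing with a non-transient reason is handed off to post-processing immediately.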