The LHC experiments (ALICE, ATLAS, CMS and LHCb) rely on complex computing systems for data acquisition, processing, distribution, analysis and simulation. These systems are run using a variety of services provided by the experiments themselves, the WLCG Grid and the individual computing centres. The services range from the most basic (network, batch systems, file systems) through mass storage services and the Grid information system, up to the various workload management systems, data catalogues and data transfer tools, which are often developed within the collaborations.
In this contribution we review the status of the services most critical to the experiments, quantitatively measuring their readiness with respect to the start of LHC operations. Shortcomings are identified and common recommendations are offered.
Summary
A concise description of the computing systems of the experiments is given. Then, for each system, the degree of criticality that each experiment assigns to the underlying services is defined. For the most critical services, the level of readiness is assessed against a precisely defined set of metrics covering all relevant aspects of the software, the service operations and the service deployment at sites. This information is used to point out aspects that are still unsatisfactory and to identify effective strategies for addressing them. Finally, a global evaluation of the status of the critical services in a fully operational data-taking environment is given.
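As a purely illustrative sketch of such an assessment (the service names, criticality levels, metric areas and scores below are hypothetical and are not the metrics or data of the actual review), a per-service readiness tabulation could look like this:

```python
# Hypothetical sketch of a service-readiness tabulation.
# Services, criticality levels, metric areas and scores are invented
# for illustration; they are NOT the actual WLCG review data.

# Readiness is scored per metric area (software, operations, deployment)
# on a 0-1 scale; overall readiness is taken as the weakest area, so a
# single unsatisfactory aspect is immediately visible.

CRITICALITY = {"very high": 3, "high": 2, "moderate": 1}

services = {
    # service: (criticality, {metric area: score})
    "data transfer":  ("very high", {"software": 0.9, "operations": 0.7, "deployment": 0.8}),
    "data catalogue": ("high",      {"software": 0.8, "operations": 0.9, "deployment": 0.6}),
    "batch system":   ("moderate",  {"software": 1.0, "operations": 0.9, "deployment": 0.9}),
}

def overall_readiness(scores):
    """Overall readiness as the minimum over metric areas."""
    return min(scores.values())

# Report services ordered by decreasing criticality, flagging weak spots.
for name, (crit, scores) in sorted(
    services.items(), key=lambda kv: -CRITICALITY[kv[1][0]]
):
    weakest = min(scores, key=scores.get)
    print(f"{name:15s} criticality={crit:9s} "
          f"readiness={overall_readiness(scores):.1f} weakest={weakest}")
```

Taking the minimum over metric areas, rather than an average, reflects the idea that a critical service is only as ready as its weakest aspect.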
Presentation type (oral | poster): Oral