Dr Andrea Sciabà
(CERN, Geneva, Switzerland)
05/11/2008, 14:00
1. Computing Technology
Parallel Talk
The computing system of the CMS experiment relies on distributed resources at more than 80 computing centres worldwide. These centres, located in Europe, America and Asia, are interconnected by the Worldwide LHC Computing Grid. The operation of the system requires stable and reliable behaviour of the underlying infrastructure.
CMS has established a procedure to extensively test all...
Dr André dos Anjos
(University of Wisconsin, Madison, USA)
05/11/2008, 14:25
1. Computing Technology
Parallel Talk
The DAQ/HLT system of the ATLAS experiment at CERN, Switzerland, is being commissioned for first collisions in 2009. The system already comprises a very large farm of computers, amounting to about one third of its final event-processing capacity. Event selection is conducted in two steps after the hardware-based Level-1 Trigger: a Level-2 Trigger processes detector data based on...
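As a rough, self-contained illustration of such a two-step selection after a hardware Level-1, the sketch below models a fast Level-2 decision on Region-of-Interest data followed by a decision on the fully built event; all class names, thresholds and data members are invented for illustration and are not ATLAS TDAQ code.

// Illustrative sketch of a two-stage software trigger after a hardware Level-1.
// All types, thresholds and data members are hypothetical, not ATLAS TDAQ code.
#include <vector>
#include <iostream>

struct RoI   { double et; };                    // Region of Interest seeded by Level-1
struct Event { std::vector<RoI> rois; double totalEnergy; };

// Level-2: fast decision using only the RoI data requested from the readout.
bool passLevel2(const Event& ev, double etCut = 20.0) {
    for (const RoI& r : ev.rois)
        if (r.et > etCut) return true;          // accept if any RoI is energetic enough
    return false;
}

// Event Filter: slower decision with access to the fully built event.
bool passEventFilter(const Event& ev, double sumCut = 100.0) {
    return ev.totalEnergy > sumCut;
}

int main() {
    std::vector<Event> events = {
        { {{25.0}, {5.0}}, 140.0 },             // passes both stages
        { {{10.0}},        200.0 },             // rejected already at Level-2
    };
    for (const Event& ev : events) {
        bool accepted = passLevel2(ev) && passEventFilter(ev);
        std::cout << (accepted ? "accept" : "reject") << '\n';
    }
    return 0;
}

The point of the staging is that most events are rejected cheaply on partial data before the more expensive full-event processing is attempted.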
Alexander Kryukov
(Skobeltsyn Institute of Nuclear Physics, Moscow State University)
05/11/2008, 14:50
1. Computing Technology
Grid systems are used for calculations and data processing in various applied areas such as biomedicine, nanotechnology and materials science, cosmophysics and high-energy physics, as well as in a number of industrial and commercial areas. However, one of the basic problems standing in the way of wide use of grid systems is that application jobs are, as a rule, developed for...
David Cameron
(University of Oslo)
05/11/2008, 15:15
1. Computing Technology
Parallel Talk
The NorduGrid collaboration and its middleware product, ARC (the Advanced Resource Connector), span institutions in Scandinavia and several other countries in Europe and the rest of the world. The innovative nature of the ARC design and flexible, lightweight distribution make it an ideal choice to connect heterogeneous distributed resources for use by HEP and non-HEP applications alike. ARC...
Dr Mohammad Al-Turany
(GSI Darmstadt)
05/11/2008, 16:35
1. Computing Technology
Parallel Talk
The new developments in the FairRoot framework will be presented. FairRoot is the simulation and analysis framework used by the CBM and PANDA experiments at FAIR/GSI. The CMake-based building and testing system will be described. A new event display based on the EVE package from ROOT and on Geane will be shown, and the new developments for using GPUs and multi-core systems will be discussed.
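As a rough illustration of an EVE-based display of the kind mentioned above, the following ROOT macro is a minimal sketch using the TEve classes; it is not FairRoot's actual event-display code, and the hit positions are placeholder values.

// Minimal ROOT macro sketching an EVE-based event display.
// Illustrative only: not FairRoot code; the hit positions are dummy values.
#include "TEveManager.h"
#include "TEvePointSet.h"

void eve_sketch()
{
   TEveManager::Create();                  // starts EVE and creates the global gEve

   TEvePointSet* hits = new TEvePointSet("hits");
   hits->SetMarkerColor(kYellow);
   hits->SetMarkerStyle(20);

   // A few dummy hit positions; a real display would take these
   // from the reconstructed event.
   hits->SetNextPoint( 10.,  5., 30.);
   hits->SetNextPoint(-12.,  8., 45.);
   hits->SetNextPoint(  3., -7., 60.);

   gEve->AddElement(hits);                 // attach to the current event scene
   gEve->Redraw3D(kTRUE);                  // reset camera and draw
}

Such a macro would be run interactively, e.g. with root -l eve_sketch.C, and populated from reconstructed event data rather than fixed points.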
Gero Flucke
(Universität Hamburg)
05/11/2008, 17:00
1. Computing Technology
Parallel Talk
The ultimate performance of the CMS detector relies crucially on precise and prompt alignment and calibration of its components. A sizable number of workflows needs to be coordinated and performed with minimal delay, using a computing infrastructure able to provide the constants for a timely reconstruction of the data for subsequent physics analysis. The framework...
Dario Berzano
(Istituto Nazionale di Fisica Nucleare (INFN) and University of Torino)
05/11/2008, 17:25
1. Computing Technology
Parallel Talk
Current Grid deployments for LHC computing (namely the WLCG infrastructure) do not allow efficient parallel interactive processing of data. In order to allow physicists to interactively access subsets of data (e.g. for algorithm tuning and debugging before running over a full dataset) parallel Analysis Facilities based on PROOF have been deployed by the ALICE experiment at CERN and elsewhere....
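For context, the macro below is a minimal sketch of how an analysis might be run on a PROOF cluster from ROOT; the master URL, input files and selector name are placeholders and do not refer to the ALICE facilities.

// Minimal ROOT macro sketching interactive parallel analysis with PROOF.
// The master URL, input files and selector are placeholders, not ALICE's setup.
#include "TProof.h"
#include "TChain.h"

void proof_sketch()
{
   // Connect to a PROOF cluster (placeholder master URL).
   TProof::Open("proof://master.example.org");

   // Build a chain over a (placeholder) subset of the dataset.
   TChain* chain = new TChain("esdTree");
   chain->Add("root://se.example.org//data/run001.root");
   chain->Add("root://se.example.org//data/run002.root");

   // Process the chain in parallel on the cluster with a user selector.
   chain->SetProof();                      // route Process() through PROOF
   chain->Process("MySelector.C+");        // placeholder TSelector, compiled with ACLiC
}

The interactive use case is exactly the one mentioned above: running a selector over a subset of the data for tuning and debugging before launching a full Grid production.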
David Lange
(LLNL)
05/11/2008, 17:50
1. Computing Technology
Parallel Talk
The offline software suite of the CMS experiment must support the production and analysis activities across a distributed computing environment. It relies on over 100 external software packages and includes contributions from more than 250 active developers. The system requires consistent and rapid deployment of code releases, a stable code development platform, and efficient tools...