Dr
Ian Fisk
(Fermi National Accelerator Laboratory (FNAL))
04/11/2008, 16:10
1. Computing Technology
Parallel Talk
In this presentation we will discuss early experience with the CMS computing model, from the most recent large-scale challenge activities to the first days of data taking. The current version of the CMS computing model was developed in 2004 with a focus on steady-state running. In 2008 the model was revised to concentrate on the unique challenges of the commissioning period...
Mr
Michal Zerola
(Nuclear Physics Inst., Academy of Sciences, Praha)
04/11/2008, 16:35
1. Computing Technology
Parallel Talk
Efficient data movement is one of the most essential aspects of a distributed environment: it enables both fast, coordinated data transfer to collaborating sites and the distribution of data over multiple sites. With such capabilities at hand, truly distributed task scheduling with minimal latencies would be within reach of internationally distributed collaborations (such...
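As an illustrative sketch of coordinated transfer scheduling (this is not the algorithm presented in the talk; the site names, bandwidths, and file sizes are invented for the example), a simple greedy baseline assigns each file to whichever destination link would finish transferring it earliest:

```cpp
// Illustrative only: a greedy baseline for coordinated data movement,
// not the scheduling algorithm presented in the talk.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Link {
    std::string site;        // destination site
    double bandwidthMBps;    // assumed sustained throughput
    double busyUntil = 0.0;  // time at which this link becomes free
};

// Assign each file to whichever link would finish transferring it first;
// returns the completion time of every transfer.
std::vector<double> schedule(std::vector<Link>& links,
                             const std::vector<double>& fileSizesMB) {
    std::vector<double> done;
    for (double size : fileSizesMB) {
        auto best = std::min_element(
            links.begin(), links.end(),
            [size](const Link& a, const Link& b) {
                return a.busyUntil + size / a.bandwidthMBps <
                       b.busyUntil + size / b.bandwidthMBps;
            });
        best->busyUntil += size / best->bandwidthMBps;
        done.push_back(best->busyUntil);
    }
    return done;
}

int main() {
    std::vector<Link> links{{"BNL", 100.0}, {"Prague", 40.0}};
    std::vector<double> files{2000.0, 500.0, 1200.0};  // sizes in MB
    for (double t : schedule(links, files))
        std::cout << "transfer completes at t = " << t << " s\n";
}
```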
Dr
Valeri Fine
(Brookhaven National Laboratory)
04/11/2008, 17:00
1. Computing Technology
Parallel Talk
In the era of multi-core CPUs, software parallelism is becoming both affordable and a practical need. It is especially interesting to re-evaluate how well the sophisticated, but time-consuming, event reconstruction frameworks of high energy and nuclear physics adapt to a multi-threaded environment.
The STAR offline OO ROOT-based framework implements a well known...
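As a minimal sketch of the event-level parallelism at stake (this is not STAR framework code; the Event type, reconstruct() step, and workload are placeholders), worker threads can pull independent events from a shared atomic cursor:

```cpp
// Minimal sketch of multi-threaded event processing, not STAR code:
// workers claim the next unprocessed event via an atomic counter.
#include <algorithm>
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

struct Event { int id; };

// Stand-in for a time-consuming per-event reconstruction step
// (track finding, fitting, ...).
void reconstruct(const Event&) {}

int main() {
    std::vector<Event> events(10000);
    for (std::size_t i = 0; i < events.size(); ++i) events[i].id = (int)i;

    std::atomic<std::size_t> next{0};  // next unprocessed event index
    auto worker = [&events, &next] {
        for (std::size_t i; (i = next.fetch_add(1)) < events.size();)
            reconstruct(events[i]);
    };

    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < n; ++t) pool.emplace_back(worker);
    for (auto& t : pool) t.join();
    std::cout << "reconstructed " << events.size() << " events\n";
}
```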
Mr
Andreas Joachim Peters
(CERN)
04/11/2008, 17:25
1. Computing Technology
Parallel Talk
One of the biggest challenges for the LHC experiments at CERN is data management for physics analysis. Event tags and iterative looping over datasets require many file opens per second and (mainly forward) seeking access. A typical analysis will access large datasets, reading terabytes in a single iteration.
A large user community requires policies for space management and a...
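As a toy illustration of the access pattern described above (this is not the experiments' I/O code; the file name, event offsets, and fixed record size are all assumed), a reader opens the file once and then seeks forward to a sparse, ascending set of event offsets, such as an event-tag index would supply:

```cpp
// Toy sketch of open-once, forward-seeking sparse event reads.
#include <fstream>
#include <iostream>
#include <vector>

int main() {
    std::ifstream in("events.dat", std::ios::binary);  // assumed input file
    // Byte offsets of the selected events, sorted so every seek is forward.
    std::vector<std::streamoff> offsets{4096, 131072, 2097152};
    std::vector<char> record(8192);  // assumed fixed event record size

    for (std::streamoff off : offsets) {
        in.seekg(off);  // forward seek only
        in.read(record.data(), static_cast<std::streamsize>(record.size()));
        if (in.gcount() > 0)
            std::cout << "read event at offset " << off << "\n";
    }
}
```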
Mr
Tim Muenchen
(Bergische Universitaet Wuppertal)
04/11/2008, 17:50
1. Computing Technology
Parallel Talk
With the Large Hadron Collider (LHC) at CERN, Geneva, having begun operation in September, the large-scale computing grid LCG (LHC Computing Grid) is meant to process and store the large amounts of data produced in simulating, measuring and analyzing particle physics experimental data. Data acquired by ATLAS, one of the four big experiments at the LHC, are analyzed using compute jobs running on the...