Mr
Peter Gronbech
(Particle Physics-University of Oxford)
06/09/2011, 14:00
Track 1: Computing Technology for Physics Research
Parallel talk
Monitoring the Grid at local, national, and global levels
The GridPP Collaboration
The Worldwide LHC Computing Grid is the computing infrastructure set up to process the experimental data from the experiments at the Large Hadron Collider at CERN.
GridPP is the project that provides the UK part of this infrastructure across 19 sites. To ensure that these large...
Yves Kemp
(Deutsches Elektronen-Synchrotron (DESY))
06/09/2011, 14:25
Track 1: Computing Technology for Physics Research
Parallel talk
Preserving data from past experiments, and preserving the ability to perform analysis with old data, is of growing importance in many domains of science, including High Energy Physics (HEP). A study group on this issue, DPHEP, has been established to provide guidelines and a structure for international collaboration on data preservation projects in HEP.
This contribution...
Dr
Federico Stagni
(Conseil Europeen Recherche Nucl. (CERN)), Dr
Philippe Charpentier
(Conseil Europeen Recherche Nucl. (CERN))
06/09/2011, 14:50
Track 1: Computing Technology for Physics Research
Parallel talk
The LHCb computing model was designed to support the LHCb physics programme, taking into account LHCb specificities (event sizes, processing times, etc.). Within this model several key activities are defined, the most important of which are real-data processing (reconstruction, stripping and streaming, group and user analysis), Monte Carlo simulation and data replication. In this...
Dr
Sebastien Binet
(Laboratoire de l'Accelerateur Lineaire (LAL)-Universite de Pari)
06/09/2011, 15:15
Track 1: Computing Technology for Physics Research
Parallel talk
Current HENP libraries and frameworks were written before multicore systems became widely deployed and used. From this environment a 'single-thread' processing model naturally emerged, but the implicit assumptions it encouraged are greatly impairing our ability to scale in a multicore/manycore world. While parallel programming - still in an intensive phase of R&D despite the 30+...
Dr
Christian Schmitt
(Institut fuer Physik-Johannes-Gutenberg-Universitaet Mainz)
06/09/2011, 16:10
Track 1: Computing Technology for Physics Research
Parallel talk
The reconstruction and simulation of collision events is a major task in modern HEP experiments, involving several tens of thousands of standard CPUs. Graphics processors (GPUs), on the other hand, have become much more powerful and by far outperform standard CPUs in terms of floating-point operations thanks to their massively parallel approach. The usage of these GPUs could...
Prof.
Peter R Hobson
(Brunel University)
06/09/2011, 16:35
Track 1: Computing Technology for Physics Research
Parallel talk
In-line holography has recently made the transition from silver-halide based recording media, with laser reconstruction, to recording with large-area pixel detectors and computer-based reconstruction. This form of holographic imaging is used for small particulates, such as cloud or fuel droplets, marine plankton and alluvial sediments, and enables a true 3D object field to be recorded at high...
Dr
Jan Balewski
(MIT)
06/09/2011, 17:00
Track 1: Computing Technology for Physics Research
Parallel talk
In recent years, Cloud computing has become a very attractive and popular model for accessing distributed resources, and has emerged as the next big trend after the so-called Grid computing approach. The onsite STAR computing resources, amounting to about 3000 CPU slots, have been extended by an additional 1000 slots using opportunistic resources from the pilot DOE/Magellan and DOE/Nimbus...
Dr
Gerardo Ganis
(CERN), Dr
Sangsu Ryu
(KiSTi Korea Institute of Science & Technology Information (KiSTi))
06/09/2011, 17:25
Track 1: Computing Technology for Physics Research
Parallel talk
PROOF (Parallel ROOT Facility) is an extension of ROOT enabling interactive analysis in parallel on clusters of computers or many-core machines. PROOF has been adopted and successfully utilized as one of the main analysis models by LHC experiments, including ALICE and ATLAS. ALICE has seen a growing number of PROOF clusters around the world, with CAF at CERN, SKAF in Slovakia and GSIAF at Darmstadt being...