-
Mr Peter Gronbech (Particle Physics, University of Oxford)
06/09/2011, 14:00 | Track 1: Computing Technology for Physics Research | Parallel talk
Monitoring the Grid at local, national, and global levels (The GridPP Collaboration)
The Worldwide LHC Computing Grid is the computing infrastructure set up to process the experimental data coming from the experiments at the Large Hadron Collider located at CERN. GridPP is the project that provides the UK part of this infrastructure across 19 sites in the UK. To ensure that these large...
-
Yves Kemp (Deutsches Elektronen-Synchrotron (DESY))
06/09/2011, 14:25 | Track 1: Computing Technology for Physics Research | Parallel talk
Preserving data from past experiments, and preserving the ability to perform analysis on old data, is of growing importance in many domains of science, including High Energy Physics (HEP). A study group on this issue, DPHEP, has been established in the field to provide guidelines and a structure for international collaboration on data preservation projects in HEP. This contribution...
-
Dr Federico Stagni (Conseil Europeen Recherche Nucl. (CERN)), Dr Philippe Charpentier (Conseil Europeen Recherche Nucl. (CERN))
06/09/2011, 14:50 | Track 1: Computing Technology for Physics Research | Parallel talk
The LHCb computing model was designed to support the LHCb physics programme, taking into account LHCb specificities (event sizes, processing times, etc.). Within this model several key activities are defined, the most important of which are real-data processing (reconstruction, stripping and streaming, group and user analysis), Monte Carlo simulation, and data replication. In this...
-
Dr Sebastien Binet (Laboratoire de l'Accelerateur Lineaire (LAL)-Universite de Pari)
06/09/2011, 15:15 | Track 1: Computing Technology for Physics Research | Parallel talk
Current HENP libraries and frameworks were written before multicore systems became widely deployed and used. In that environment a 'single-thread' processing model naturally emerged, but the implicit assumptions it encouraged are greatly impairing our ability to scale in a multicore/manycore world. While parallel programming - still in an intensive phase of R&D despite the 30+...
-
Dr Christian Schmitt (Institut fuer Physik, Johannes-Gutenberg-Universitaet Mainz)
06/09/2011, 16:10 | Track 1: Computing Technology for Physics Research | Parallel talk
The reconstruction and simulation of collision events is a major task in modern HEP experiments, involving tens of thousands of standard CPUs. Graphics processors (GPUs), on the other hand, have become much more powerful and by far outperform standard CPUs in floating-point operations thanks to their massively parallel design. The usage of these GPUs could...
-
Prof. Peter R Hobson (Brunel University)
06/09/2011, 16:35 | Track 1: Computing Technology for Physics Research | Parallel talk
In-line holography has recently made the transition from silver-halide-based recording media, with laser reconstruction, to recording with large-area pixel detectors and computer-based reconstruction. This form of holographic imaging is used for small particulates, such as cloud or fuel droplets, marine plankton and alluvial sediments, and enables a true 3D object field to be recorded at high...
-
Dr Jan Balewski (MIT)
06/09/2011, 17:00 | Track 1: Computing Technology for Physics Research | Parallel talk
In recent years, cloud computing has become a very attractive and popular model for accessing distributed resources, and has emerged as the next big trend after the so-called Grid computing approach. The onsite STAR computing resources, amounting to about 3000 CPU slots, have been extended by an additional 1000 slots using opportunistic resources from the pilot DOE/Magellan and DOE/Nimbus...
-
Dr Gerardo Ganis (CERN), Dr Sangsu Ryu (KiSTi Korea Institute of Science & Technology Information (KiS)
06/09/2011, 17:25 | Track 1: Computing Technology for Physics Research | Parallel talk
PROOF (Parallel ROOT Facility) is an extension of ROOT enabling interactive parallel analysis on clusters of computers or on a many-core machine. PROOF has been adopted and successfully used as one of the main analysis models by LHC experiments including ALICE and ATLAS. ALICE has seen a growing number of PROOF clusters around the world, CAF at CERN, SKAF in Slovakia, and GSIAF at Darmstadt being...