-
Michael Ernst (Unknown), 21/05/2012, 08:30
-
Dr Paul Horn (NYU Distinguished Scientist in Residence & Senior Vice Provost for Research), 21/05/2012, 08:35
-
Mr Glen Crawford (DOE Office of Science), 21/05/2012, 08:45
-
Prof. Joe Incandela (UCSB), 21/05/2012, 09:30
-
Dr Rene Brun (CERN), 21/05/2012, 10:30
-
Wesley Smith (University of Wisconsin (US)), 21/05/2012, 11:00
-
Mr Forrest Norrod (Vice President & General Manager, Dell Worldwide Server Solutions), 21/05/2012, 11:30
-
Mr Vasco Chibante Barroso (CERN), 21/05/2012, 13:30. A Large Ion Collider Experiment (ALICE) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). Since its successful start-up in 2010, the LHC has been performing outstandingly, providing the experiments with long periods of stable collisions and an integrated luminosity that greatly exceeds the...
-
Pablo Saiz (CERN), 21/05/2012, 13:30. Distributed Processing and Analysis on Grids and Clouds (track 3), parallel. AliEn is the Grid middleware used by the ALICE collaboration. It provides all the components needed to manage the distributed resources. AliEn is used for all the computing workflows of the experiment: Monte Carlo production, data replication, reconstruction, and organized or chaotic user analysis. Moreover, AliEn is also used by other experiments such as PANDA and CBM. The...
-
Vasil Georgiev Vasilev (CERN), 21/05/2012, 13:30. Software Engineering, Data Stores and Databases (track 5), parallel. Cling (http://cern.ch/cling) is a C++ interpreter built on top of clang (http://clang.llvm.org) and LLVM (http://llvm.org). Like its predecessor CINT, cling offers an interactive, terminal-like prompt. It enables exploratory programming with rapid edit/run cycles. The ROOT team has more than 15 years of experience with C++ interpreters, and this has been fully exploited in the design of...
-
Dr Domenico Vicinanza (DANTE), 21/05/2012, 13:30. Computer Facilities, Production Grids and Networking (track 4), parallel. The Large Hadron Collider (LHC) is currently running at CERN in Geneva, Switzerland. Physicists are using the LHC to recreate the conditions just after the Big Bang, by colliding two beams of particles and heavy ions head-on at very high energy. The project is expected to generate 27 TB of raw data per day, plus 10 TB of "event summary data". This data is sent out from CERN to eleven Tier 1...
-
Steven Goldfarb (University of Michigan (US)), 21/05/2012, 13:30. Og, commonly recognized as one of the earliest contributors to experimental particle physics, began his career by smashing two rocks together, then turning to his friend Zog and stating those famous words “oogh oogh”. It was not the rock-smashing that marked HEP’s origins, but rather the sharing of information, which then allowed Zog to confirm the important discovery that rocks are indeed...
-
David Michael Rohr (Johann-Wolfgang-Goethe Univ. (DE)), 21/05/2012, 13:55. The ALICE High Level Trigger (HLT) is capable of performing an online reconstruction of heavy-ion collisions. The reconstruction of particle trajectories in the Time Projection Chamber (TPC) is the most compute-intensive step. The TPC online tracker implementation combines the principle of the cellular automaton and the Kalman filter. It has been accelerated by the use of graphics cards...
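The Kalman-filter step mentioned above can be illustrated with a minimal one-dimensional measurement update. This is a toy sketch with assumed names and numbers, not the ALICE tracker code, which works on full track states with covariance matrices:

```python
# Toy 1D Kalman filter measurement update (illustrative only; the real ALICE
# TPC tracker propagates five-parameter track states with full covariances).
def kalman_update(x, P, z, R):
    """Blend state estimate x (variance P) with measurement z (variance R)."""
    K = P / (P + R)          # Kalman gain: how much to trust the measurement
    x_new = x + K * (z - x)  # corrected state estimate
    P_new = (1.0 - K) * P    # uncertainty shrinks after the update
    return x_new, P_new

# Equal prediction and measurement uncertainty gives a gain of 0.5.
x, P = kalman_update(x=0.0, P=4.0, z=1.0, R=4.0)
```

With equal variances the update lands halfway between prediction and measurement and halves the variance, which is why chaining such updates along successive TPC pad rows steadily refines the track estimate.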
-
Dr Maria Grazia Pia (Universita e INFN (IT)), 21/05/2012, 13:55. Publications in scholarly journals establish the body of knowledge deriving from scientific research; they also play a fundamental role in the career path of scientists and in the evaluation criteria of funding agencies. This presentation reviews the evolution of computing-oriented publications in HEP following the start of operation of LHC. Quantitative analyses are illustrated, which...
-
Edoardo Martelli (CERN), 21/05/2012, 13:55. Computer Facilities, Production Grids and Networking (track 4), parallel. The much-heralded exhaustion of the IPv4 networking address space has finally started. While many of the research and education networks have been ready and poised for years to carry IPv6 traffic, there is a well-known lack of academic institutes using the new protocols. One reason for this is an obvious absence of pressure, due to the extensive use of NAT, or the fact that most currently still have...
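The scale of the address-space jump behind the IPv4-to-IPv6 transition can be made concrete with the standard library. This is an illustrative calculation, not part of the contribution itself:

```python
import ipaddress

# Compare the total IPv4 and IPv6 address spaces.
v4 = ipaddress.ip_network("0.0.0.0/0")   # all of IPv4
v6 = ipaddress.ip_network("::/0")        # all of IPv6

v4_total = v4.num_addresses              # 2**32, about 4.3 billion
v6_total = v6.num_addresses              # 2**128
growth_factor = v6_total // v4_total     # 2**96
```

The roughly 4.3 billion IPv4 addresses are what NAT has been stretching; IPv6 multiplies the space by a factor of 2**96, removing the pressure but also, as the abstract notes, one of the incentives to migrate.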
-
Roberto Agostino Vitillo (LBNL), 21/05/2012, 13:55. Software Engineering, Data Stores and Databases (track 5), parallel. Modern superscalar, out-of-order microprocessors dominate large-scale server computing. Monitoring their activity during program execution has become complicated due to the complexity of the microarchitectures and their I/O interactions. Recent processors have thousands of performance monitoring events, which are required to actually provide coverage for all of the complex interactions and...
-
Vincent Garonne (CERN), 21/05/2012, 13:55. Distributed Processing and Analysis on Grids and Clouds (track 3), parallel. The ATLAS collaboration has recorded almost 5 PB of RAW data since the LHC started running at the end of 2009. Together with experimental data generated from RAW and complementary simulation data, and accounting for data replicas on the grid, a total of 74 PB is currently stored in the Worldwide LHC Computing Grid by ATLAS. All of this data is managed by the ATLAS Distributed Data Management...
-
Pedro Ferreira (CERN), 21/05/2012, 14:20. Since 2009, the development of Indico has focused on usability, performance and new features, especially those related to meeting collaboration. Usability studies have resulted in the biggest change Indico has experienced up to now: a new web layout that improves the user experience. Performance improvements have also been a key goal since 2010; the main features of Indico have been...
-
Frederik Beaujean (Max Planck Institute for Physics), 21/05/2012, 14:20. Software Engineering, Data Stores and Databases (track 5), poster. The Bayesian Analysis Toolkit (BAT) is a C++ library designed to analyze data through the application of Bayes' theorem. For parameter inference, it is necessary to draw samples from the posterior distribution within the given statistical model. At its core, BAT uses an adaptive Markov Chain Monte Carlo (MCMC) algorithm. As an example of a challenging task, we consider the analysis of...
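The core MCMC idea can be sketched as a minimal random-walk Metropolis sampler. This is a toy stand-in with assumed parameters; BAT's actual algorithm is adaptive and runs multiple chains:

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=0.5, seed=42):
    """Minimal random-walk Metropolis sampler for a 1D posterior."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        x_prop = x + rng.uniform(-step, step)   # symmetric proposal
        lp_prop = log_post(x_prop)
        # Accept with probability min(1, posterior ratio).
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = x_prop, lp_prop
        samples.append(x)                       # rejected moves repeat x
    return samples

# Draw from a standard normal posterior: log p(x) = -x^2/2 up to a constant.
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=5000)
```

The sample mean and variance should approach 0 and 1 as the chain grows; adaptivity, as used in BAT, tunes the proposal `step` on the fly so the acceptance rate stays efficient.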
-
Diego Casadei (New York University (US)), 21/05/2012, 14:20. The ATLAS trigger has been used very successfully to collect collision data during 2009-2011 LHC running at centre-of-mass energies between 900 GeV and 7 TeV. The three-level trigger system reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of about 300 Hz. The first level uses custom electronics to reject most background collisions, in less...
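The overall rejection implied by those numbers is easy to check. Only the 40 MHz input and roughly 300 Hz output come from the abstract; the intermediate per-level rates below are assumed, typical values used purely for illustration:

```python
# Rates through an assumed three-level trigger cascade (Hz). Only the first
# and last values come from the abstract; the middle two are illustrative.
rates_hz = [40e6, 75e3, 3e3, 300.0]   # input, after L1, after L2, recorded

# Per-level rejection factors and the total reduction.
rejection = [rates_hz[i] / rates_hz[i + 1] for i in range(len(rates_hz) - 1)]
total_reduction = rates_hz[0] / rates_hz[-1]
```

Whatever the intermediate split, the product of the per-level factors must equal the total reduction of roughly 1.3 x 10^5, which is why each level can afford progressively more time per event.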
-
Mr Andrey Bobyshev (FERMILAB), 21/05/2012, 14:20. Computer Facilities, Production Grids and Networking (track 4), parallel. The LHC is entering its fourth year of production operation. Many Tier1 facilities can count up to a decade of existence when development and ramp-up efforts are included. LHC computing has always been heavily dependent on high-capacity, high-performance network facilities for both LAN and WAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to...
-
Dr Stuart Wakefield (Imperial College London), 21/05/2012, 14:20. Distributed Processing and Analysis on Grids and Clouds (track 3), parallel. CMS has started the process of rolling out a new workload management system. This system is currently used for reprocessing and Monte Carlo production, with tests under way using it for user analysis. It was decided to combine, as much as possible, the production/processing, analysis and T0 codebases so as to reduce duplicated functionality and make the best use of limited developer...
-
Dr Giuseppe Avolio (University of California Irvine (US)), 21/05/2012, 14:45. The Trigger and DAQ (TDAQ) system of the ATLAS experiment is a very complex distributed computing system, composed of O(10000) applications running on more than 2000 computers. The TDAQ Controls system has to guarantee the smooth and synchronous operation of all TDAQ components and has to provide the means to minimize the downtime of the system caused by runtime failures, which are...
-
Maria Alandes Pradillo (CERN), 21/05/2012, 14:45. Software Engineering, Data Stores and Databases (track 5), parallel. The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality standard to identify...
-
Jan Iven (CERN), Massimo Lamanna (CERN), 21/05/2012, 14:45. Computer Facilities, Production Grids and Networking (track 4), parallel. Large-volume physics data storage at CERN is based on two services, CASTOR and EOS. CASTOR, in production for many years, now handles the Tier0 activities (including WAN data distribution) as well as all tape-backed data; EOS, in production since 2011, supports the fast-growing need for high-performance, low-latency (i.e. disk-only) data access for user analysis. In 2011, a large...
-
Philippe Charpentier (CERN), 21/05/2012, 14:45. Distributed Processing and Analysis on Grids and Clouds (track 3), parallel. The LHCb Data Management System is based on the DIRAC Grid Community Solution. LHCbDirac provides extensions to the basic DMS, such as a Bookkeeping System. Datasets are defined as sets of files corresponding to a given query in the Bookkeeping system. Datasets can be manipulated by CLI tools as well as by automatic transformations (removal, replication, processing). A dynamic handling of...
-
Ludmila Marian (CERN), 21/05/2012, 14:45. In this talk, we will explain how CERN digital library services have evolved to deal with the publication of the first results of the LHC. We will describe the workflow of the documents on the CERN Document Server and the diverse constraints relative to this workflow. We will also give an overview of how the underlying software, Invenio, has been enriched to cope with special needs. In a...
-
Lucas Taylor (Fermi National Accelerator Lab. (US)), 21/05/2012, 15:10. The age and size of the CMS collaboration at the LHC means it now has many hundreds of inhomogeneous web sites and services and more than 100,000 documents. We describe a major initiative to create a single coherent CMS internal and public web site. This uses the Drupal web Content Management System (now supported by CERN/IT) on top of a standard LAMP stack (Linux, Apache, MySQL, and...
-
Rapolas Kaselis (Vilnius University (LT)), 21/05/2012, 15:10. Computer Facilities, Production Grids and Networking (track 4), parallel. The CMS experiment possesses a distributed computing infrastructure, and its performance heavily depends on the fast and smooth distribution of data between different CMS sites. Data must be transferred from the Tier-0 (CERN) to the Tier-1 sites for storage and archiving, and speed and good quality are vital to avoid overflowing CERN storage buffers. At the same time, processed data has to be distributed...
-
Mrs Jianlin Zhu (Huazhong Normal University (CN)), 21/05/2012, 15:10. Software Engineering, Data Stores and Databases (track 5), parallel. The data-acquisition system designed by ALICE, the experiment dedicated to the study of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider), handles the data flow from the sub-detector electronics to the archiving on tape. The software framework of the ALICE data-acquisition system is called DATE (ALICE Data Acquisition and Test Environment)...
-
Andrea Negri (Universita e INFN (IT)), 21/05/2012, 15:10. The ATLAS experiment at the Large Hadron Collider at CERN relies on a complex and highly distributed Trigger and Data Acquisition (TDAQ) system to gather and select particle collision data at unprecedented energies and rates. The TDAQ is composed of three levels, which reduce the event rate from the design bunch-crossing rate of 40 MHz to an average event recording rate of about 200 Hz. The...
-
Dr Maria Girone (CERN), 21/05/2012, 15:10. Distributed Processing and Analysis on Grids and Clouds (track 3), parallel. The “Common Solutions” strategy of the Experiment Support group at CERN for the LHC experiments: after two years of LHC data taking, processing and analysis, and with numerous changes in computing technology, a number of aspects of the experiments’ computing, as well as WLCG deployment and operations, need to evolve. As part of the activities of the Experiment Support group in CERN’s IT department, and reinforced by effort from the EGI-InSPIRE project, we present work aimed at common...
-
Dr Andreas Pfeiffer (CERN), 21/05/2012, 16:35. Software Engineering, Data Stores and Databases (track 5), parallel. The CMS experiment is made of many detectors which in total sum up to more than 75 million channels. The online database stores the configuration data used to configure the various parts of the detector and bring it into all possible running states. The database also stores the conditions data: detector monitoring parameters of all channels (temperatures, voltages), detector quality information,...
-
Shawn Mc Kee (University of Michigan (US)), 21/05/2012, 16:35. Computer Facilities, Production Grids and Networking (track 4), parallel. Global scientific collaborations, such as ATLAS, continue to push the network requirements envelope. Data movement in this collaboration is projected to include the regular exchange of petabytes of datasets between the collection and analysis facilities in the coming years. These requirements place a high emphasis on networks functioning at peak efficiency and availability; the lack thereof...
-
Andrew John Washbrook (University of Edinburgh (GB)), 21/05/2012, 16:35. Distributed Processing and Analysis on Grids and Clouds (track 3), parallel. AthenaMP is the multi-core implementation of the ATLAS software framework and allows the efficient sharing of memory pages between multiple threads of execution. This has now been validated for production and delivers a significant reduction in overall memory footprint with negligible CPU overhead. Before AthenaMP can be routinely run on the LHC Computing Grid, it must be determined how the...
-
Hannes Sakulin (CERN), 21/05/2012, 16:35. The data-acquisition (DAQ) system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first-level hardware trigger. Assembled events are made available to the high-level trigger (HLT), which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregated throughput of 100...
-
Dr Christopher Jones (Fermi National Accelerator Lab. (US)), 21/05/2012, 16:35. Traditionally, HEP experiments exploit the multiple cores in a CPU by having each core process one event. However, future PC designs are expected to use CPUs which double the number of processing cores at the same rate as the cost of memory falls by a factor of two. This effectively means the amount of memory per processing core will remain constant. This is a major challenge for LHC...
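A back-of-the-envelope model shows why sharing read-only data between processes matters in this regime. All of the numbers below are assumed purely for illustration; they are not from the contribution:

```python
# Assumed per-process breakdown: read-only data (code, geometry, conditions)
# that could in principle be shared, plus truly private per-event memory.
SHARED_GB = 1.5
PRIVATE_GB = 0.5

def footprint_independent(cores):
    """One fully independent process per core: everything is duplicated."""
    return cores * (SHARED_GB + PRIVATE_GB)

def footprint_shared(cores):
    """Read-only data shared once (e.g. via copy-on-write after a fork)."""
    return SHARED_GB + cores * PRIVATE_GB

indep_32 = footprint_independent(32)   # memory needed with no sharing
shared_32 = footprint_shared(32)       # memory needed with sharing
```

Under these assumed numbers a 32-core box needs 64 GB without sharing but only 17.5 GB with it, and the gap widens every time the core count doubles while memory per core stays flat.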
-
Niko Neufeld (CERN), 21/05/2012, 17:00. Computer Facilities, Production Grids and Networking (track 4), parallel. The upgraded LHCb experiment, which is expected to go into operation in 2018/19, will require a massive increase in its compute facilities. A new 2 MW data-centre is planned at the LHCb site. Apart from the obvious requirement of minimizing the cost, the data-centre has to tie in well with the needs of online processing, while at the same time staying open for future and offline use. We present...
-
Dr David Malon (Argonne National Laboratory (US)), 21/05/2012, 17:00. Software Engineering, Data Stores and Databases (track 5), parallel. The volume and diversity of metadata in an experiment of the size and scope of ATLAS is considerable. Even the definition of metadata may seem context-dependent: data that are primary for one purpose may be metadata for another. Trigger information and data from the Large Hadron Collider itself provide cases in point, but examples abound. Metadata about logical or physics constructs, such...
-
Dr Jose Hernandez Calama (Centro de Investigaciones Energ. Medioambientales y Tecn. (ES)), 21/05/2012, 17:00. Distributed Processing and Analysis on Grids and Clouds (track 3), parallel. Commodity hardware is going many-core. We might soon not be able to satisfy the per-core job memory needs in the current single-core processing model in High Energy Physics. In addition, an ever-increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to effectively...
-
Dr Marc Paterno (Fermilab), 21/05/2012, 17:00. Future "Intensity Frontier" experiments at Fermilab are likely to be conducted by smaller collaborations, with fewer scientists, than is the case for recent "Energy Frontier" experiments. *art* is an event-processing framework designed with the needs of such experiments in mind. The authors have been involved with the design and implementation of frameworks for several experiments,...
-
Andrea Petrucci (CERN), 21/05/2012, 17:00. The Data Acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment at CERN assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s. By the time the LHC restarts after the 2013/14 shut-down, the current compute nodes and networking infrastructure will have reached the end of their lifetime. We are presenting design studies for an...
-
Dave Dykstra (Fermi National Accelerator Lab. (US)), 21/05/2012, 17:25. Software Engineering, Data Stores and Databases (track 5), parallel. Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability...
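The caching idea behind Frontier, serving repeated read-only queries from a cache instead of the back-end database, can be sketched in a few lines. This is a toy read-through cache with invented names, not Frontier's implementation (which caches over HTTP via squid proxies):

```python
import time

class QueryCache:
    """Toy read-through cache keyed by query text, with a time-to-live."""
    def __init__(self, backend, ttl_seconds=300.0):
        self.backend = backend          # callable that runs the real query
        self.ttl = ttl_seconds
        self.store = {}                 # query -> (result, timestamp)
        self.hits = 0
        self.misses = 0

    def get(self, query):
        entry = self.store.get(query)
        if entry is not None and time.time() - entry[1] < self.ttl:
            self.hits += 1
            return entry[0]             # served from cache, no DB access
        self.misses += 1
        result = self.backend(query)    # cache miss: query the database
        self.store[query] = (result, time.time())
        return result

backend_calls = []
def backend(query):
    backend_calls.append(query)         # record each real database access
    return "rows-for:" + query

cache = QueryCache(backend)
first = cache.get("SELECT * FROM conditions")
second = cache.get("SELECT * FROM conditions")   # identical query: cached
```

Because thousands of grid jobs issue identical conditions queries, even a simple keyed cache like this collapses the database load to roughly one query per distinct request per TTL window.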
-
Dr Horst Göringer (GSI), 21/05/2012, 17:25. Computer Facilities, Production Grids and Networking (track 4), parallel. GSI in Darmstadt (Germany) is a center for heavy-ion research. It hosts an ALICE Tier2 center and is the home of the future FAIR facility. The planned data rates of the largest FAIR experiments, CBM and PANDA, will be similar to those of the current LHC experiments at CERN. gStore is a hierarchical storage system with a unique name space, successfully in operation for more than...
-
Anar Manafov (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE)), 21/05/2012, 17:25. Distributed Processing and Analysis on Grids and Clouds (track 3), parallel. PROOF on Demand (PoD) is a tool-set which dynamically sets up a PROOF cluster at a user's request on any resource management system (RMS). It provides a plug-in based system in order to use different job submission front-ends. PoD is currently shipped with gLite, LSF, PBS (PBSPro/OpenPBS/Torque), Grid Engine (OGE/SGE), Condor, LoadLeveler, and SSH plug-ins. It makes it possible just within...
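The plug-in pattern described, one submission front-end per RMS behind a common interface, might look like this. All names and behaviour here are invented for illustration; this is not PoD's actual API:

```python
# Registry of job-submission front-ends, keyed by RMS name (illustrative).
SUBMITTERS = {}

def register(rms_name):
    """Decorator adding a submission function to the registry."""
    def wrap(fn):
        SUBMITTERS[rms_name] = fn
        return fn
    return wrap

@register("ssh")
def submit_ssh(n_workers):
    # Stand-in for starting workers over SSH.
    return ["ssh-worker-%d" % i for i in range(n_workers)]

@register("pbs")
def submit_pbs(n_workers):
    # Stand-in for submitting jobs to a PBS batch system.
    return ["pbs-job-%d" % i for i in range(n_workers)]

def setup_cluster(rms, n_workers):
    """Front-end independent entry point: pick the plug-in and submit."""
    if rms not in SUBMITTERS:
        raise ValueError("no plug-in for RMS %r" % rms)
    return SUBMITTERS[rms](n_workers)

workers = setup_cluster("ssh", 3)
```

The point of the pattern is that adding support for a new batch system means registering one more function, while the user-facing `setup_cluster` call never changes.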
-
Wahid Bhimji (University of Edinburgh (GB)), 21/05/2012, 17:25. We detail recent changes to ROOT-based I/O within the ATLAS experiment. The ATLAS persistent event data model continues to make considerable use of a ROOT I/O backend through POOL persistency. ROOT is also used directly in later stages of analysis that make use of a flat-ntuple based "D3PD" data-type. For POOL/ROOT persistent data, several improvements have been made, including implementation...
-
Robert Gomez-Reino Garrido (CERN), 21/05/2012, 17:25. The Compact Muon Solenoid (CMS) is a CERN multi-purpose experiment that exploits the physics of the Large Hadron Collider (LHC). The Detector Control System (DCS) ensures a safe, correct and efficient experiment operation, contributing to the recording of high-quality physics data. The DCS is programmed to automatically react to the LHC changes. CMS sub-detector bias voltages are set...
-
Mario Lassnig (CERN), 21/05/2012, 17:50. Software Engineering, Data Stores and Databases (track 5), parallel. The Distributed Data Management system DQ2 is responsible for the global management of petabytes of ATLAS physics data. DQ2 has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle, as RDBMS are well suited to enforcing data integrity in online transaction processing applications. Despite these advantages, concerns have been raised recently about the scalability of...
-
Dmitry Ozerov (Deutsches Elektronen-Synchrotron (DE)), Martin Gasthuber (Deutsches Elektronen-Synchrotron (DE)), Patrick Fuhrmann (DESY), Yves Kemp (Deutsches Elektronen-Synchrotron (DE)), 21/05/2012, 17:50. Computer Facilities, Production Grids and Networking (track 4), parallel. We present results on different approaches to mounted filesystems in use or under investigation at DESY. dCache, long established as a storage system for physics data, has implemented the NFS v4.1/pNFS protocol. New performance results will be shown with the most current version of the dCache server. In addition to the native usage of the mounted filesystem in a LAN environment, the...
-
Peter Van Gemmeren (Argonne National Laboratory (US)), 21/05/2012, 17:50. A critical component of any multicore/manycore application architecture is the handling of input and output. Even in the simplest of models, design decisions interact both in obvious and in subtle ways with persistence strategies. When multiple workers handle I/O independently using distinct instances of a serial I/O framework, for example, it may happen that because of the way data from...
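One common alternative to independent per-worker I/O is to funnel all output through a single writer. The following is a minimal threaded sketch of that pattern only, with invented names; it is not the framework under discussion:

```python
import queue
import threading

out_q = queue.Queue()
written = []   # stand-in for a single serially-written output file

def writer():
    """The only thread allowed to 'write': consumes until a None sentinel."""
    while True:
        item = out_q.get()
        if item is None:
            break
        written.append(item)

def worker(event_ids):
    """Processes events independently, sends results to the writer."""
    for ev in event_ids:
        out_q.put(("processed", ev))

writer_thread = threading.Thread(target=writer)
writer_thread.start()

workers = [threading.Thread(target=worker, args=(range(i * 3, i * 3 + 3),))
           for i in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()

out_q.put(None)      # all workers done: stop the writer
writer_thread.join()
```

Serializing through one writer keeps the output in a single consistent stream at the cost of a potential bottleneck, exactly the kind of trade-off against independent per-worker files that the abstract raises.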
-
Luis Granado Cardoso (CERN), 21/05/2012, 17:50. Distributed Processing and Analysis on Grids and Clouds (track 3), parallel. LHCb is one of the four experiments at the LHC accelerator at CERN. LHCb has approximately 1600 eight-core PCs for processing the High Level Trigger (HLT) during physics data acquisition. During periods when data acquisition is not required, or the resources needed for data acquisition are reduced, such as accelerator Machine Development (MD) periods or technical shutdowns, most of these PCs are idle...
-
Mariusz Witek (Polish Academy of Sciences (PL)), 21/05/2012, 17:50. The LHCb experiment is a spectrometer dedicated to the study of heavy flavor at the LHC. The rate of proton-proton collisions at the LHC is 15 MHz, but disk space limitations mean that only 3 kHz can be written to tape for offline processing. For this reason the LHCb data acquisition system (the trigger) plays a key role in selecting signal events and rejecting background. In contrast to...
-
Markus Klute (Massachusetts Institute of Technology), 22/05/2012, 08:30
-
Ian Fisk (Fermi National Accelerator Lab. (US)), 22/05/2012, 09:00
-
Dr Oxana Smirnova (Lund University), 22/05/2012, 09:30
-
Sebastien Goasguen (Clemson University), 22/05/2012, 10:30. From Grid to Cloud: A Perspective.
-
Mr Jeff Hammerbacher (Cloudera), 22/05/2012, 11:00
-
Lennart Johnsson (Unknown), 22/05/2012, 11:30
-
Dr Torsten Antoni (KIT - Karlsruhe Institute of Technology (DE)), 22/05/2012, 13:30. After a long period of project-based funding, during which the improvement of the services provided to the user communities was the main focus, distributed computing infrastructures (DCIs), having reached and established production quality, now need to tackle the issue of long-term sustainability. With the transition from EGEE to EGI in 2010, the major part of the responsibility (especially...
-
Marco Clemencic (CERN), 22/05/2012, 13:30. Software Engineering, Data Stores and Databases (track 5), parallel. The LHCb experiment has been using the CMT build and configuration tool for its software since the first versions, mainly because of its multi-platform build support and its powerful configuration management functionality. Still, CMT has some limitations in terms of build performance and the increased complexity added to the tool to cope with new use cases added latterly. Therefore, we have...
-
Ramiro Voicu (California Institute of Technology (US)), 22/05/2012, 13:30. Current network technologies like dynamic network circuits, and emerging protocols like OpenFlow, enable the network to act as an active component in the context of data transfers. We present a framework which provides a simple interface for scientists to move data between sites over the wide area network with bandwidth guarantees. Although the system hides the complexity from the end users, it was...
-
Marco Bencivenni (INFN), 22/05/2012, 13:30. One of the main barriers against widespread Grid adoption in scientific communities stems from the intrinsic complexity of handling X.509 certificates, which represent the foundation of the Grid security stack. To hide this complexity, in recent years several Grid portals have been proposed which, however, do not completely solve the problem, either requiring that users manage their own...
-
Daniele Spiga (CERN), Hassen Riahi (Universita e INFN (IT)), Mattia Cinquilli (Univ. of California San Diego (US)), 22/05/2012, 13:30. The CMS distributed data analysis workflow assumes that jobs run in a different location from where their results are finally stored. Typically the user output must be transferred across the network from one site to another, possibly on a different continent or over links not necessarily validated for high-bandwidth, high-reliability transfer. This step is named stage-out, and in CMS it was...
-
Andrea Cristofori (INFN-CNAF, IGI), 22/05/2012, 13:30. The accounting activity in a production computing Grid is of paramount importance in order to understand the utilization of the available resources. While several CPU accounting systems are deployed within the European Grid Infrastructure (EGI), storage accounting systems that are stable enough to be adopted in a production environment are not yet available. A growing interest is being...
-
Costin Grigoras (CERN), 22/05/2012, 13:30. Since the ALICE experiment began data taking in late 2009, the number of end-user jobs on the AliEn Grid has increased significantly. Presently one third of the 30K CPU cores available to ALICE are occupied by jobs submitted by about 400 distinct users. The overall stability of the AliEn middleware has been excellent throughout the two years of running, but the massive amount of end-user analysis and...
-
Rapolas Kaselis (Vilnius University (LT)), 22/05/2012, 13:30. The goal for CMS computing is to maximise the throughput of simulated event generation while also processing real data events as quickly and reliably as possible. To maintain this achievement as the quantity of events increases, since the beginning of 2011 CMS computing has migrated at the Tier 1 level from its old production framework, ProdAgent, to a new one, WMAgent. The WMAgent...
-
Dr Alex Martin (Queen Mary, University of London), Christopher John Walker (University of London (GB)), 22/05/2012, 13:30. We describe a low-cost petabyte-scale Lustre filesystem deployed for High Energy Physics. The use of commodity storage arrays and bonded Ethernet interconnects makes the array cost-effective, whilst providing high bandwidth to the storage. The filesystem is a POSIX filesystem, presented to the Grid using the StoRM SRM. The system is highly modular. The building blocks of the array, the...
-
Sergey Panitkin (Brookhaven National Laboratory (US)), 22/05/2012, 13:30. In the past two years the ATLAS Collaboration at the LHC has collected a large volume of data and published a number of ground-breaking papers. The Grid-based ATLAS distributed computing infrastructure played a crucial role in enabling timely analysis of the data. We will present a study of the performance and usage of the ATLAS Grid as a platform for physics analysis and discuss changes that...
-
Ms Qiulan Huang (Institute of High Energy Physics, Beijing), 22/05/2012, 13:30. In today's information industry, the new technologies most talked about are virtualization and cloud computing. Virtualization makes heterogeneous resources transparent to users and plays a major role in large-scale data center management solutions. Cloud computing, which builds on virtualization, has emerged as a revolution in computing, demonstrating a great advantage in resource...
-
Alexey Anisenkov (Budker Institute of Nuclear Physics (RU)), 22/05/2012, 13:30. The ATLAS Grid Information System (AGIS) centrally stores and exposes static, dynamic and configuration parameters required to configure and to operate ATLAS distributed computing systems and services. AGIS is designed to integrate information about resources, services and topology of the ATLAS grid infrastructure from various independent sources, including BDII, GOCDB, the ATLAS data...
-
Zdenek Maxa (California Institute of Technology (US)), 22/05/2012, 13:30. WMAgent is the core component of the CMS workload management system. One of the features of this job managing platform is a configurable messaging system aimed at generating, distributing and processing alerts: short messages describing a given alert-worthy informational or pathological condition. Apart from the framework's sub-components running within the WMAgent instances, there is a...
-
Dr Christopher Jung (KIT - Karlsruhe Institute of Technology (DE)), 22/05/2012, 13:30. The GridKa center at the Karlsruhe Institute of Technology is the largest ALICE Tier-1 center. It hosts 40,000 HEP-SPEC06, approximately 2.75 PB of disk space and 5.25 PB of tape space for A Large Ion Collider Experiment (ALICE) at the CERN LHC. These resources are accessed via the AliEn middleware. The storage is divided into two instances, both using the storage middleware xrootd. We...
-
Pablo Saiz (CERN), 22/05/2012, 13:30. The AliEn workload management system is based on a central job queue which holds all tasks that have to be executed. The job brokering model itself is based on pilot jobs: the system submits generic pilots to the computing centres' batch gateways, and the assignment of a real job is done only when the pilot wakes up on the worker node. The model facilitates a flexible fair-share user job...
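The pilot model described can be reduced to a small sketch: a generic pilot lands on a worker node and only then asks the central queue for a job the node can actually run. Names and resource fields below are invented for illustration; this is not the AliEn protocol:

```python
# Central task queue: jobs with their resource requirements (illustrative).
central_queue = [
    {"id": 1, "needs": {"memory_gb": 2}},
    {"id": 2, "needs": {"memory_gb": 16}},
]

def pilot_wakeup(node):
    """Called by a pilot once it is running on a worker node: the first
    job whose requirements the node satisfies is pulled from the queue."""
    for i, job in enumerate(central_queue):
        if node["memory_gb"] >= job["needs"]["memory_gb"]:
            return central_queue.pop(i)
    return None   # nothing this node can run; the pilot simply exits

job_a = pilot_wakeup({"memory_gb": 4})    # small node: matches job 1
job_b = pilot_wakeup({"memory_gb": 4})    # job 2 needs more memory
```

Deferring the match until the pilot is already running means the broker sees the node's real, current resources instead of a stale batch-system description, which is what makes the model flexible.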
-
Dr Dagmar Adamova (Nuclear Physics Institute of the AS CR Prague/Rez), Mr Jiri Horky (Institute of Physics of the AS CR Prague), 22/05/2012, 13:30. ALICE, as well as the other experiments at the CERN LHC, has been building a distributed data management infrastructure since 2002. Experience gained during years of operations with different types of storage managers deployed over this infrastructure has shown that the most adequate storage solution for ALICE is the native XRootD manager developed within a CERN - SLAC collaboration. The...
-
Wahid Bhimji (University of Edinburgh (GB))22/05/2012, 13:30Computer Facilities, Production Grids and Networking (track 4)ParallelWe describe recent I/O testing frameworks that we have developed and applied within the UK GridPP Collaboration, the ATLAS experiment and the DPM team, for a variety of distinct purposes. These include benchmarking vendor supplied storage products, discovering scaling limits of SRM solutions, tuning of storage systems for experiment data analysis, evaluating file access protocols, and...Go to contribution page
-
Laura Tosoratto (INFN)22/05/2012, 13:30The emergence of hybrid GPU-accelerated clusters in the supercomputing landscape is now an established fact. In this framework we proposed a new INFN initiative, the QUonG project, aiming to deploy a high performance computing system dedicated to scientific computations, leveraging commodity multi-core processors coupled with last-generation GPUs. The multi-node interconnection system is based on...Go to contribution page
-
Mr Martin Gasthuber (Deutsches Elektronen-Synchrotron (DE))22/05/2012, 13:30DESY has started to deploy modern, state-of-the-art, industry-based scale-out file services, together with certain extensions, as a key component in dedicated LHC analysis environments like the National Analysis Facility (NAF) @DESY. In a technical cooperation with IBM, we will add identified critical features to the standard SONAS product line of IBM to make the system best suited for the...Go to contribution page
-
Sergey Kalinin (Bergische Universitaet Wuppertal (DE))22/05/2012, 13:30The Job Execution Monitor (JEM), a job-centric grid job monitoring software, is actively developed at the University of Wuppertal. It leverages Grid-based physics analysis and Monte Carlo event production for the ATLAS experiment by monitoring job progress and grid worker node health. Using message passing techniques, the gathered data can be supervised in real time by users, site admins and...Go to contribution page
-
Luisa Arrabito (IN2P3/LUPM on behalf of the CTA Consortium)22/05/2012, 13:30The Cherenkov Telescope Array (CTA) – an array of many tens of Imaging Atmospheric Cherenkov Telescopes deployed on an unprecedented scale – is the next generation instrument in the field of very high energy gamma-ray astronomy. CTA will operate as an open observatory providing data products to the scientific community. An average data stream of some GB/s for about 1000 hours of observation...Go to contribution page
-
Mikhail Titov (University of Texas at Arlington (US))22/05/2012, 13:30Efficient distribution of physics data over ATLAS grid sites is one of the most important tasks for user data processing. ATLAS' initial static data distribution model over-replicated some unpopular data and under-replicated popular data, creating heavy disk space loads while under-utilizing some processing resources due to low data availability. Thus, a new data distribution mechanism was...Go to contribution page
-
Jaroslava Schovancova (Acad. of Sciences of the Czech Rep. (CZ))22/05/2012, 13:30This talk details the variety of monitoring tools used within ATLAS Distributed Computing during the first 2 years of LHC data taking. We discuss tools used to monitor data processing from the very first steps performed at the Tier-0 facility at CERN after data is read out of the ATLAS detector, through data transfers to the ATLAS computing centers distributed world-wide. We present an...Go to contribution page
-
Graeme Andrew Stewart (CERN), Dr Stephane Jezequel (LAPP)22/05/2012, 13:30This paper will summarize operational experience and improvements in ATLAS computing infrastructure during 2010 and 2011. ATLAS has had 2 periods of data taking, with many more events recorded in 2011 than in 2010. It ran 3 major reprocessing campaigns. The activity in 2011 was similar to that in 2010, but scalability issues had to be addressed due to the increase in luminosity and trigger...Go to contribution page
-
Jaroslava Schovancova (Acad. of Sciences of the Czech Rep. (CZ))22/05/2012, 13:30ATLAS Distributed Computing organized 3 teams to support data processing at the Tier-0 facility at CERN, data reprocessing, data management operations, Monte Carlo simulation production, and physics analysis at the ATLAS computing centers located world-wide. In this talk we describe how these teams ensure that the ATLAS experiment data is delivered to the ATLAS physicists in a timely manner in the...Go to contribution page
-
Danila Oleynik (Joint Inst. for Nuclear Research (RU))22/05/2012, 13:30The ATLAS Distributed Data Management project DQ2 is responsible for the replication, access and bookkeeping of ATLAS data across more than 100 distributed grid sites. It also enforces data management policies decided on by the collaboration and defined in the ATLAS computing model. The DQ2 deletion service is one of the most important DDM services. This distributed service interacts with 3rd...Go to contribution page
-
Pavel Nevski (Brookhaven National Laboratory (US))22/05/2012, 13:30The production system for Grid Data Processing (GDP) handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system empowers further data processing steps on the Grid performed by dozens of ATLAS physics groups with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge...Go to contribution page
-
Laura Sargsyan (A.I. Alikhanyan National Scientific Laboratory (AM))22/05/2012, 13:30Monitoring of the large-scale data processing of the ATLAS experiment includes monitoring of production and user analysis jobs. Experiment Dashboard provides a common job monitoring solution, which is shared by ATLAS and CMS experiments. This includes an accounting portal as well as real-time monitoring. Dashboard job monitoring for ATLAS combines information from the Panda job processing...Go to contribution page
-
Danila Oleynik (Joint Inst. for Nuclear Research (RU))22/05/2012, 13:30The ATLAS Distributed Computing activities have so far concentrated in the "central" part of the experiment computing system, namely the first 3 tiers (the CERN Tier0, 10 Tier1 centers and over 60 Tier2 sites). Many ATLAS Institutes and National Communities have deployed (or intend to deploy) Tier-3 facilities. Tier-3 centers consist of non-pledged resources, which are usually dedicated to...Go to contribution page
-
Collaboration Atlas (Atlas)22/05/2012, 13:30The ATLAS Distributed Computing (ADC) project delivers production quality tools and services for ATLAS offline activities such as data placement and data processing on the Grid. The system has been capable of sustaining the needed computing activities with large contingency in the first years of LHC data taking, and has demonstrated flexibility in reacting promptly to new challenges....Go to contribution page
-
Mr Erekle Magradze (Georg-August-Universitaet Goettingen (DE))22/05/2012, 13:30The automation of operations is essential to reduce manpower costs and improve the reliability of the system. The Site Status Board (SSB) is a framework which allows Virtual Organizations to monitor their computing activities at distributed sites and to evaluate site performance. The ATLAS experiment intensively uses SSB for the distributed computing shifts, for estimating data processing and...Go to contribution page
-
Mr James Pryor (Brookhaven National Laboratory)22/05/2012, 13:30Cobbler is a network-based Linux installation server, which, via a choice of web or CLI tools, glues together PXE/DHCP/TFTP and automates many associated deployment tasks. It empowers a facility's systems administrators to write scriptable and modular code, which can pilot the OS installation routine to proceed unattended and automatically, even across heterogeneous hardware. These tools make...Go to contribution page
-
Dr Jose Caballero Bejar (Brookhaven National Laboratory (US))22/05/2012, 13:30The ATLAS experiment at the CERN LHC is one of the largest users of grid computing infrastructure, which is a central part of the experiment's computing operations. Considerable efforts have been made to use grid technology in the most efficient and effective way, including the use of a pilot job based workload management framework. In this model the experiment submits 'pilot' jobs to sites...Go to contribution page
-
Dr Xiaomei Zhang (IHEP, China)22/05/2012, 13:30A job submission and management tool is one of the necessary components in any distributed computing system. Such a tool should provide a user-friendly interface for physics production group and ordinary analysis users to access heterogeneous computing resources, without requiring knowledge of the underlying grid middleware. Ganga, with its common framework and customizable plug-in structure,...Go to contribution page
-
Paul Rossman (Fermi National Accelerator Laboratory (FNAL))22/05/2012, 13:30In addition to the physics data generated each day from the CMS detector, the experiment also generates vast quantities of supplementary log data. From reprocessing logs to transfer logs, this data could shed light on operational issues and assist with reducing inefficiencies and eliminating errors if properly stored, aggregated and analyzed. The term "big data" has recently taken the spotlight...Go to contribution page
-
Alvaro Gonzalez Alvarez (CERN)22/05/2012, 13:30For the past couple of years, a team at CERN and partners from the Citizen Cyberscience Centre (CCC) have been working on a project that enables general physics simulation programs to run in a virtual machine on volunteer PCs around the world. The project uses the Berkeley Open Infrastructure for Network Computing (BOINC) framework. Based on CERNVM and the job management framework Co-Pilot, this...Go to contribution page
-
Dr David Crooks (University of Glasgow/GridPP)22/05/2012, 13:30This presentation will cover the work conducted within the ScotGrid Glasgow Tier-2 site. It will focus on the multi-tiered network security architecture developed on the site to augment Grid site server security and will discuss the variety of techniques used including the utilisation of Intrusion Detection systems, logging and optimising network connectivity within the...Go to contribution page
-
Martin Sevior (University of Melbourne (AU))22/05/2012, 13:30The experimental high energy physics group at the University of Melbourne is a member of the ATLAS, Belle and Belle II collaborations. We maintain a local data centre which enables users to test pre-production code and to do final stage data analysis. Recently the Australian National eResearch Collaboration Tools and Resources (NeCTAR) organisation implemented a Research Cloud based on...Go to contribution page
-
Giacinto Donvito (Universita e INFN (IT))22/05/2012, 13:30A Consortium between four LHC Computing Centers (Bari, Milano, Pisa and Trieste) was formed in 2010 to prototype analysis-oriented facilities for CMS data analysis, using a grant from the Italian Ministry of Research. The Consortium aims at the realization of an ad-hoc infrastructure to ease the analysis activities on the huge data set collected by the CMS Experiment at the LHC...Go to contribution page
-
Derek John Weitzel (University of Nebraska (US))22/05/2012, 13:30It is common at research institutions to maintain multiple clusters that represent different owners or generations of hardware, or that fulfill different needs and policies. Many of these clusters are consistently underutilized while researchers on campus could greatly benefit from these unused capabilities. By leveraging principles from the Open Science Grid it is now possible to utilize...Go to contribution page
-
Georgiana Lavinia Darlea (Polytechnic University of Bucharest (RO))22/05/2012, 13:30In the ATLAS Online computing farm, the majority of the systems are network booted - they run an operating system image provided via network by a Local File Server. This method guarantees the uniformity of the farm and allows very fast recovery in case of issues to the local scratch disks. The farm is not homogeneous and in order to manage the diversity of roles, functionality and hardware of...Go to contribution page
-
Artem Harutyunyan (CERN)22/05/2012, 13:30Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelCernVM Co-Pilot is a framework for instantiating an ad-hoc computing infrastructure on top of distributed computing resources. Such resources include commercial computing clouds (e.g. Amazon EC2), scientific computing clouds (e.g. CERN lxcloud), as well as the machines of users participating in volunteer computing projects (e.g. BOINC). The framework consists of components that communicate...Go to contribution page
-
Mr Steffen Schreiner (CERN, CASED/TU Darmstadt)22/05/2012, 13:30Grid computing infrastructures need to provide traceability and accounting of their users’ activity and protection against misuse and privilege escalation, where the delegation of privileges in the course of a job submission is a key concern. This work describes an improved handling of multi-user Grid jobs in the ALICE Grid Services. A security analysis of the ALICE Grid job model is...Go to contribution page
-
Neng Xu (University of Wisconsin (US))22/05/2012, 13:30With the start-up of the LHC in 2009, more and more data analysis facilities have been built or enlarged at Universities and laboratories. In the meantime, new technologies, like Cloud computing and Web3D, and new types of hardware, like smartphones and tablets, have become available and popular in the market. Is there a way to integrate them into the existing data analysis models and allow...Go to contribution page
-
Prof. Sudhir Malik (University of Nebraska-Lincoln)22/05/2012, 13:30The CMS Analysis Tools model has now been used robustly in a plethora of physics papers. This model is examined to investigate successes and failures as seen by the analysts of recent papers.Go to contribution page
-
Kenneth Bloom (University of Nebraska (US))22/05/2012, 13:30After years of development, the CMS distributed computing system is now in full operation. The LHC continues to set records for instantaneous luminosity, and CMS records data at 300 Hz. Because of the intensity of the beams, there are multiple proton-proton interactions per beam crossing, leading to larger and larger event sizes and processing times. The CMS computing system has responded...Go to contribution page
-
Pablo Saiz (CERN)22/05/2012, 13:30Collaborative development proved to be a key to the success of the Dashboard Site Status Board (SSB), which is heavily used by ATLAS and CMS for the computing shifts and site commissioning activities. The SSB is an application that enables Virtual Organisation (VO) administrators to monitor the status of distributed sites. The selection, significance and...Go to contribution page
-
Boris Wagner (University of Bergen (NO))22/05/2012, 13:30The Nordic Tier-1 for LHC is distributed over several, sometimes smaller, computing centers. In order to minimize administration effort, we are interested in running different grid jobs over one common grid middleware. ARC is selected as the internal middleware in the Nordic Tier-1. At the moment ARC has no mechanism of automatic software packaging and deployment. The AliEn grid middleware,...Go to contribution page
-
Niko Neufeld (CERN), Vijay Kartik Subbiah (CERN)22/05/2012, 13:30This paper describes the investigative study undertaken to evaluate shared filesystem performance and suitability in the LHCb Online environment. Particular focus is given to the measurements and field tests designed and performed on an in-house AFS setup, and related comparisons with NFSv3 and pNFS are presented. The motivation for the investigation and the test setup arises from the need to...Go to contribution page
-
Andreas Heiss (KIT - Karlsruhe Institute of Technology (DE))22/05/2012, 13:30GridKa, operated by the Steinbuch Centre for Computing at KIT, is the German regional centre for high energy and astroparticle physics computing, currently supporting 10 experiments and serving as a Tier-1 centre for the four LHC experiments. Since the beginning of the project in 2002, the total compute power is upgraded at least once per year to follow the increasing demands of the...Go to contribution page
-
Robert Snihur (University of Nebraska (US))22/05/2012, 13:30There are approximately 60 Tier-3 computing sites located on campuses of collaborating institutions in CMS. We describe the function and architecture of these sites, and illustrate the range of hardware and software options. A primary purpose is to provide a platform for local users to analyze LHC data, but they are also used opportunistically for data production. While Tier-3 sites vary...Go to contribution page
-
Anar Manafov (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))22/05/2012, 13:30Constant changes in computational infrastructure, like the current interest in Clouds, impose conditions on the design of applications. We must make sure that our analysis infrastructure, including source code and supporting tools, is ready for the on-demand computing (ODC) era. This presentation is about a new analysis concept, which is driven by users' needs and completely disentangled from...Go to contribution page
-
Dimitri Nilsen (Karlsruhe Institute of Technology (KIT)), Dr Pavel Weber (Karlsruhe Institute of Technology (KIT))22/05/2012, 13:30GridKa is a computing centre located in Karlsruhe. It serves as Tier-1 centre for the four LHC experiments and also provides its computing and storage resources for other non-LHC HEP and astroparticle physics experiments as well as for several communities of the German Grid Initiative D-Grid. The middleware layer at GridKa comprises three main flavours: Globus, gLite and UNICORE. This...Go to contribution page
-
Elisa Lanciotti (CERN)22/05/2012, 13:30In the distributed computing model of the WLCG, Grid Storage Elements (SEs) are by construction completely decoupled from the File Catalogs (FCs) where the experiment's files are registered. On the basis of the experience of managing large volumes of data in such an environment, inconsistencies have often occurred, either causing a waste of disk space in case the data were deleted from the FC but still...Go to contribution page
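Because the SE and the FC are decoupled, the inconsistencies this abstract refers to are in essence set differences between what the catalog claims and what the storage actually holds. A minimal sketch of such a check, with hypothetical data structures (real tools compare full storage dumps with catalog exports, including size and checksum):

```python
# Illustrative SE/FC consistency check as a set comparison.
# Real implementations work on full storage dumps and catalog exports.

def find_inconsistencies(catalog_entries, storage_dump):
    catalog = set(catalog_entries)   # files registered in the FC
    storage = set(storage_dump)      # files physically present on the SE
    dark_data = storage - catalog    # on disk but unregistered: wasted space
    lost_files = catalog - storage   # registered but missing on disk
    return dark_data, lost_files


dark, lost = find_inconsistencies(
    ["/grid/data/run1.raw", "/grid/data/run2.raw"],   # FC view
    ["/grid/data/run2.raw", "/grid/data/orphan.raw"], # SE view
)
print(sorted(dark))  # ['/grid/data/orphan.raw']
print(sorted(lost))  # ['/grid/data/run1.raw']
```

The two difference sets correspond to the two failure modes named in the abstract: "dark data" wastes disk space, while lost files make catalog entries unusable.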
-
Mr Igor Sfiligoi (University of California San Diego)22/05/2012, 13:30The CMS analysis computing model has always relied on jobs running near the data, with data allocation between CMS compute centers organized at the management level, based on expected needs of the CMS community. While this model provided high CPU utilization during job run times, there were times when a large fraction of CPUs at certain sites were sitting idle due to lack of demand, all while...Go to contribution page
-
Daniele Spiga (CERN)22/05/2012, 13:30In CMS Computing the highest priorities for analysis tools are the improvement of the end users' ability to produce and publish reliable samples and analysis results as well as a transition to a sustainable development and operations model. To achieve these goals CMS decided to incorporate analysis processing into the same framework as the data and simulation processing. This strategy foresees...Go to contribution page
-
Mr Massimo Sgaravatto (Universita e INFN (IT))22/05/2012, 13:30The European Middleware Initiative (EMI) project aims to deliver a consolidated set of middleware products based on the four major middleware providers in Europe - ARC, dCache, gLite and UNICORE. The CREAM (Computing Resource Execution And Management) Service, a service for job management operations at the Computing Element (CE) level, is one of the software products that are part of the EMI...Go to contribution page
-
Marco Caberletti (Istituto Nazionale Fisica Nucleare (IT))22/05/2012, 13:30The extensive use of virtualization technologies in cloud environments has created the need for a new network access layer residing on hosts and connecting the various Virtual Machines (VMs). In fact, massive deployment of virtualized environments imposes requirements on networking for which traditional models are not well suited. For example, hundreds of users issuing cloud requests for which...Go to contribution page
-
Dr Ivan Logashenko (Budker Institute Of Nuclear Physics)22/05/2012, 13:30Super Charm–Tau Factory (CTF) is a future electron-positron collider with a center-of-mass energy range from 2 to 5 GeV and a peak luminosity, unprecedented for this energy range, of about 10^35 cm^-2 s^-1. The project of CTF is being developed in the Budker Institute of Nuclear Physics (Novosibirsk, Russia). The main goal of experiments at Super Charm-Tau Factory is a study of the processes with...Go to contribution page
-
Natalia Ratnikova (KIT - Karlsruhe Institute of Technology (DE))22/05/2012, 13:30All major experiments at Large Hadron Collider (LHC) need to measure real storage usage at the Grid sites. This information is equally important for the resource management, planning, and operations. To verify consistency of the central catalogs, experiments are asking sites to provide full list of files they have on storage, including size, checksum, and other file attributes. Such...Go to contribution page
-
Mr Haifeng Pi (CMS)22/05/2012, 13:30As part of the Advanced Networking Initiative (ANI) of ESnet, we exercise a prototype 100Gb network infrastructure for data transfer and processing for OSG HEP applications. We present results of these tests.Go to contribution page
-
Mr Andreas Petzold (KIT)22/05/2012, 13:30In 2012 the GridKa Tier-1 computing center hosts 130 kHEPSPEC06 of computing resources, 11 PB of disk and 17.7 PB of tape space. These resources are shared between the four LHC VOs and a number of national and international VOs from high energy physics and other sciences. CernVM-FS has been deployed at GridKa to supplement the existing NFS-based system to access VO software on the worker nodes. It...Go to contribution page
-
Dr Vincenzo Capone (Universita e INFN (IT))22/05/2012, 13:30Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of...Go to contribution page
-
Maxim Potekhin (Brookhaven National Laboratory (US))22/05/2012, 13:30For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the...Go to contribution page
-
Dr Giacinto Donvito (INFN-Bari)22/05/2012, 13:30The SuperB asymmetric energy e+e- collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab-1 and a luminosity target of 10^36 cm-2 s-1. In this work we will present our...Go to contribution page
-
Dr Andrei Tsaregorodtsev (Universite d'Aix - Marseille II (FR))22/05/2012, 13:30File replica and metadata catalogs are essential parts of any distributed data management system, largely determining its functionality and performance. A new DIRAC File Catalog (DFC) was developed in the framework of the DIRAC Project that combines both replica and metadata catalog functionality. The DFC design is based on the practical experience with the data management system of the...Go to contribution page
-
Adrian Casajus Ramo (University of Barcelona (ES))22/05/2012, 13:30The DIRAC framework for distributed computing has been designed as a flexible and modular solution that can be adapted to the requirements of any community. Users interact with DIRAC via command line, using the web portal or accessing resources via the DIRAC python API. The current DIRAC API requires users to use a python version valid for DIRAC. Some communities have developed their own...Go to contribution page
-
Artur Jerzy Barczyk (California Institute of Technology (US)), Ian Gable (University of Victoria (CA))22/05/2012, 13:30For the Super Computing 2011 conference in Seattle, Washington, a 100 Gb/s connection was established between the California Institute of Technology conference booth and the University of Victoria. A small team performed disk to disk data transfers between the two sites nearing 100 Gb/s, using only a small set of properly configured transfer servers equipped with SSD drives. The circuit...Go to contribution page
-
Johannes Elmsheuser (Ludwig-Maximilians-Univ. Muenchen (DE))22/05/2012, 13:30The ATLAS experiment at the LHC at CERN is recording and simulating several tens of petabytes of data per year. To analyse these data the ATLAS experiment has developed and operates a mature and stable distributed analysis (DA) service on the Worldwide LHC Computing Grid. The service is actively used: more than 1400 users have submitted jobs in the year 2011 and a total of more than 1 million...Go to contribution page
-
Wojciech Lapka (CERN)22/05/2012, 13:30The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO...Go to contribution page
-
Ricardo Brito Da Rocha (CERN)22/05/2012, 13:30The Disk Pool Manager (DPM) is a lightweight solution for grid enabled disk storage management. Operated at more than 240 sites it has the widest distribution of all grid storage solutions in the WLCG infrastructure. It provides an easy way to manage and configure disk pools, and exposes multiple interfaces for data access (rfio, xroot, nfs, gridftp and http/dav) and control (srm). During...Go to contribution page
-
Fabrizio Furano (CERN)22/05/2012, 13:30A number of storage elements now offer standard protocol interfaces like NFS 4.1/pNFS and WebDAV, for access to their data repositories, in line with the standardization effort of the European Middleware Initiative (EMI). Here we report on work which seeks to exploit the federation potential of these protocols and build a system which offers a unique view of the storage ensemble and the...Go to contribution page
-
Cinzia Luzzi (CERN - University of Ferrara)22/05/2012, 13:30The ALICE collaboration has developed a production environment (AliEn) that implements several components of the Grid paradigm needed to simulate, reconstruct and analyze data in a distributed way. In addition to the Grid-like analysis, ALICE, as many experiments, provides a local interactive analysis using the Parallel ROOT Facility (PROOF). PROOF is part of the ROOT analysis framework...Go to contribution page
-
Mr Maxim Grigoriev (Fermilab)22/05/2012, 13:30The LHC computing model relies on intensive network data transfers. The E-Center is a social, collaborative, web-based platform for Wide Area Network users. It is designed to give users all the required tools to isolate, identify and resolve any network performance related problem.Go to contribution page
-
Cyril L'Orphelin (CNRS/IN2P3), Daniel Kouril (Unknown), Dr Mingchao Ma (STFC - Rutherford Appleton Laboratory)22/05/2012, 13:30The Operations Portal is a central service used to support operations in the European Grid Infrastructure: a collaboration of National Grid Initiatives (NGIs) and several European International Research Organizations (EIROs). The EGI Operations Portal provides a single access point to operational information gathered from various sources such as the site topology database, monitoring...Go to contribution page
-
Emidlo Giorgio (Istituto Nazionale Fisica Nucleare (IT)), Giuseppina Salente (INFN)22/05/2012, 13:30The EMI project intends to receive or rent an exhibition spot near the main and visible areas of the event (such as coffee-break areas), to exhibit the project's goals and the latest achievements, such as the EMI1 release. The means used will be posters, video and distribution of flyers, sheets or brochures. It would be useful to have a 2x3 booth with panels available to post on posters, and...Go to contribution page
-
Jon Kerr Nilsen (University of Oslo (NO))22/05/2012, 13:30To manage data in the grid, with its jungle of protocols and enormous amounts of data in different storage solutions, it is important to have a strong, versatile and reliable data management library. While there are several data management tools and libraries available, they all have different strengths and weaknesses, and it can be hard to decide which tool to use for which purpose. EMI is...Go to contribution page
-
Elisabetta Vilucchi (Istituto Nazionale Fisica Nucleare (IT)), Roberto Di Nardo (Istituto Nazionale Fisica Nucleare (IT))22/05/2012, 13:30In the ATLAS computing model, Tier2 resources are intended for MC production and end-user analysis activities. These resources are usually exploited via the standard GRID resource management tools, which are de facto a high level interface to the underlying batch systems managing the contributing clusters. While this is working as expected, there are use-cases where a more dynamic usage of...Go to contribution page
-
Mr Mark Mitchell (University of Glasgow)22/05/2012, 13:30Due to the changes occurring within the IPv4 address space, the utilisation of IPv6 within Grid technologies and other IT infrastructure is becoming a more pressing need for IP addressing. The employment and deployment of this addressing scheme has been discussed widely, both at the academic and commercial level, for several years. The uptake is not as advanced as was predicted and the...Go to contribution page
-
Ms Silvia Amerio (University of Padova & INFN)22/05/2012, 13:30The CDF experiment at Fermilab ended its Run-II phase in September 2011 after 11 years of operations and 10 fb-1 of collected data. The CDF computing model is based on a Central Analysis Farm (CAF) consisting of local computing and storage resources, supported by OSG and LCG resources accessed through dedicated portals. Recently a new portal, Eurogrid, has been developed to effectively...Go to contribution page
-
David Cameron (University of Oslo (NO))22/05/2012, 13:30Staging data to and from remote storage services on the Grid for users' jobs is a vital component of the ARC computing element. A new data staging framework for the computing element has recently been developed to address issues with the present framework, which has essentially remained unchanged since its original implementation 10 years ago. This new framework consists of an intelligent...Go to contribution page
-
Dr Andreas Peters (CERN)22/05/2012, 13:30EOS is a new disk based storage system used in production at CERN since autumn 2011. It is implemented using the plug-in architecture of the XRootD software framework and allows remote file access via XRootD protocol or POSIX-like file access via FUSE mounting. EOS was designed to fulfill specific requirements of disk storage scalability and IO scheduling performance for LHC analysis use...Go to contribution page
-
Tadashi Maeno (Brookhaven National Laboratory (US))22/05/2012, 13:30The PanDA Production and Distributed Analysis System plays a key role in the ATLAS distributed computing infrastructure. PanDA is the ATLAS workload management system for processing all Monte-Carlo simulation and data reprocessing jobs in addition to user and group analysis jobs. The system processes more than 5 million jobs in total per week, and more than 1400 users have submitted analysis...Go to contribution page
-
Claudio Grandi (INFN - Bologna)22/05/2012, 13:30The Computing Model of the CMS experiment was prepared in 2005 and described in detail in the CMS Computing Technical Design Report. With the experience of the first years of LHC data taking and with the evolution of the available technologies, the CMS Collaboration identified areas where improvements were desirable. In this work we describe the most important modifications that have been, or...Go to contribution page
-
Alexey Anisenkov (Budker Institute of Nuclear Physics (RU))22/05/2012, 13:30Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of Russian Academy of Sciences including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational...Go to contribution page
-
Simone Campana (CERN)22/05/2012, 13:30The ATLAS computing infrastructure was designed many years ago based on the assumption of rather limited network connectivity between computing centers. ATLAS sites have been organized in a hierarchical model, where only a static subset of all possible network links can be exploited and a static subset of well connected sites (CERN and the T1s) can cover important functional roles such as...Go to contribution page
-
Adrian Casajus Ramo (University of Barcelona (ES))22/05/2012, 13:30The DIRAC framework for distributed computing has been designed as a group of collaborating components, agents and servers, with a persistent database back-end. Components communicate with each other using DISET, an in-house protocol that provides Remote Procedure Call (RPC) and file transfer capabilities. This approach has provided DIRAC with a modular and stable design by enforcing stable...Go to contribution page
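The client/server RPC pattern that the abstract attributes to DISET can be sketched with standard-library stand-ins. This is purely illustrative: DISET is DIRAC's in-house protocol, and the XML-RPC modules below merely play the same structural role (a server exposing procedures, a client invoking them remotely).

```python
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

# A server component registers a procedure under a name...
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]  # ephemeral port chosen by the OS
threading.Thread(target=server.serve_forever, daemon=True).start()

# ...and a client component calls it as if it were local.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)  # remote procedure call over the wire
server.shutdown()
```

DISET additionally provides authentication and file transfer on the same channel, which this minimal stdlib analogy does not attempt to model.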
-
Dr Ziyan Deng (Institute of High Energy Physics, Beijing, China)22/05/2012, 13:30The BES III detector is a new spectrometer which operates at the upgraded high-luminosity collider, the Beijing Electron-Positron Collider (BEPCII). The BES III experiment studies physics in the tau-charm energy region from 2 GeV to 4.6 GeV. Since spring 2009, BEPCII has produced large-scale data samples. All the data samples were processed successfully and many important physics results have...Go to contribution page
-
Rodney Walker (Ludwig-Maximilians-Univ. Muenchen (DE))22/05/2012, 13:30Chirp is a distributed file system specifically designed for the wide area network, and developed by the University of Notre Dame CCL group. We describe the design features making it particularly suited to the Grid environment, and to ATLAS use cases. The deployment and usage within ATLAS distributed computing are discussed, together with scaling tests and evaluation for the various use cases.Go to contribution page
-
Diego Casadei (New York University (US))22/05/2012, 13:30After about two years of data taking with the ATLAS detector, considerable experience with the custom-developed trigger monitoring and reprocessing infrastructure has been collected. The trigger monitoring can be roughly divided into online and offline monitoring. The online monitoring calculates and displays all rates at every level of the trigger and evaluates up to 3000 data quality...Go to contribution page
-
Pablo Saiz (CERN)22/05/2012, 13:30The Experiment Dashboard system provides common solutions for monitoring job processing, data transfers and site/service usability. Over the last seven years, it proved to play a crucial role in the monitoring of the LHC computing activities, distributed sites and services. It has been one of the key elements during the commissioning of the distributed computing systems of the LHC...Go to contribution page
-
Steven Timm (Fermilab)22/05/2012, 13:30FermiCloud is an Infrastructure-as-a-Service facility deployed at Fermilab based on OpenNebula that has been in production for more than a year. FermiCloud supports a variety of production services on virtual machines as well as hosting virtual machines that are used as development and integration platforms. This infrastructure has also been used as a testbed for commodity storage...Go to contribution page
-
Steven Timm (Fermilab)22/05/2012, 13:30FermiGrid is the facility that provides the Fermilab Campus Grid with unified job submission, authentication, authorization and other ancillary services for the Fermilab scientific computing stakeholders. We have completed a program of work to make these services resilient to high authorization request rates, as well as failures of building or network infrastructure. We will present...Go to contribution page
-
Dr Don Holmgren (Fermilab)22/05/2012, 13:30As part of the DOE LQCD-ext project, Fermilab designs, deploys, and operates dedicated high performance clusters for parallel lattice QCD (LQCD) computations. Multicore processors benefit LQCD simulations and have contributed to the steady decrease in price/performance for these calculations over the last decade. We currently operate two large conventional clusters, the older with over 6,800...Go to contribution page
-
Caitriana Nicholson (Graduate University of the Chinese Academy of Sciences)22/05/2012, 13:30The BES III experiment at the Institute of High Energy Physics (IHEP), Beijing, uses the high-luminosity BEPC II e+e- collider to study physics in the τ-charm energy region around 3.7 GeV; BEPC II has produced the world’s largest samples of J/ψ and ψ’ events to date. An order of magnitude increase in the data sample size over the 2011-2012 data-taking period demanded a move from a very...Go to contribution page
-
Dr Oleg Lodygensky (LAL - IN2P3 - CNRS)22/05/2012, 13:30Desktop grid (DG) is a well-known technology that aggregates volunteer computing resources donated by individuals to dynamically construct a virtual cluster. Much effort has been devoted in recent years to extending and interconnecting desktop grids with other distributed computing resources, especially so-called “service grid” middleware such as “gLite”, “ARC” and “Unicore”. In the former...Go to contribution page
-
Mr Philippe Galvez (CALTECH)22/05/2012, 13:30Collaboration Tools, Videoconference, support for large scale scientific collaborations, HD videoGo to contribution page
-
Dr Tony Wildish (Princeton University)22/05/2012, 13:30PhEDEx is the data-movement solution for CMS at the LHC. Created in 2004, it is now one of the longest-lived components of the CMS dataflow/workflow world. As such, it has undergone significant evolution over time, and continues to evolve today, despite being a fully mature system. Originally a toolkit of agents and utilities dedicated to specific tasks, it is becoming a more open framework...Go to contribution page
-
Adrien Devresse (University of Nancy I (FR))22/05/2012, 13:30The Grid File Access Library (GFAL) is a library designed for universal and simple access to grid storage systems. Completely re-designed and re-written, version 2.0 of GFAL provides a complete abstraction of the complexity and heterogeneity of grid storage systems (DPM, LFC, dCache, StoRM, ARC, ...) and of the data management protocols (RFIO, gsidcap, LFN, dcap, SRM,...Go to contribution page
-
Mr Igor Sfiligoi (INFN LABORATORI NAZIONALI DI FRASCATI)22/05/2012, 13:30Multi-user pilot infrastructures provide significant advantages for the communities using them, but also create new security challenges. With Grid authorization and mapping happening with the pilot credential only, final user identity is not properly addressed in the classic Grid paradigm. In order to solve this problem, OSG and EGI have deployed glexec, a privileged executable on the worker...Go to contribution page
-
Federico Stagni (CERN)22/05/2012, 13:30Within the DIRAC framework in the LHCb collaboration, we deployed an autonomous policy system acting as a central status information point for grid elements. Experts working as grid administrators have a broad and very deep knowledge of the underlying system, which makes them very valuable. We have attempted to formalize this knowledge in an autonomous system able to aggregate information,...Go to contribution page
-
Dr Kilian Schwarz (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))22/05/2012, 13:30The future FAIR experiments CBM and PANDA have computing requirements that fall in a category that cannot currently be satisfied by a single computing centre. One needs a larger, distributed computing infrastructure to cope with the amount of data to be simulated and analysed. Since 2002, GSI operates a Tier2 center for ALICE@CERN. The central component of the GSI computing facility...Go to contribution page
-
Mr Laurence Field (CERN)22/05/2012, 13:30The primary goal of a Grid information system is to display the current composition and state of a Grid infrastructure. Its purpose is to provide the information required for workload and data management. As these models evolve, the information system requirements need to be revisited and revised. This paper first documents the results from a recent survey of LHC VOs on the information system...Go to contribution page
-
Bogdan Lobodzinski (DESY)22/05/2012, 13:30The H1 Collaboration at HERA is now in the era of high precision analyses based on the final and complete data sample. A natural consequence of this is the huge increase in requirement for simulated Monte Carlo (MC) events. As a response to this increase, a framework for large scale MC production using the LCG Grid Infrastructure was developed. After 3 years, the H1 MC Computing...Go to contribution page
-
Lukasz Kokoszkiewicz (CERN)22/05/2012, 13:30The hBrowse framework is a generic monitoring tool designed to meet the needs of various communities connected to grid computing. It is highly configurable and easy to adjust and implement according to a specific community's needs. It is an HTML/JavaScript client-side application utilizing the latest web technologies to provide a presentation layer for any hierarchical data structure. Each part...Go to contribution page
-
Olivier Raginel (Massachusetts Inst. of Technology (US))22/05/2012, 13:30The CMS experiment online cluster consists of 2300 computers and 170 switches or routers operating on a 24-hour basis. This huge infrastructure must be monitored in such a way that administrators are proactively warned of any failure or degradation in the system, in order to avoid or minimize downtime which can lead to loss of data taking. The number of metrics monitored per host...Go to contribution page
-
Miguel Coelho Dos Santos (CERN)22/05/2012, 13:30With many servers and server parts, the environment of warehouse-sized data centers is increasingly complex. Server life-cycle management and hardware failures are responsible for frequent changes that need to be managed. To manage these changes better, a project codenamed "hardware hound", focusing on hardware failure trending and hardware inventory, has been started at CERN. By creating and...Go to contribution page
-
Dr Gabriele Garzoglio (FERMI NATIONAL ACCELERATOR LABORATORY)22/05/2012, 13:30By the end of 2011, a number of US Department of Energy (DOE) National Laboratories will have access to a 100 Gb/s wide-area network backbone. The ESnet Advanced Networking Initiative (ANI) project is intended to develop a prototype network, based on emerging 100 Gb/s ethernet technology. The ANI network will support DOE’s science research programs. A 100 Gb/s network testbed is a key...Go to contribution page
-
Mr Miguel Villaplana Perez (Universidad de Valencia (ES))22/05/2012, 13:30The ATLAS Tier3 at IFIC-Valencia is attached to a Tier2 that has 50% of the Spanish Federated Tier2 resources. In its design, the Tier3 includes a GRID-aware part that shares some of the features of Valencia's Tier2 such as using Lustre as a file system. ATLAS users, 70% of IFIC's users, also have the possibility of analysing data with a PROOF farm and storing them locally. In this...Go to contribution page
-
Federica Legger (Ludwig-Maximilians-Univ. Muenchen)22/05/2012, 13:30With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more to come in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes more than 80 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation,...Go to contribution page
-
Mr Andrea Chierici (INFN-CNAF)22/05/2012, 13:30This work shows the optimizations we have been investigating and implementing at the KVM virtualization layer in the INFN Tier-1 at CNAF, based on more than a year of experience in running thousands of virtual machines in a production environment used by several international collaborations. These optimizations increase the adaptability of virtualization solutions to demanding...Go to contribution page
-
Mr Pier Paolo Ricci (INFN CNAF)22/05/2012, 13:30The INFN Tier1 at CNAF is the first level Italian High Energy Physics computing center that shares resources to the scientific community using the grid infrastructure. The Tier1 is composed of a very complex infrastructure divided into different parts: the hardware layer, the storage services, the computing resources (i.e. worker nodes adopted for analysis and other activities) and...Go to contribution page
-
Andrew Mcnab (University of Manchester)22/05/2012, 13:30We describe our experience of operating a large Tier-2 site since 2005 and how we have developed an integrated management system using third-party, open source components. This system tracks individual assets and records their attributes such as MAC and IP addresses; derives DNS and DHCP configurations from this database; creates each host's installation and re-configuration scripts; monitors...Go to contribution page
-
Dr Ana Y. Rodríguez-Marrero (Instituto de Física de Cantabria (UC-CSIC))22/05/2012, 13:30High Energy Physics (HEP) analyses are becoming more complex and demanding due to the large amount of data collected by the current experiments. The Parallel ROOT Facility (PROOF) provides researchers with an interactive tool to speed up the analysis of huge volumes of data by exploiting parallel processing on both multicore machines and computing clusters. The typical PROOF deployment...Go to contribution page
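The split-process-merge pattern PROOF exploits can be sketched with the standard library. This is not ROOT/PROOF itself (PROOF distributes event ranges to dedicated worker processes on a cluster); the sketch only shows the same structure: partition the data, process partitions in parallel workers, merge the partial results.

```python
from multiprocessing import Pool

def analyse(chunk):
    # Stand-in for a per-event computation (hypothetical payload).
    return sum(x * x for x in chunk)

data = list(range(1000))
# Partition the dataset, as PROOF partitions event ranges.
chunks = [data[i:i + 100] for i in range(0, len(data), 100)]

with Pool(4) as pool:
    partial = pool.map(analyse, chunks)  # workers run in parallel

total = sum(partial)  # merge step, analogous to PROOF's merger
```

On a multicore machine the ten chunks are processed concurrently by the four worker processes; the merge is a cheap reduction of the partial sums.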
-
Maxim Potekhin (Brookhaven National Laboratory (US))22/05/2012, 13:30The PanDA Workload Management System is the basis for distributed production and analysis for the ATLAS experiment at the LHC. In this role, it relies on sophisticated dynamic data movement facilities developed in ATLAS. In certain scenarios, such as small research teams in ATLAS Tier-3 sites and non-ATLAS Virtual Organizations supported by the Open Science Grid consortium (OSG), the overhead...Go to contribution page
-
Albert Puig Navarro (University of Barcelona (ES))22/05/2012, 13:30The gUSE (Grid User Support Environment) framework allows users to create, store and distribute application workflows. This workflow architecture includes a wide variety of payload execution operations, such as loops, conditional execution of jobs and combination of output. These complex multi-job workflows can easily be created and modified by application developers through the WS-PGRADE portal....Go to contribution page
-
Gabriele Garzoglio (Fermi National Accelerator Laboratory)22/05/2012, 13:30In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies...Go to contribution page
-
Tomas Kouba (Acad. of Sciences of the Czech Rep. (CZ))22/05/2012, 13:30The Computing Centre of the Institute of Physics in Prague provides computing and storage resources for various HEP experiments (D0, ATLAS, ALICE, Auger) and currently operates more than 300 worker nodes with more than 2500 cores and provides more than 2 PB of disk space. Our site is limited to a single class C block of IPv4 addresses, and hence we had to move most of our worker nodes behind the NAT....Go to contribution page
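The arithmetic behind that constraint is simple to verify: a class C block is a /24 network, which cannot address the 300+ worker nodes mentioned in the abstract. A quick check with the standard `ipaddress` module (the prefix below is an illustrative documentation range, not the site's actual allocation):

```python
import ipaddress

# A /24 ("class C") network has 256 addresses, of which 254 are
# usable for hosts (network and broadcast addresses excluded).
block = ipaddress.ip_network("192.0.2.0/24")
usable = block.num_addresses - 2

# 254 usable addresses cannot cover 300+ worker nodes,
# hence the move to NAT (and the motivation for IPv6).
print(usable)
```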
-
Stephen Gowdy (CERN)22/05/2012, 13:30This work focuses on the creation and validation tests of a replica and transfer system for computational Grids, inspired by the needs of High Energy Physics (HEP). Due to the high volume of data created by the HEP experiments, an efficient file and dataset replica system may play an important role in the computing model. Data replica systems allow the creation of copies,...Go to contribution page
-
Michael John Kenyon (CERN)22/05/2012, 13:30Ganga is an easy-to-use frontend for the definition and management of analysis jobs, providing a uniform interface across multiple distributed computing systems. It is the main end-user distributed analysis tool for the ATLAS and LHCb experiments and provides the foundation layer for the HammerCloud system, used by the LHC experiments for validation and stress testing of their numerous...Go to contribution page
-
Waseem Daher (Oracle)22/05/2012, 13:30Today, every OS in the world requires regular reboots in order to stay up to date and secure. Since reboots cause downtime and disruption, sysadmins are forced to choose between security and convenience. Until Ksplice. Ksplice is a new technology that can patch a kernel while the system is running, with no disruption whatsoever. We use this technology to provide Ksplice Uptrack, a service that...Go to contribution page
-
Federico Stagni (CERN)22/05/2012, 13:30We present LHCbDIRAC, an extension of the DIRAC community Grid solution to handle the LHCb specificities. The DIRAC software has been developed for many years within LHCb only. Nowadays it is a generic software, used by many scientific communities worldwide. Each community wanting to take advantage of DIRAC has to develop an extension, containing all the necessary code for handling their...Go to contribution page
-
Artem Harutyunyan (CERN), Dag Larsen (University of Bergen (NO))22/05/2012, 13:30Long-term preservation of scientific data represents a challenge to all experiments. Even after an experiment has reached its end of life, it may be necessary to reprocess the data. There are two aspects of long-term data preservation: "data" and "software". While data can be preserved by migration, it is more complicated for the software. Preserving source code and binaries is not enough; the...Go to contribution page
-
Dr Ulrich Schwickerath (CERN)22/05/2012, 13:30In 2008 CERN launched a project aiming at virtualising the batch farm. It strictly distinguishes between infrastructure and guests, and is thus able to serve, along with its initial batch farm target, as an IaaS infrastructure, which can be exposed to users. The system was put into production at small scale at Christmas 2010, and has since grown to almost 500 virtual machine slots in spring...Go to contribution page
-
Dr Stefan Roiser (CERN)22/05/2012, 13:30The increase of luminosity in the LHC during its second year of operation (2011) was achieved by delivering more protons per bunch and increasing the number of bunches. This change of running conditions required some changes in the LHCb Computing Model. The consequences of the higher pileup are a bigger event size and processing time but also the possibility for LHCb to propose and get...Go to contribution page
-
Dr Daniele Bonacorsi (Universita e INFN (IT))22/05/2012, 13:30The LHCONE project aims to provide effective entry points into a network infrastructure that is intended to be private to the LHC Tiers. This infrastructure is not intended to replace the LHCOPN, which connects the highest tiers, but rather to complement it, addressing the connection needs of the LHC Tier-2 and Tier-3 sites which have become more important in the new less-hierarchical...Go to contribution page
-
Dr Xavier Espinal Curull (Universitat Autònoma de Barcelona (ES))22/05/2012, 13:30Installation and post-installation mechanisms are critical points for computing centres to streamline production services. Managing hundreds of nodes is a challenge for any computing centre and there are many tools able to cope with this problem. The desired features include the ability to do incremental configuration (no need to bootstrap the service to make it manageable by the tool),...Go to contribution page
-
Ioannis Charalampidis (Aristotle Univ. of Thessaloniki (GR))22/05/2012, 13:30The creation and maintenance of a Virtual Machine (VM) is a complex process. To build the VM image, thousands of software packages have to be collected, disk images suitable for different hypervisors have to be built, integrity tests must be performed, and eventually the resulting images have to become available for download. In the meanwhile, software updates for the older versions must be...Go to contribution page
-
Andrzej Nowak (CERN openlab)22/05/2012, 13:30The continued progression of Moore’s law has led to many-core platforms becoming easily accessible commodity equipment. New opportunities that arose from this change have also brought new challenges: harnessing the raw potential of computation of such a platform is not always a straightforward task. This paper describes practical experience coming out of the work with many-core systems at CERN...Go to contribution page
-
Prof. Roger Jones (Lancaster University (GB))22/05/2012, 13:30MARDI-Gross builds on previous work with the LIGO collaboration, using the ATLAS experiment as a use case to develop a tool-kit on data management for people making proposals for large High Energy Physics experiments, as well as experiments such as LIGO and LOFAR, and also for those assessing such proposals. The toolkit will also be of interest to those in active data management for new and...Go to contribution page
-
Dr Santiago Gonzalez De La Hoz (IFIC-Valencia)22/05/2012, 13:30The ATLAS computing and data models have moved/are moving away from the strict MONARC model (hierarchy) to a mesh model. Evolution of computing models also requires evolution of network infrastructure to enable any Tier2 and Tier3 to easily connect to any Tier1 or Tier2. This requires some changes to the data model: a) Any site can replicate data from any other site. b) Dynamic...Go to contribution page
-
David Cameron (University of Oslo (NO))22/05/2012, 13:30Monitoring of Grid services is essential to provide a smooth experience for users and provide fast and easy to understand diagnostics for administrators running the services. GangliARC makes use of the widely-used Ganglia monitoring tool to present web-based graphical metrics of the ARC computing element. These include statistics of running and finished jobs, data transfer metrics, as well as...Go to contribution page
-
Ilija Vukotic (Universite de Paris-Sud 11 (FR))22/05/2012, 13:30Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a...Go to contribution page
-
Jorge Amando Molina-Perez (Univ. of California San Diego (US))22/05/2012, 13:30The CMS offline computing system is composed of more than 50 sites and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS; the former collects metrics from sensors...Go to contribution page
-
Ms Vanessa Hamar (CPPM-IN2P3-CNRS)22/05/2012, 13:30Parallel job execution in the grid environment using MPI technology presents a number of challenges for the sites providing this support. Multiple flavors of the MPI libraries, shared working directories required by certain applications, special settings for the batch systems make the MPI support difficult for the site managers. On the other hand the workload management systems with pilot jobs...Go to contribution page
-
Mr Fabio Hernandez (IN2P3/CNRS Computing Centre & IHEP Computing Centre)22/05/2012, 13:30By aggregating the storage capacity of hundreds of sites around the world, distributed data-processing platforms such as the LHC computing grid offer solutions for transporting, storing and processing massive amounts of experimental data, addressing the requirements of virtual organizations as a whole. However, from our perspective, individual workflows require a higher level of flexibility,...Go to contribution page
-
Ivan Fedorko (CERN)22/05/2012, 13:30In the last few years, new requirements have been received for visualization of monitoring data: advanced graphics, flexibility in configuration and decoupling of the presentation layer from the monitoring repository. Lemonweb is the data visualization component of the LHC Era Monitoring (Lemon) system. Lemonweb consists of two sub-components: a data collector and a web visualization...Go to contribution page
-
Mr Massimo Sgaravatto (Universita e INFN (IT))22/05/2012, 13:30The EU-funded project EMI, now in its second year, aims at providing unified, standardized, easy-to-install software for distributed computing infrastructures. CREAM is one of the middleware products in the EMI distribution: it implements a Grid job management service which allows the submission, management and monitoring of computational jobs to local resource management...Go to contribution page
-
Alessandro Di Girolamo (CERN), Dr Andrea Sciaba (CERN)22/05/2012, 13:30Since several years the LHC experiments rely on the WLCG Service Availability Monitoring framework (SAM) to run functional tests on their distributed computing systems. The SAM tests have become an essential tool to measure the reliability of the Grid infrastructure and to ensure reliable computing operations, both for the sites and the experiments. Recently the old SAM framework was replaced...Go to contribution page
-
Natalia Ratnikova (KIT - Karlsruhe Institute of Technology (DE))22/05/2012, 13:30The CMS experiment has to move Petabytes of data among dozens of computing centres with low latency in order to make efficient use of its resources. Transfer operations are well established to achieve the desired level of throughput, but operators lack a system to identify early on transfers that will need manual intervention to reach completion. File transfer latencies are sensitive to the...Go to contribution page
-
Julien Leduc22/05/2012, 13:30Newer generations of processors come with no increase in their clock frequency, and the same is true for memory chips. In order to achieve more performance, the core count is getting higher, and to feed all the cores on a chip with instructions and data, the number of memory channels must follow the same trend. Non Uniform Memory Access (NUMA) architecture allowed the CPU manufacturers to...Go to contribution page
-
Simon William Fayer (Imperial College Sci., Tech. & Med. (GB)), Stuart Wakefield (Imperial College Sci., Tech. & Med. (GB))22/05/2012, 13:30Reading and writing data onto a disk based high capacity storage system has long been a troublesome task. While disks handle sequential reads and writes well, when they are interleaved performance drops off rapidly due to the time required to move the disk's read-write head(s) to a different position. An obvious solution to this problem is to replace the disks with an alternative storage...Go to contribution page
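The access-pattern gap the abstract refers to is easy to reproduce in miniature: read the same file back once sequentially and once in shuffled 4 KiB chunks. On a spinning disk the shuffled pass forces read-write head seeks; on an SSD both patterns perform similarly. This sketch is illustrative (not from the contribution itself), and only demonstrates the two patterns rather than benchmarking them:

```python
import os
import random
import tempfile

CHUNK, N_CHUNKS = 4096, 1024  # 4 KiB chunks, 4 MiB test file

# Create a throwaway test file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(CHUNK * N_CHUNKS))
    path = f.name

def read_at(offsets):
    """Read one chunk at each offset; return total bytes read."""
    total = 0
    with open(path, "rb") as fh:
        for off in offsets:
            fh.seek(off)
            total += len(fh.read(CHUNK))
    return total

sequential = list(range(0, CHUNK * N_CHUNKS, CHUNK))
shuffled = sequential[:]
random.shuffle(shuffled)  # the interleaved pattern that hurts disks

seq_bytes = read_at(sequential)
rnd_bytes = read_at(shuffled)
os.unlink(path)
```

Timing the two `read_at` calls on a spinning disk (with the page cache dropped) would show the rapid drop-off described above; on flash storage the two times converge, which is the motivation for the alternative storage the abstract goes on to discuss.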
-
Dr Giuseppe Bagliesi (INFN Sezione di Pisa)22/05/2012, 13:30While the model for a Tier2 is well understood and implemented within the HEP community, a refined design for analysis-specific sites has not been agreed upon as clearly. We aim to describe the solutions adopted at INFN Pisa, the biggest Tier2 in the Italian HEP community. A standard Tier2 infrastructure is optimized for GRID CPU and storage access, while a more interactive-oriented use of...Go to contribution page
-
Andreas Gellrich (DESY)22/05/2012, 13:30DESY is one of the largest WLCG Tier-2 centres for ATLAS, CMS and LHCb world-wide and the home of a number of global VOs. At the DESY-HH Grid site more than 20 VOs are supported by one common Grid infrastructure to allow for the opportunistic usage of federated resources. The VOs share roughly 4800 job slots in 800 physical CPUs of 400 hosts operated by a TORQUE/MAUI batch system. On...Go to contribution page
-
Gerardo Ganis (CERN)22/05/2012, 13:30With the advent of the analysis phase of LHC data-processing, interest in PROOF technology has considerably increased. While setting up a simple PROOF cluster for basic usage is reasonably straightforward, exploiting the several new functionalities added in recent times may be complicated. PEAC, standing for PROOF Enabled Analysis Cluster, is a set of tools aiming to facilitate the setup...Go to contribution page
-
Sam Skipsey (University of Glasgow / GridPP)22/05/2012, 13:30While, historically, Grid Storage Elements have relied on semi-proprietary protocols for data transfer (GridFTP for site-to-site, and rfio/dcap/other for local transfers), the rest of the world has not stood still in providing its own solutions to data access. dCache, DPM and StoRM all now support access via the widely implemented HTTP/WebDAV standard, and dCache and DPM both support...Go to contribution page
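The practical consequence of standards-based access is that any generic HTTP client can read a file from a storage element, with no grid-specific library needed. A minimal sketch, with a local stdlib HTTP server standing in for a DPM/dCache/StoRM endpoint (the file name and content are invented for illustration):

```python
import http.server
import os
import tempfile
import threading
import urllib.request

# Stage a small file in a throwaway directory served over HTTP.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "event.dat"), "wb") as f:
    f.write(b"0123456789")

def handler(*args, **kwargs):
    return http.server.SimpleHTTPRequestHandler(
        *args, directory=tmpdir, **kwargs)

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Any plain HTTP client can now fetch the file -- no rfio/dcap needed.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/event.dat") as resp:
    payload = resp.read()
server.shutdown()
```

WebDAV extends plain HTTP with directory listing and write operations, which is what makes it viable as a full storage-element access protocol rather than read-only download.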
-
José Flix22/05/2012, 13:30CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. The CMS experiment relies on the File Transfer Service (FTS) for data distribution, a low-level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and...Go to contribution page
-
Stephen Gowdy (CERN)22/05/2012, 13:30The CERN Virtual Machine (CernVM) Software Appliance is a project developed at CERN with the goal of allowing the execution of the experiment's software on different operating systems in an easy way for the users. To achieve this it makes use of Virtual Machine images consisting of a JEOS (Just Enough Operating System) Linux image, bundled with CVMFS, a distributed file system for software....Go to contribution page
-
Dr Dirk Hoffmann (CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France)22/05/2012, 13:30On behalf of the PLUME Technical Committee (http://projet-plume.org). PLUME - FEATHER is a non-profit project created to Promote economicaL, Useful and Maintained softwarE For the Higher Education And THE Research communities. The site references software, mainly Free/Libre Open Source Software (FLOSS) from French universities and national research organisations,...Go to contribution page
-
Vincent Garonne (CERN)22/05/2012, 13:30This paper describes a user monitoring framework for very large data management systems that maintain high numbers of data movement transactions. The proposed framework prescribes a method for generating meaningful information from collected tracing data that allows the data management system to be queried on demand for specific user usage patterns with respect to source and destination...Go to contribution page
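The aggregation step such a framework performs can be sketched with the standard library. The record shape below is hypothetical (the abstract does not specify the trace schema); the point is that usage patterns per source/destination pair reduce to simple counting over the collected tracing data:

```python
from collections import Counter

# Hypothetical tracing records: one entry per completed transfer.
traces = [
    {"user": "alice", "src": "CERN", "dst": "BNL"},
    {"user": "alice", "src": "CERN", "dst": "BNL"},
    {"user": "bob",   "src": "BNL",  "dst": "CERN"},
]

# Usage pattern by (source, destination) route.
by_route = Counter((t["src"], t["dst"]) for t in traces)
top_route, count = by_route.most_common(1)[0]
```

A production system would run such reductions over millions of records in a database or map-reduce layer, but the on-demand query the abstract describes is, structurally, this kind of group-and-count.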
-
Kati Lassila-Perini (Helsinki Institute of Physics (FI))22/05/2012, 13:30The data collected by the LHC experiments are unique and present an opportunity and a challenge for long-term preservation and re-use. The CMS experiment is defining a policy for the preservation of and access to its data and is starting the implementation of the policy. This note describes the driving principles of the policy and summarises the actions and activities which are planned for...Go to contribution page
-
Mine Altunay (Fermi National Accelerator Laboratory)22/05/2012, 13:30Identity management infrastructure has been a key work area for the Open Science Grid (OSG) security team for the past year. The progress of web-based authentication protocols such as OpenID and SAML, and of scientific federations such as InCommon, prompted OSG to evaluate its current identity management infrastructure and propose ways to incorporate new protocols and methods. For the couple...Go to contribution page
-
Marko Petek (Universidade do Estado do Rio de Janeiro (BR))22/05/2012, 13:30This work describes the ongoing efforts to integrate the CMS Computing Model with LHC@home, a volunteer computing project under development at CERN, thus allowing CMS analysis jobs and Monte Carlo production activities to be executed on this paradigm, which has a growing user base. The LHC@home project allows the use of the CernVM (a virtual machine technology...Go to contribution page
-
Alexey SEDOV (Universitat Autònoma de Barcelona (ES))22/05/2012, 13:30We present the prototype deployment of a private cloud at PIC and the tests performed in the context of providing a computing service for ATLAS. The prototype is based on the OpenNebula open source cloud computing solution. The possibility of using CernVM virtual machines as the standard for ATLAS cloud computing is evaluated by deploying a Panda pilot agent as part of the VM...Go to contribution page
-
Julia Andreeva (CERN)22/05/2012, 13:30The WLCG Transfer Dashboard is a monitoring system which aims to provide a global view of the WLCG data transfers and to reduce redundancy of monitoring tasks performed by the LHC experiments. The system is designed to work transparently across LHC experiments and across various technologies used for data transfer. Currently every LHC experiment monitors data transfers via experiment-specific...Go to contribution page
-
Christopher Hollowell (Brookhaven National Laboratory)22/05/2012, 13:30Ksplice/Oracle Uptrack is a software tool and update subscription service which allows system administrators to apply security and bug fix patches to the Linux kernel running on servers/workstations without rebooting them. The RHIC/ATLAS Computing Facility at Brookhaven National Laboratory (BNL) has deployed Uptrack on nearly 2000 hosts running Scientific Linux and Red Hat Enterprise Linux. ...Go to contribution page
-
Julia Yarba (Fermi National Accelerator Lab. (US))22/05/2012, 13:30In the past year several improvements in Geant4 hadronic physics code have been made, both for HEP and nuclear physics applications. We discuss the implications of these changes for physics simulation performance and user code. In this context several of the most-used codes will be covered briefly. These include the Fritiof (FTF) parton string model which has been extended to...Go to contribution page
-
Paul Nilsson (University of Texas at Arlington (US))22/05/2012, 13:30The Production and Distributed Analysis system (PanDA) in the ATLAS experiment uses pilots to execute submitted jobs on the worker nodes. The pilots are designed to deal with different runtime conditions and failure scenarios, and support many storage systems. This talk will give a brief overview of the PanDA pilot system and will present major features and recent improvements including...Go to contribution page
-
Gavin Mccance (CERN)22/05/2012, 13:30The CERN Computer Centre is reviewing strategies for optimizing the use of the existing infrastructure in the future. There have been significant developments in the area of computer centre and configuration management tools over the last few years. CERN is examining how these modern, widely-used tools can improve the way in which we manage the centre, with a view to reducing the overall...Go to contribution page
-
Alexander Moibenko (Fermilab)22/05/2012, 13:30By 2009 the Fermilab Mass Storage System had encountered several challenges: 1. The required amount of data stored and accessed in both tiers of the system (dCache and Enstore) had significantly increased. 2. The number of clients accessing the Mass Storage System had also increased from tens to hundreds of nodes, and from hundreds to thousands of parallel requests. To address these...Go to contribution page
-
Arne Wiebalck (CERN)22/05/2012, 13:30Serving more than 3 billion accesses per day, the CERN AFS cell is one of the most active installations in the world. Limited by overall cost, the ever increasing demand for more space and higher I/O rates drive an architectural change from small high-end disks organised in fibre-channel fabrics towards external SAS based storage units with large commodity drives. The presentation...Go to contribution page
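The headline access figure translates into a substantial sustained request rate; a quick back-of-the-envelope check using the numbers quoted in the abstract:

```python
ACCESSES_PER_DAY = 3_000_000_000  # "more than 3 billion accesses per day"
SECONDS_PER_DAY = 24 * 3600

# Mean request rate the AFS cell must sustain, before accounting for peaks.
mean_rate = ACCESSES_PER_DAY / SECONDS_PER_DAY
print(round(mean_rate))  # 34722
```

Roughly 35 thousand accesses per second on average, which makes clear why per-drive I/O rates dominate the architecture discussion.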
-
Valerie Hendrix (Lawrence Berkeley National Lab. (US))22/05/2012, 13:30Deployment, maintenance and recovery of a scientific cluster, which has complex, specialized services, can be a time consuming task requiring the assistance of Linux system administrators, network engineers as well as domain experts. Universities and small institutions that have a part-time FTE with limited knowledge of the administration of such clusters can be strained by such maintenance...Go to contribution page
-
Dr Dimitri Bourilkov (University of Florida (US))22/05/2012, 13:30This paper reports the design and implementation of a secure, wide area network, distributed filesystem by the ExTENCI project, based on the Lustre filesystem. The system is used for remote access to analysis data from the CMS experiment at the Large Hadron Collider, and from the Lattice Quantum ChromoDynamics (LQCD) project. Security is provided by Kerberos authentication and authorization...Go to contribution page
-
Mr Pedro Manuel Rodrigues De Sousa Andrade (CERN)22/05/2012, 13:30The Worldwide LHC Computing Grid (WLCG) infrastructure continuously operates thousands of grid services scattered around hundreds of sites. Participating sites are organized in regions and support several virtual organizations, thus creating a very complex and heterogeneous environment. The Service Availability Monitoring (SAM) framework is responsible for the monitoring of this...Go to contribution page
-
Alessandro Di Girolamo (CERN), Fernando Harald Barreiro Megino (CERN IT ES)22/05/2012, 13:30The LHC experiments' computing infrastructure is hosted in a distributed way across different computing centers in the Worldwide LHC Computing Grid and needs to run with high reliability. It is therefore crucial to offer a unified view to shifters, who generally are not experts in the services, and give them the ability to follow the status of resources and the health of critical systems in...Go to contribution page
-
Alessandro De Salvo (Universita e INFN, Roma I (IT))22/05/2012, 13:30The ATLAS Collaboration is managing one of the largest collections of software among the High Energy Physics Experiments. Traditionally this software has been distributed via rpm or pacman packages, and has been installed in every site and user's machine, using more space than needed since the releases could not always share common binaries. As soon as the software has grown in size and...Go to contribution page
-
Dr Giacinto Donvito (INFN-Bari)22/05/2012, 13:30Nowadays the storage systems are evolving not only in size but also in terms of used technologies. SSD disks are currently introduced in storage facilities for HEP experiments and their performance is tested in comparison with standard magnetic disks. The tests are performed by running a real CMS data analysis for a typical use case and exploiting the features provided by PROOF-Lite, that...Go to contribution page
-
Sebastien Ponce (CERN)22/05/2012, 13:30This is an update on CASTOR (CERN Advanced Storage) describing the recent evolution and related experience in production during the latest high-intensity LHC runs. In order to handle the increasing data rates (10GB/s average for 2011), several major improvements have been introduced. We describe in particular the new scheduling system that has replaced the original CASTOR one. It removed the...Go to contribution page
-
Dr Andrei Tsaregorodtsev (Universite d'Aix - Marseille II (FR))22/05/2012, 13:30The DIRAC Project was initiated to provide a data processing system for the LHCb Experiment at CERN. It provides all the necessary functionality and performance to satisfy the current and projected future requirements of the LHCb Computing Model. A considerable restructuring of the DIRAC software was undertaken in order to turn it into a general purpose framework for building distributed...Go to contribution page
-
Dr Tomas Linden (Helsinki Institute of Physics (FI))22/05/2012, 13:30Tier-2 computing sites in the Worldwide LHC Computing Grid (WLCG) host CPU resources (Compute Element, CE) and storage resources (Storage Element, SE). The vast amount of data from the Large Hadron Collider (LHC) experiments that needs to be processed requires good and efficient use of the available resources. Achieving a good CPU efficiency for the end users' analysis jobs requires...Go to contribution page
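The CPU efficiency referred to above is simply the ratio of CPU time to wall-clock time; a minimal sketch of the metric:

```python
def cpu_efficiency(cpu_seconds: float, wall_seconds: float) -> float:
    """CPU efficiency of a job: fraction of wall-clock time spent on CPU.
    Values well below 1.0 typically indicate the job is waiting on I/O,
    e.g. slow access to the Storage Element."""
    return cpu_seconds / wall_seconds

# A job that used 5.4 CPU-hours over 6 wall-clock hours is 90% efficient.
print(cpu_efficiency(5.4 * 3600, 6 * 3600))  # 0.9
```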
-
Dr Gabriele Garzoglio (FERMI NATIONAL ACCELERATOR LABORATORY)22/05/2012, 13:30The Open Science Grid (OSG) supports a diverse community of new and existing users to adopt and make effective use of the Distributed High Throughput Computing (DHTC) model. The LHC user community has deep local support within the experiments. For other smaller communities and individual users the OSG provides a suite of consulting and technical services through the User Support organization....Go to contribution page
-
Fabrizio Furano (CERN)22/05/2012, 13:30Born in the context of EMI (European Middleware Initiative), the SYNCAT project has as its main purpose the incremental reduction of divergence among the contents of remote file catalogues, such as the LFC, the Grid Storage Elements and the experiments' private databases. It aims at giving these remote systems ways to interact transparently in order to keep their...Go to contribution page
-
Stuart Purdie (University of Glasgow)22/05/2012, 13:30Failure is endemic in the Grid world - as with any large, distributed computer system, at some point things will go wrong. Whether it is down to a problem with hardware, network or software, the sheer size of a production Grid requires operation under the assumption that some of the jobs will fail. Some of those failures are unavoidable (e.g. network loss during data staging), some are preventable but...Go to contribution page
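The distinction drawn above between unavoidable and preventable failures naturally leads to retry policies; a minimal sketch, with hypothetical error categories (the abstract does not define a taxonomy):

```python
# Hypothetical failure categories for illustration only.
TRANSIENT = {"network_loss", "stage_timeout"}   # retrying is worthwhile
PERMANENT = {"bad_jdl", "missing_input"}        # retrying cannot help

def should_retry(error: str, attempts: int, max_attempts: int = 3) -> bool:
    """Retry only transient failures, and only a bounded number of times."""
    return error in TRANSIENT and attempts < max_attempts

print(should_retry("network_loss", 1))  # True
print(should_retry("bad_jdl", 1))       # False
```

Real pilot and WMS systems apply far richer heuristics, but the transient/permanent split is the core of any automated recovery strategy.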
-
Dr Adam Lyon (Fermilab)22/05/2012, 13:30Fermilab Intensity Frontier experiments like Minerva, NOvA, g-2 and Mu2e currently operate without an organized data handling system, relying instead on completely manual management of files on large central disk arrays at Fermilab. This model severely limits the computing resources that the experiments can leverage to those tied to the Fermilab site, prevents the use of coherent staging and...Go to contribution page
-
Sam Skipsey (University of Glasgow / GridPP)22/05/2012, 13:30The caching, HTTP-mediated filesystem CVMFS, while first developed for use with the CERN Virtual Machine project, has quickly become a significant part of several VOs' software distribution policies, with ATLAS being particularly interested. The benefits of CVMFS do not extend only to large VOs, however; small virtual organisations can find software distribution problematic, as they...Go to contribution page
-
German Cancio Melia (CERN)22/05/2012, 13:30With currently around 55PB of data stored on over 49000 cartridges, and around 2PB of fresh data arriving every month, CERN’s large tape infrastructure is continuing its growth. In this contribution we will detail the progress achieved and the ongoing steps in our strategy of turning tape storage from an HSM environment into a sustainable long-term archiving solution. In particular, we...Go to contribution page
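A back-of-the-envelope view of those figures (the 4 TB per-cartridge capacity below is an assumed round number for a newer-generation cartridge, not a figure from the abstract):

```python
total_pb = 55        # "around 55PB of data stored"
cartridges = 49_000  # "over 49000 cartridges"
monthly_pb = 2       # "around 2PB of fresh data coming every month"

# Average data held per cartridge across the whole installed base.
avg_tb_per_cartridge = total_pb * 1000 / cartridges
print(round(avg_tb_per_cartridge, 2))  # 1.12

# At an assumed 4 TB per new cartridge, monthly growth alone needs:
print(monthly_pb * 1000 / 4)  # 500.0 cartridges per month
```

The low average per-cartridge occupancy relative to modern media capacity is exactly what motivates the repacking campaigns described in the next contribution.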
-
Steven Murray (CERN)22/05/2012, 13:30The CERN Advanced STORage manager (CASTOR) is used to archive to tape the physics data of past and present physics experiments. Data is migrated (repacked) from older, lower density tapes to newer, high-density tapes approximately every two years to follow the evolution of tape technologies and to keep the volume occupied by the tape cartridges relatively stable. Improving the performance of...Go to contribution page
-
Dr Silvio Pardi (INFN)22/05/2012, 13:30The SuperB asymmetric-energy e+e- collider and detector, to be built at the newly founded Nicola Cabibbo Lab, will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab-1 and a luminosity target of 10^36 cm-2 s-1. This luminosity translates into the...Go to contribution page
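The quoted numbers fix the scale of the running time: 1 ab^-1 corresponds to 10^42 cm^-2, so a rough check (assuming the canonical 10^7 s of effective running per year, an assumption not stated in the abstract) gives:

```python
target = 75 * 1e42  # 75 ab^-1 expressed in cm^-2 (1 ab^-1 = 1e42 cm^-2)
lumi = 1e36         # target luminosity in cm^-2 s^-1

seconds = target / lumi      # ≈ 7.5e7 s of beam time at design luminosity
years = seconds / 1e7        # canonical "accelerator year" of 1e7 live seconds
print(seconds, years)        # ≈ 7.5e7 s, i.e. ≈ 7.5 running years
```

This order-of-magnitude estimate is what drives the data volumes and computing requirements the contribution goes on to discuss.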
-
Mr Donato De Girolamo (INFN-CNAF)22/05/2012, 13:30The monitoring and alert system is fundamental for the management and the operation of the network in a large data center such as an LHC Tier-1. The network of the INFN Tier-1 at CNAF is a multi-vendor environment: for its management and monitoring several tools have been adopted and different sensors have been developed. In this paper, after an overview on the different aspects to be...Go to contribution page
-
Lorenzo RINALDI (INFN CNAF (IT))22/05/2012, 13:30The large amount of data produced by the ATLAS experiment needs new computing paradigms for data processing and analysis, involving many Computing Centres spread around the world. The computing workload is managed by regional federations, called Clouds. The Italian Cloud consists of a main (Tier-1) centre, located in Bologna, four secondary (Tier-2) centres, and a few smaller (Tier-3)...Go to contribution page
-
Vincent Garonne (CERN)22/05/2012, 13:30The DDM Tracer Service traces and monitors ATLAS file operations on the Worldwide LHC Computing Grid. The volume of traces has increased significantly since the service started in 2009: there are now about 5 million trace messages every day, with peaks of more than 250 Hz and peak rates continuing to climb, which poses a big challenge for the current service structure. Analysis...Go to contribution page
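The quoted volumes are easy to sanity-check, and the gap between the daily average and the quoted peaks is what stresses the service:

```python
messages_per_day = 5_000_000  # "about 5 million trace messages every day"
mean_hz = messages_per_day / 86_400

print(round(mean_hz, 1))  # 57.9  (mean message rate in Hz)
# The quoted >250 Hz peaks are therefore more than 4x the mean rate:
print(250 / mean_hz > 4)  # True
```

Any redesign must therefore be sized for the peaks, not the average.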
-
Fabrizio Furano (CERN)22/05/2012, 13:30ATLAS decided to move from a globally distributed file catalogue to a central instance at CERN. This talk describes the ATLAS LFC merge exercise from the analysis phase over the prototyping and stress testing to the final execution phase. We demonstrate that with careful preparation even major architectural changes could be implemented while minimizing the impact on the experiments...Go to contribution page
-
Mr Igor Sfiligoi (University of California San Diego)22/05/2012, 13:30For several years OSG has been operating a glideinWMS factory at UCSD for several scientific communities, including CMS analysis, HCC and GLOW. This setup worked fine, but it had become a single point of failure. OSG thus recently added another instance at Indiana University, serving the same user communities. Similarly, CMS has been operating a glidein factory dedicated to reprocessing...Go to contribution page
-
Mr Milosz Zdybal (Institute of Nuclear Physics)22/05/2012, 13:30Providing computing infrastructure to end users in an efficient and user-friendly way has always been a big challenge in the IT market. "Cloud computing" is an approach that addresses these issues, and it has recently been gaining more and more popularity. A well-designed cloud computing system gives elasticity in resource allocation and allows for efficient usage of computing infrastructure. The...Go to contribution page
-
Dr Stefano Dal Pra (INFN)22/05/2012, 13:30Keeping track of the layout of the computing resources in a big datacentre is a complex task. DOCET is a database-based web tool designed and implemented at INFN. It aims at providing a uniform interface to manage and retrieve information about one or more datacentres, such as available hardware, software and their status. Having a suitable application is however useless until...Go to contribution page
-
Andreas Haupt (Deutsches Elektronen-Synchrotron (DE))22/05/2012, 13:30DESY is one of the world's leading centres for research with particle accelerators, synchrotron light and astroparticles. DESY participates in the LHC as a Tier-2 centre, supports ongoing analyses of HERA data, is a leading partner for the ILC, and runs the National Analysis Facility (NAF) for LHC and ILC in the framework of the Helmholtz Alliance "Physics at the Terascale". For the research with...Go to contribution page
-
Dmitry Ozerov (Deutsches Elektronen-Synchrotron (DE)), Yves Kemp (Deutsches Elektronen-Synchrotron (DE))22/05/2012, 13:30Since mid-2010 the Scientific Computing department at DESY has been operating a storage and data access evaluation laboratory, the DESY Grid Lab, equipped with 256 CPU cores and about 80 TB of data distributed among 5 servers, interconnected via up to 10-Gigabit Ethernet links. The system has been dimensioned to be equivalent in size to a medium WLCG Tier-2 centre to provide commonly exploitable...Go to contribution page
-
Mr Kazuhiro Terao (MIT)22/05/2012, 13:30The Double Chooz reactor anti-neutrino experiment has developed an automated system for streaming data from the detector site to the different data-analysis nodes in Europe, Japan and the USA. The system both propagates the data and triggers its processing as it goes through low-level data analysis. All operations (propagation and processing) are tracked file-wise in real time using a DB (MySQL...Go to contribution page
-
Dr Scott Teige (Indiana University)22/05/2012, 13:30The Open Science Grid (OSG) Operations Team operates a distributed set of services and tools that enable the utilization of the OSG by several HEP projects. Without these services, users of the OSG would not be able to run jobs, locate resources, obtain information about the status of systems or generally use the OSG. For this reason these services must be highly available. This paper...Go to contribution page
-
Dr Santiago Gonzalez De La Hoz (Universidad de Valencia (ES))22/05/2012, 13:30Originally the ATLAS computing model assumed that the Tier2s of each of the 10 clouds keep on disk collectively at least one copy of all "active" AOD and DPD datasets. Evolution of ATLAS computing and data models requires changes in ATLAS Tier2s policy for the data replication, dynamic data caching and remote data access. Tier2 operations take place completely asynchronously with respect...Go to contribution page
-
Mr Stephan Zimmer (OKC/ Stockholm University, on behalf the Fermi-LAT Collaboration)22/05/2012, 13:30The Data Handling Pipeline ("Pipeline") has been developed for the Fermi Gamma-Ray Space Telescope (Fermi) Large Area Telescope (LAT) which launched in June 2008. Since then it has been in use to completely automate the production of data quality monitoring quantities, reconstruction and routine analysis of all data received from the satellite and to deliver science products to the...Go to contribution page
-
Jos Van Wezel (KIT - Karlsruhe Institute of Technology (DE))22/05/2012, 13:30Resources of the large computer centres used in physics computing today are optimised for the WLCG framework and reflect the typical data access footprint of reconstruction and analysis. A traditional Tier-1 centre like GridKa at KIT hosts thousands of hosts and many petabytes of disk and tape storage, used mostly by a single community. The required size as well as the intrinsic...Go to contribution page
-
Luca dell'Agnello (INFN)22/05/2012, 13:30INFN-CNAF is the central computing facility of INFN: it is the Italian Tier-1 for the experiments at the LHC, but also one of the main Italian computing facilities for several other experiments such as BABAR, CDF, SuperB, Virgo, Argo, AMS, Pamela, MAGIC and Auger. Currently there is an installed CPU capacity of 100,000 HS06, a net disk capacity of 9 PB and an equivalent amount of tape storage...Go to contribution page
-
Andrej Filipcic (Jozef Stefan Institute (SI))22/05/2012, 13:30The distributed NDGF Tier-1 and associated Nordugrid clusters are well integrated into the ATLAS computing model but follow a slightly different paradigm than other ATLAS resources. The current strategy does not divide the sites as in the commonly used hierarchical model, but rather treats them as a single storage endpoint and a pool of distributed computing nodes. The next generation ARC...Go to contribution page
-
Dr Tony Wildish (Princeton University (US))22/05/2012, 13:30PhEDEx is the data-transfer management solution written by CMS. It consists of agents running at each site, a website for presentation of information, and a web-based data-service for scripted access to information. The website allows users to monitor the progress of data-transfers, the status of site agents and links between sites, and the overall status and behaviour of everything about...Go to contribution page
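The web-based data-service mentioned above serves machine-readable transfer information; the sketch below parses a response in that spirit. The field names and structure here are illustrative assumptions, not the actual PhEDEx data-service schema, and no network access is involved:

```python
import json

# A minimal, made-up response in the style of a transfer-monitoring
# data-service (structure assumed for this sketch).
response = '''{"phedex": {"transfer": [
    {"from": "T1_US_FNAL", "to": "T2_US_UCSD", "done_bytes": 1099511627776}
]}}'''

data = json.loads(response)
for link in data["phedex"]["transfer"]:
    # 2**40 bytes = 1 TiB
    print(link["from"], "->", link["to"], link["done_bytes"] / 2**40, "TiB")
```

Scripted access of this kind is what lets sites build their own monitoring and alarms on top of the central service.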
-
Lionel Cons (CERN), Massimo Paladin (Universita degli Studi di Udine)22/05/2012, 13:30Messaging is seen as an attractive mechanism to simplify and extend several portions of the Grid middleware, from low level monitoring to experiments dashboards. The messaging service currently used by WLCG is operated by EGI and consists of four tightly coupled brokers running ActiveMQ and designed to host the Grid operational tools such as SAM. This service is successfully being used by...Go to contribution page
-
Daniele Andreotti (Universita e INFN (IT)), Gianni Dalla Torre22/05/2012, 13:30The WNoDeS software framework (http://web.infn.it/wnodes) uses virtualization technologies to provide access to a common pool of dynamically allocated computing resources. WNoDeS can process batch and interactive requests, in local, Grid and Cloud environments. A problem of resource allocation in Cloud environments is the time it takes to actually allocate the resource and make it...Go to contribution page
-
Dr Stuart Wakefield (Imperial College London)22/05/2012, 13:30We present the development and first experience of a new component (termed WorkQueue) in the CMS workload management system. This component provides a link between a global request system (Request Manager) and agents (WMAgents) which process requests at compute and storage resources (known as sites). These requests typically consist of creation or processing of a data sample (possibly...Go to contribution page
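A toy illustration of the request splitting that a component like WorkQueue performs, turning one large request into site-sized work units; the function and event-count granularity are hypothetical, not the actual WMCore API:

```python
def split_request(total_events: int, chunk: int):
    """Split a production request of total_events into contiguous
    (start, end) work units of at most `chunk` events each."""
    starts = range(0, total_events, chunk)
    return [(s, min(s + chunk, total_events)) for s in starts]

# A 2500-event request split into units of up to 1000 events.
print(split_request(2500, 1000))  # [(0, 1000), (1000, 2000), (2000, 2500)]
```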
-
Alessandra Forti (University of Manchester (GB))22/05/2012, 13:30In this paper we will describe primarily the experience of going through an EU procurement. We will describe what a PQQ (Pre-Qualification Questionnaire) is and some of the requirements for vendors, such as ITIL and PRINCE2 project management qualifications. We will describe how the technical part was written, including requirements from the main users and the university logistics requirements to...Go to contribution page
-
Georgiana Lavinia Darlea (Polytechnic University of Bucharest (RO))22/05/2012, 13:30In the ATLAS experiment the collection, processing, selection and conveyance of event data from the detector front-end electronics to mass storage is performed by the ATLAS online farm consisting of more than 3000 PCs with various characteristics. To assure the correct and optimal working conditions the whole online system must be constantly monitored. The monitoring system should be able to...Go to contribution page
-
José Flix22/05/2012, 13:30The CMS experiment has adopted a computing system where resources are distributed worldwide in more than 50 sites. The operation of the system requires a stable and reliable behavior of the underlying infrastructure. CMS has established procedures to extensively test all relevant aspects of a site and their capability to sustain the various CMS computing workflows at the required scale. The...Go to contribution page
-
Dr Peter Kreuzer (RWTH Aachen)22/05/2012, 13:30In the large LHC experiments the majority of computing resources are provided by the participating countries. These resource pledges account for more than three quarters of the total available computing. The experiments are asked to give indications of their requests three years in advance and to evolve these as the details and constraints become clearer. In this presentation we will discuss...Go to contribution page
-
Alessandra Forti (University of Manchester (GB))22/05/2012, 13:30In this paper we will present the efforts carried out in the UK to fix the WAN transfer problems highlighted by the ATLAS sonar tests. We will present the work done at site level, the monitoring tools used locally on the machines (ifstat, tcpdump, netstat...), between sites (iperf) and at FTS level. We will describe the effort to set up a mini-mesh to simplify the sonar tests setup...Go to contribution page
-
Georgiana Lavinia Darlea (Polytechnic University of Bucharest (RO))22/05/2012, 13:30The ATLAS Online farm is a non-homogeneous cluster of more than 3000 PCs which run the data acquisition, trigger and control of the ATLAS detector. The systems are configured and monitored by a combination of open-source tools, such as Quattor and Nagios, and tools developed in-house, such as ConfDB. We report on the ongoing introduction of new provisioning and configuration tools, Puppet...Go to contribution page
-
Anders Waananen (Niels Bohr Institute)22/05/2012, 13:30Modern HEP related calculations have traditionally been beyond the capabilities of donated desktop machines, particularly because of complex deployment of the needed software. The popularization of efficient virtual machine technology and in particular the CernVM appliance, that allows for only the needed subset of the ATLAS software environment to be dynamically downloaded, has made such...Go to contribution page
-
Hassen Riahi (Universita e INFN (IT))22/05/2012, 13:30Data storage and access are key to CPU-intensive and data-intensive high-performance Grid computing. Hadoop is an open-source data processing framework that includes a fault-tolerant and scalable distributed data processing model and execution environment, named MapReduce, and a distributed file system, named the Hadoop Distributed File System (HDFS). HDFS was deployed and tested within...Go to contribution page
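The MapReduce model referred to above can be illustrated in a few lines of plain Python; this is a toy word count showing the map / shuffle-reduce phases, not the Hadoop API itself:

```python
from collections import defaultdict
from itertools import chain

def map_phase(record: str):
    # map: emit a (key, 1) pair for every word in the input record
    return [(word, 1) for word in record.split()]

def reduce_phase(pairs):
    # shuffle + reduce: group pairs by key, then sum each group's values
    groups = defaultdict(int)
    for key, value in pairs:
        groups[key] += value
    return dict(groups)

records = ["grid data grid", "data analysis"]
counts = reduce_phase(chain.from_iterable(map_phase(r) for r in records))
print(counts["grid"], counts["data"])  # 2 2
```

Hadoop's value is running exactly this pattern fault-tolerantly across many nodes, with HDFS moving the computation to where the data blocks live.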
-
Dr Dimitri Bourilkov (University of Florida (US))22/05/2012, 13:30We describe the work on creating system images of Lustre virtual clients in the ExTENCI project, using several virtual technologies (KVM, XEN, VMware). These virtual machines can be built at several levels, from a basic Linux installation (we use Scientific Linux 5 as an example), adding a Lustre client with Kerberos authentication, and up to complete clients including local or distributed...Go to contribution page
-
Andrea Dotti (CERN)22/05/2012, 13:30In this paper we present the Geant4 validation and testing suite. The application is used to test any new Geant4 release. The simulation of a particularly demanding use-case (High Energy Physics calorimeters) is tested with different physics parameters. The suite is integrated with a job submission system that allows for the generation of high statistics data-sets on distributed resources....Go to contribution page
-
Andreas Gellrich (DESY)22/05/2012, 13:30Virtualization techniques have become a key topic in computing in recent years. In the Grid, discussions on the virtualization of worker nodes are most prominent. Currently, concepts for the provenance and sharing of images are under debate. The virtualization of Grid servers, though, is already a common and successful practice. At DESY, one of the largest WLCG Tier-2 centres world-wide and...Go to contribution page
-
William Strecker-Kellogg (Brookhaven National Lab)22/05/2012, 13:30In this presentation we will address the development of a prototype virtualized worker node cluster, using Scientific Linux 6.x as a base OS, KVM for virtualization, and the Condor batch software to manage virtual machines. The discussion provides details on our experiences with building, configuring, and deploying the various components from bare metal, including the base OS, the...Go to contribution page
-
Mikalai Kutouski (Joint Inst. for Nuclear Research (JINR))22/05/2012, 13:30The current ATLAS Tier3 infrastructure consists of a variety of sites of different sizes and with a mix of local resource management systems (LRMS) and mass storage system (MSS) implementations. The Tier3 monitoring suite, having been developed in order to satisfy the needs of Tier3 site administrators and to aggregate Tier3 monitoring information on the global VO level, needs to be validated...Go to contribution page
-
Alejandro Alvarez Ayllon (University of Cadiz), Ricardo Brito Da Rocha (CERN)22/05/2012, 13:30The Disk Pool Manager (DPM) and LCG File Catalog (LFC) are two grid data management components currently used in production at more than 240 sites. Together with a set of grid client tools they give the users a unified view of their data, hiding most details concerning data location and access. Recently we've put a lot of effort in developing a reliable and high performance HTTP/WebDAV...Go to contribution page
-
Ivano Giuseppe Talamo (Universita e INFN, Roma I (IT))22/05/2012, 13:30The LCG (Worldwide LHC Computing Grid) is a grid-based hierarchical distributed computing facility, composed of more than 140 computing centres organized in 4 tiers by size and services offered. Every site, although independent in many technical choices, has to provide services with a well-defined set of interfaces. For this reason, different LCG sites frequently need to manage very...Go to contribution page
-
Danilo Dongiovanni (INFN-CNAF, IGI)22/05/2012, 13:30In production Grid infrastructures deploying EMI (European Middleware Initiative) middleware release, the Workload Management System (WMS) is the service responsible for the distribution of user tasks to the remote computing resources. Monitoring the reliability of this service, the job lifecycle and the workflow pattern generated by different user communities is an important and challenging...Go to contribution page
-
Marco Cecchi (Istituto Nazionale Fisica Nucleare (IT))22/05/2012, 13:30The EU-funded project EMI, now at its second year, aims at providing a unified, high quality middleware distribution for e-Science communities. Several aspects about workload management over diverse distributed computing environments are being challenged by the EMI roadmap: enabling seamless access to both HTC and HPC computing services, implementing a commonly agreed framework for the...Go to contribution page
-
Lukasz Janyst (CERN)22/05/2012, 13:30The XRootD server framework is becoming increasingly popular in the HEP community and beyond due to its simplicity, scalability and capability to construct distributed storage federations. With the growing adoption and new use cases emerging, it has become clear that the XRootD client code has reached a stage, where a significant refactoring of the code base is necessary to remove, by now,...Go to contribution page
-
Matevz Tadel (Univ. of California San Diego (US))22/05/2012, 13:30During spring and summer 2011 CMS deployed Xrootd front-end servers on all US T1 and T2 sites. This allows for remote access to all experiment data and is used for user-analysis, visualization, running of jobs at T2s and T3s when data is not available at local sites, and as a fail-over mechanism for data-access in CMSSW jobs. Monitoring of Xrootd infrastructure is implemented on three...Go to contribution page
-
Erik Mattias Wadenstein (Unknown)22/05/2012, 13:55Computer Facilities, Production Grids and Networking (track 4)ParallelDistributed storage systems are critical to the operation of the WLCG. These systems are not limited to fulfilling the long term storage requirements. They also serve data for computational analysis and other computational jobs. Distributed storage systems provide the ability to aggregate the storage and IO capacity of disks and tapes, but at the end of the day IO rate is still bound by the...Go to contribution page
-
Marek Domaracky (CERN)22/05/2012, 13:55In recent times, we have witnessed an explosion of video initiatives in the industry worldwide. Several advancements in video technology are currently improving the way we interact and collaborate. These advancements are driving new trends and overall experiences: any device in any network can be used to collaborate, in most cases with an overall high quality. To cope with this technology...Go to contribution page
-
Dr Maria Grazia Pia (Universita e INFN (IT))22/05/2012, 13:55Quantitative results on Geant4 physics validation and computational performance are reported: they cover a wide spectrum of electromagnetic and hadronic processes, and are the product of a systematic, multi-disciplinary effort of collaborating physicists, nuclear engineers and statisticians. They involve comparisons with established experimental references in the literature and ad hoc...Go to contribution page
-
Victor Manuel Fernandez Albor (Universidade de Santiago de Compostela (ES)), Victor Mendez Munoz (Port d'Informació Científica (PIC))22/05/2012, 13:55Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelThe increasing availability of cloud resources is leading the scientific community to consider a choice between Grid and Cloud. The DIRAC framework for distributed computing is an easy way to obtain resources from both systems. In this paper we explain the integration of DIRAC with two open-source cloud managers, OpenNebula and CloudStack. They are computing tools to manage the...Go to contribution page
-
Andrew Hanushevsky (STANFORD LINEAR ACCELERATOR CENTER), Wei Yang (SLAC National Accelerator Laboratory (US))22/05/2012, 13:55Software Engineering, Data Stores and Databases (track 5)PosterFor more than a year, the ATLAS Western Tier 2 (WT2) at SLAC National Accelerator Laboratory has been successfully operating a two-tiered storage system based on Xrootd's flexible cross-cluster data placement framework, the File Residency Manager. The architecture allows WT2 to provide both high-performance storage at the higher tier to ATLAS analysis jobs and large, low-cost disk capacity at...Go to contribution page
-
Marco Cattaneo (CERN)22/05/2012, 14:20The LHCb experiment is a spectrometer dedicated to the study of heavy flavor at the LHC. The rate of proton-proton collisions at the LHC is 15 MHz, but disk space limitations mean that only 3 kHz can be written to tape for offline processing. For this reason the LHCb data acquisition system -- trigger -- plays a key role in selecting signal events and rejecting background. Because the trigger...Go to contribution page
-
Fernando Harald Barreiro Megino (CERN IT ES)22/05/2012, 14:20Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelThe ATLAS Computing Model was designed around the concepts of grid computing; since the start of data-taking, this model has proven very successful in the federated operation of more than one hundred Worldwide LHC Computing Grid (WLCG) sites for offline data distribution, storage, processing and analysis. However, new paradigms in computing, namely virtualization and cloud computing, present...Go to contribution page
-
Yves Kemp (Deutsches Elektronen-Synchrotron (DE))22/05/2012, 14:20Preserving data from past experiments and preserving the ability to perform analysis with old data is of growing importance in many domains of science, including High Energy Physics (HEP). A study group on this issue, DPHEP, has been established in this field to provide guidelines and a structure for international collaboration on data preservation projects in HEP. This...Go to contribution page
-
Jakob Blomer (Ludwig-Maximilians-Univ. Muenchen (DE))22/05/2012, 14:20Software Engineering, Data Stores and Databases (track 5)ParallelThe CernVM File System (CernVM-FS) is a read-only file system used to access HEP experiment software and conditions data. Files and directories are hosted on standard web servers and mounted in a universal namespace. File data and metadata are downloaded on demand and locally cached. CernVM-FS was originally developed to decouple the experiment software from virtual machine hard disk...Go to contribution page
-
Brian Paul Bockelman (University of Nebraska (US))22/05/2012, 14:20Computer Facilities, Production Grids and Networking (track 4)ParallelWhile the LHC data movement systems have demonstrated the ability to move data at the necessary throughput, we have identified two weaknesses: the latency for physicists to access data and the complexity of the tools involved. To address these, both ATLAS and CMS have begun to federate regional storage systems using Xrootd. Xrootd, referring to a protocol and implementation, allows us to...Go to contribution page
-
Alexander Mazurov (Universita di Ferrara (IT))22/05/2012, 14:45Software Engineering, Data Stores and Databases (track 5)ParallelThe LHCb software is based on the Gaudi framework, on top of which several large and complex software applications are built. The LHCb experiment is now in the active phase of collecting and analyzing data, and significant performance problems arise in the Gaudi-based software, ranging from the High Level Trigger (HLT) programs to the data analysis frameworks (DaVinci). It’s not easy to...Go to contribution page
-
Dr Xavier Espinal Curull (Universitat Autònoma de Barcelona (ES))22/05/2012, 14:45Computer Facilities, Production Grids and Networking (track 4)ParallelScientific experiments are producing huge amounts of data, and they continue to increase the size of their datasets and the total volume of data. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of Scientific Data Centres has shifted from coping efficiently with petabyte-scale storage...Go to contribution page
-
Oliver Oberst (KIT - Karlsruhe Institute of Technology (DE))22/05/2012, 14:45Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelThe specific requirements concerning the software environment within the HEP community constrain the choice of resource providers for the outsourcing of computing infrastructure. The use of virtualization in HPC clusters and in the context of cloud resources is therefore a subject of recent developments in scientific computing. The dynamic virtualization of worker nodes in common batch...Go to contribution page
-
Mr Igor Mandrichenko (Fermilab)22/05/2012, 14:45In HEP, scientific research is performed by large collaborations of organizations and individuals. The logbook of a scientific collaboration is an important part of the collaboration record. Often, it contains experimental data. At FNAL, we developed an Electronic Collaboration Logbook (ECL) application which is used by about 20 different collaborations, experiments and groups at FNAL. ECL is the...Go to contribution page
-
Gordon Watts (University of Washington (US))22/05/2012, 14:45Modern HEP analysis requires multiple passes over large datasets. For example, one first has to reweight the jet energy spectrum in Monte Carlo to match data before plots of any other jet-related variable can be made. This requires a pass over the Monte Carlo and the data to derive the reweighting, and then another pass over the Monte Carlo to plot the variables of real interest....Go to contribution page
-
Christophe Haen (Univ. Blaise Pascal Clermont-Fe. II (FR))22/05/2012, 15:10Software Engineering, Data Stores and Databases (track 5)ParallelThe LHCb online system relies on a large and heterogeneous IT infrastructure made from thousands of servers on which many different applications are running. They run a great variety of tasks: critical ones such as data taking and secondary ones like web servers. The administration of such a system, and making sure it is working properly, represents a very important workload for the small...Go to contribution page
-
Jason Alexander Smith (Brookhaven National Laboratory (US))22/05/2012, 15:10Computer Facilities, Production Grids and Networking (track 4)ParallelManaging the infrastructure of a large and complex data center can be extremely difficult without taking advantage of automated services. Puppet is a seasoned, open-source tool designed for enterprise-class centralized configuration management. At the RHIC/ATLAS Computing Facility at Brookhaven National Laboratory, we have adopted Puppet as part of a suite of tools, including Git, GLPI, and...Go to contribution page
-
Mr Alessandro Italiano (INFN-CNAF), Dr Giacinto Donvito (INFN-Bari)22/05/2012, 15:10Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelIn this paper we present the latest developments introduced in the WNoDeS framework (http://web.infn.it/wnodes); we will in particular describe inter-cloud connectivity, support for multiple batch systems, and the coexistence of virtual and real environments on the same hardware. Specific effort has been dedicated to the work needed to deploy a multi-site WNoDeS installation. The goal is to...Go to contribution page
-
Linda Coney (University of California, Riverside)22/05/2012, 15:10Project management tools like Trac are commonly used within the open-source community to coordinate projects. The Muon Ionization Cooling Experiment (MICE) uses the project management web application Redmine to host mice.rl.ac.uk. Many groups within the experiment have a Redmine project: analysis, computing and software (including offline, online, controls and monitoring, and database...Go to contribution page
-
Thomas Kuhr (KIT - Karlsruhe Institute of Technology (DE))22/05/2012, 16:35Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelThe Belle II experiment, a next-generation B factory experiment at KEK, is expected to record a data volume two orders of magnitude larger than that of its predecessor, the Belle experiment. The data size and rate are comparable to or larger than those of the LHC experiments and require a change of the computing model from the Belle approach, where basically all computing resources were provided by KEK, to a...Go to contribution page
-
Mr Tigran Mkrtchyan (DESY/dCache.ORG)22/05/2012, 16:35Software Engineering, Data Stores and Databases (track 5)ParalleldCache is a high-performance, scalable storage system widely used by the HEP community. In addition to a set of home-grown protocols, we also provide industry-standard access mechanisms like WebDAV and NFSv4.1. This support places dCache in direct competition with commercial solutions. Nevertheless, conforming to a protocol is not enough; our implementations must perform comparably or even better than...Go to contribution page
-
Daniel Colin Van Der Ster (CERN)22/05/2012, 16:35Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelFrequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have...Go to contribution page
-
Andrew Norman (Fermilab)22/05/2012, 16:35The NOvA experiment at Fermi National Accelerator Lab has been designed and optimized to perform a suite of measurements critical to our understanding of the neutrino’s properties, their oscillations and their interactions. NOvA presents a unique set of data acquisition and computing challenges due to the immense size of the detectors and the data volumes that are generated through the...Go to contribution page
-
Dmitry Ozerov (Deutsches Elektronen-Synchrotron (DE)), Dr Patrick Fuhrmann (DESY)22/05/2012, 17:00Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelOne of the most crucial requirements for online storage is fast and efficient access to data. Although smart client-side caching often compensates for discomforts like latency and server disk congestion, spinning disks, with their limited ability to serve multi-stream random access patterns, seem to be the cause of most of the observed inefficiencies. With the appearance of the...Go to contribution page
-
Marco Corvo (CNRS)22/05/2012, 17:00The SuperB asymmetric energy e+e- collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab-1 and a luminosity target of 10^36 cm-2 s-1. These parameters require a substantial...Go to contribution page
-
Raffaello Trentadue (Universita e INFN (IT))22/05/2012, 17:00Software Engineering, Data Stores and Databases (track 5)ParallelThe LCG Persistency Framework consists of three software packages (POOL, CORAL and COOL) that address the data access requirements of the LHC experiments in several different areas. The project is the result of the collaboration between the CERN IT Department and the three experiments (ATLAS, CMS and LHCb) that are using some or all of the Persistency Framework components to access their data....Go to contribution page
-
Qiming Lu (Fermi National Accelerator Laboratory)22/05/2012, 17:00A complex running system, such as the NOvA online data acquisition, consists of a large number of distributed but closely interacting components. This paper describes a generic realtime correlation analysis and event identification engine, named Message Analyzer. Its purpose is to capture run time abnormalities and recognize system failures based on log messages from participating components....Go to contribution page
-
Mrs Jianlin Zhu (Central China Normal University (CN))22/05/2012, 17:00Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelA Grid is a geographically distributed environment with autonomous sites that share resources collaboratively. In this context, the main issue within a Grid is encouraging site-to-site interactions, increasing the trust, confidence and reliability of the sites to share resources. To achieve this, the trust concept is a vital component of every service transaction and needs to be applied in the...Go to contribution page
-
Dr Stefan Lueders (CERN)22/05/2012, 17:25Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelAccess protection is one of the cornerstones of security. The rule of least privilege demands that any access to computer resources like computing services or web applications is restricted in such a way that only users with a need to do so can access those resources. Usually this is done by authenticating the user, asking her for something she knows, e.g. a (public) username and secret password....Go to contribution page
-
Parag Mhashilkar (Fermi National Accelerator Laboratory)22/05/2012, 17:25Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelGrid computing has enabled scientific communities to effectively share computing resources distributed over many independent sites. Several such communities, or Virtual Organizations (VO), in the Open Science Grid and the European Grid Infrastructure use the glideinWMS system to run complex application work-flows. GlideinWMS is a pilot-based workload management system (WMS) that creates on...Go to contribution page
-
Alastair Dewhurst (STFC - Science &amp; Technology Facilities Council (GB))22/05/2012, 17:25Software Engineering, Data Stores and Databases (track 5)ParallelThe ATLAS experiment deployed Frontier technology world-wide during the initial year of LHC collision data taking to enable user analysis jobs running on the Worldwide LHC Computing Grid to access database-resident data. Since that time, the deployment model has evolved to optimize resources, improve performance, and streamline maintenance of Frontier and related infrastructure. In this...Go to contribution page
-
Matt Toups (Columbia University)22/05/2012, 17:25The Double Chooz (DC) reactor anti-neutrino experiment consists of a neutrino detector and a large-area Outer Veto detector. A custom data-acquisition (DAQ) system, written in the Ada language, was developed for all the sub-detectors of the neutrino detector, along with a generic object-oriented data acquisition system for the Outer Veto detector. Generic object-oriented programming was also used to...Go to contribution page
-
Dr florian Uhlig (GSI)22/05/2012, 17:25The FairRoot framework is an object oriented simulation, reconstruction and data analysis framework based on ROOT. It includes core services for detector simulation and offline analysis. The project started as a software framework for the CBM experiment at GSI, and later became the standard software for simulation, reconstruction and analysis for CBM, PANDA, R3B and ASYEOS at GSI/FAIR, as...Go to contribution page
-
Iwona Sakrejda, Jeff Porter (Lawrence Berkeley National Lab. (US))22/05/2012, 17:50Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelThe ALICE Grid infrastructure is based on AliEn, a lightweight open source framework built on Web Services and a Distributed Agent Model in which job agents are submitted onto a grid site to prepare the environment and pull work from a central task queue located at CERN. In the standard configuration, each ALICE grid site supports an ALICE-specific VO box as a single point of contact between...Go to contribution page
-
Stefano Spataro (University of Turin)22/05/2012, 17:50The PANDA experiment will study the collisions of beams of anti-protons, with momenta ranging from 2 to 15 GeV/c, with fixed proton and nuclear targets in the charm energy range, and will be built at the FAIR facility. In preparation for the experiment, the PandaRoot software framework is under development for detector simulation, reconstruction and data analysis, running on an Alien2-based grid....Go to contribution page
-
Giacomo Govi (Fermi National Accelerator Lab. (US))22/05/2012, 17:50Software Engineering, Data Stores and Databases (track 5)ParallelData management for a wide category of non-event data plays a critical role in the operation of the CMS experiment. The processing chain (data taking, reconstruction, analysis) relies on the prompt availability of specific, time-dependent data describing the state of the various detectors and their calibration parameters, which are treated separately from event data. The Condition Database...Go to contribution page
-
Linda Coney (University of California, Riverside)22/05/2012, 17:50The Muon Ionization Cooling Experiment (MICE) is designed to test transverse cooling of a muon beam, demonstrating an important step along the path toward creating future high intensity muon beam facilities. Protons in the ISIS synchrotron impact a titanium target, producing pions which decay into muons that propagate through the beam line to the MICE cooling channel. Along the beam line,...Go to contribution page
-
Dr Andrea Sciaba (CERN), Lothar A.T. Bauerdick (Fermi National Accelerator Lab. (US))22/05/2012, 17:50Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelThe operation of the CMS computing system requires a complex monitoring system to cover all its aspects: central services, databases, the distributed computing infrastructure, production and analysis workflows, the global overview of the CMS computing activities and the related historical information. Several tools are available to provide this information, developed both inside and outside of...Go to contribution page
-
Fons Rademakers (CERN)23/05/2012, 08:30
-
Makoto Asai (SLAC National Accelerator Laboratory (US))23/05/2012, 09:00
-
Dr David South (DESY)23/05/2012, 09:30
-
Mr Jacek Becla (SLAC)23/05/2012, 10:30
-
Mr Andreas Joachim Peters (CERN)23/05/2012, 11:00
-
Philippe Galvez (California Institute of Technology (US))23/05/2012, 11:30
-
Adrian Pope (Argonne National Laboratory)24/05/2012, 08:30
-
Johan Messchendorp (University of Groningen)24/05/2012, 09:00
-
Mr Federico Carminati (CERN)24/05/2012, 09:30
-
Artur Jerzy Barczyk (California Institute of Technology (US))24/05/2012, 10:30
-
24/05/2012, 11:00
-
Tony Johnson (Nuclear Physics Laboratory)24/05/2012, 11:30
-
Dr Thomas Mc Cauley (Fermi National Accelerator Lab. (US))24/05/2012, 13:30The line between native and web applications is becoming increasingly blurred as modern web browsers are becoming powerful platforms on which applications can be run. Such applications are trivial to install and are readily extensible and easy to use. In an educational setting, web applications permit a way to rapidly deploy tools in a highly-restrictive computing environment. The I2U2...Go to contribution page
-
Niko Neufeld (CERN), Vijay Kartik Subbiah (CERN)24/05/2012, 13:30This contribution describes the design and development of a fully software-based Online test-bench for LHCb. The current “Full Experiment System Test” (FEST) is a programmable data injector with a test setup that runs using a simulated data acquisition (DAQ) chain. FEST is heavily used in LHCb by different groups, and thus the motivation for complete software emulation of the test-bench is to...Go to contribution page
-
Mr Gero Müller (III. Physikalisches Institut A, RWTH Aachen University, Germany)24/05/2012, 13:30To understand in detail cosmic magnetic fields and sources of Ultra High Energy Cosmic Rays (UHECRs) we have developed a Monte Carlo simulation for galactic and extragalactic propagation. In our approach we identify three different propagation regimes for UHECRs, the Milky Way, the local universe out to 110 Mpc, and the distant universe. For deflections caused by the Galactic magnetic field...Go to contribution page
-
Mr Matej Batic (Jozef Stefan Institute)24/05/2012, 13:30The Statistical Toolkit is an open source system specialized in the statistical comparison of distributions. It addresses requirements common to different experimental domains, such as simulation validation (e.g. comparison of experimental and simulated distributions), regression testing in software development and detector performance monitoring. The first development cycles concerned the...Go to contribution page
-
Dr Isidro Gonzalez Caballero (Universidad de Oviedo (ES))24/05/2012, 13:30The analysis of the complex LHC data usually follows a standard path that aims at minimizing not only the amount of data but also the number of observables used. After a number of steps of slimming and skimming the data, the remaining few terabytes of ROOT files hold a selection of the events and a flat structure for the variables needed that can be more easily inspected and traversed in the...Go to contribution page
-
Chris Bee (Universite d'Aix - Marseille II (FR))24/05/2012, 13:30The parameters of the beam spot produced by the LHC in the ATLAS interaction region are computed online using the ATLAS High Level Trigger (HLT) system. The high rate of triggered events is exploited to make precise measurements of the position, size and orientation of the luminous region in near real-time, as these parameters change significantly even during a single data-taking run. We...Go to contribution page
-
Mario Lassnig (CERN)24/05/2012, 13:30The ATLAS Distributed Data Management system requires accounting of its contents at the metadata layer. This presents a hard problem due to the large scale of the system and the high rate of concurrent modifications of data. The system must efficiently account more than 80PB of disk and tape that store upwards of 500 million files across 100 sites globally. In this work a generic accounting...Go to contribution page
-
Luis Ignacio Lopera Gonzalez (Universidad de los Andes (CO))24/05/2012, 13:30Since 2009, when the LHC came back into active service, the Data Quality Monitoring (DQM) team has been faced with the need to homogenize and automate operations across all the different environments within which DQM is used for data certification. The main goal of automation is to reduce operator intervention to the minimum possible level, especially in the area of DQM file management, where...Go to contribution page
-
Hee Seo (Hanyang Univ.)24/05/2012, 13:30Physics data libraries play an important role in Monte Carlo simulation systems: they provide fundamental atomic and nuclear parameters, and tabulations of basic physics quantities (cross sections, correction factors, secondary particle spectra etc.) for particle transport. This report summarizes recent efforts for the improvement of the accuracy of physics data libraries, concerning two...Go to contribution page
-
Ombretta Pinazza (Universita e INFN (IT))24/05/2012, 13:30ALICE is one of the four main experiments at the CERN Large Hadron Collider (LHC) in Geneva. The ALICE Detector Control System (DCS) is responsible for the operation and monitoring of the 18 detectors of the experiment and of the central systems, and for collecting and managing alarms, data and commands. Furthermore, it is the central tool to monitor and verify the beam mode and conditions in order...Go to contribution page
-
Joerg Behr (Deutsches Elektronen-Synchrotron (DE))24/05/2012, 13:30The CMS all-silicon tracker consists of 16588 modules. Therefore its alignment procedures require sophisticated algorithms. Advanced tools of computing, tracking and data analysis have been deployed for reaching the targeted performance. Ultimate local precision is now achieved by the determination of sensor curvatures, challenging the algorithms to determine about 200k parameters...Go to contribution page
-
Matthew Littlefield (Brunel University)24/05/2012, 13:30The Mice Analysis User Software (MAUS) for the Muon Ionisation Cooling Experiment (MICE) is a new simulation and analysis framework based on best-practice software design methodologies. It replaces G4MICE as it offers new functionality and incorporates an improved design structure. A new and effective control and management system has been created for handling the simulation geometry within...Go to contribution page
-
Luca dell'Agnello (INFN-CNAF)24/05/2012, 13:30An automated virtual test environment is a way to improve testing, validation and verification activities when several deployment scenarios must be considered. Such a solution has been designed and developed at INFN CNAF to improve the software development life cycle and to optimize the deployment of a new software release (sometimes delayed by the difficulties met during the installation and...Go to contribution page
-
Dr John Harvey (CERN)24/05/2012, 13:30The PH/SFT group at CERN is responsible for developing, releasing and deploying some of the software packages used in the data processing systems of CERN experiments, in particular those at the LHC. They include ROOT, GEANT4, CernVM, Generator Services, and Multi-core R&D (http://sftweb.cern.ch/). We have already submitted a number of abstracts for oral presentations at the conference. Here we...Go to contribution page
-
Dr Jack Cranshaw (Argonne National Laboratory (US))24/05/2012, 13:30The ATLAS event-level metadata infrastructure supports applications that range from data quality monitoring, anomaly detection, and fast physics monitoring to event-level selection and navigation to file-resident event data at any processing stage, from raw through analysis object data, in globally distributed analysis. A central component of the infrastructure is a distributed TAG database,...Go to contribution page
-
Markus Frank (CERN)24/05/2012, 13:30The LHCb collaboration consists of roughly 700 physicists from 52 institutes and universities. Most of the collaborating physicists - including subdetector experts - are not permanently based at CERN. This paper describes the architecture used to publish data internal to the LHCb experiment control- and data acquisition system to the world wide web. Collaborators can access the online...Go to contribution page
-
Dr Domenico Giordano (CERN)24/05/2012, 13:30The conversion of photons into electron-positron pairs in the detector material is a nuisance in the event reconstruction of high energy physics experiments, since the measurement of the electromagnetic component of the interaction products is degraded. Nonetheless, this unavoidable detector effect can also be extremely useful. The reconstruction of photon conversions can be used to probe the...Go to contribution page
-
Jochen Meyer (Bayerische Julius Max. Universitaet Wuerzburg (DE))24/05/2012, 13:30Accurate and detailed descriptions of the HEP detectors are turning out to be crucial elements of the software chains used for simulation, visualization and reconstruction programs: for this reason, it is of paramount importance to have at one's disposal, and to deploy, generic detector description tools which allow for precise modeling, visualization, visual debugging and interactivity, and which can be...Go to contribution page
-
Daniela Remenska (NIKHEF (NL))24/05/2012, 13:30DIRAC is the Grid solution designed to support LHCb production activities as well as user data analysis. Based on a service-oriented architecture, DIRAC consists of many cooperating distributed services and agents delivering the workload to the Grid resources. Services accept requests from agents and running jobs, while agents run as light-weight components, fulfilling specific goals. Services...Go to contribution page
-
Julia Grebenyuk (DESY)24/05/2012, 13:30A many-parameter fit to extract the proton structure functions from the Neutral Current deep-inelastic scattering cross sections, measured from the data collected at the HERA ep collider with the ZEUS detector, will be presented. The structure functions F_2 and F_L are extracted as a function of Bjorken x in bins of virtuality Q2. The fit is performed with the Bayesian Analysis Toolkit (BAT)...Go to contribution page
-
Gennadiy Lukhanin (Fermi National Accelerator Lab. (US)), Martin Frank (UVA)24/05/2012, 13:30In the NOvA experiment, the Detector Controls System (DCS) provides a method for controlling and monitoring important detector hardware and environmental parameters. It is essential for operating the detector and is required to have access to roughly 370,000 independent programmable channels via more than 11,600 physical devices. In this paper, we demonstrate an application of Control...Go to contribution page
-
Prof. John Swain (Northeastern University)24/05/2012, 13:30Modern particle physics experiments use short pieces of code called ``triggers'' in order to make rapid decisions about whether incoming data represents potentially interesting physics or not. Such decisions are irreversible, and while it is extremely important that they are made correctly, little use has been made in the community of formal verification methodology. The goal of this...Go to contribution page
-
Andrea Bocci (CERN)24/05/2012, 13:30The CMS experiment has been designed with a 2-level trigger system: the Level 1 Trigger, implemented using FPGA and custom ASIC technology, and the High Level Trigger (HLT), implemented running a streamlined version of the CMS offline reconstruction software on a cluster of commercial rack-mounted computers, comprising thousands of CPUs. The design of a software trigger system requires a...Go to contribution page
-
Pauline Bernat (University College London (UK))24/05/2012, 13:30The rising instantaneous luminosity of the LHC poses an increasing challenge to the pattern recognition algorithms for track reconstruction at the ATLAS Inner Detector Trigger. We will present the performance of these algorithms in terms of signal efficiency, fake tracks and execution time, as a function of the number of proton-proton collisions per bunch-crossing, in 2011 data and in...Go to contribution page
-
Luiz Fernando Cagiano Parodi De Frias (Univ. Federal do Rio de Janeiro (BR))24/05/2012, 13:30In 2010, the LHC produced 7 TeV proton and heavy-ion collisions continually, generating a huge amount of data that has been analyzed and reported in several studies. Since then, physicists have been bringing out papers and conference notes announcing results and achievements. During 2010, 37 papers and 102 conference notes were published, and by September 2011 there were already...Go to contribution page
-
Steven Andrew Farrell (Department of Physics)24/05/2012, 13:30The ATLAS data quality software infrastructure provides tools for prompt investigation of and feedback on collected data and propagation of these results to analysis users. Both manual and automatic inputs are used in this system. In 2011, we upgraded our framework to record all issues affecting the quality of the data in a manner which allows users to extract as much information (of the...Go to contribution page
-
Grigori Rybkin (Universite de Paris-Sud 11 (FR))24/05/2012, 13:30Software packaging is an indispensable part of the build process and a prerequisite for deployment. The full ATLAS software stack consists of TDAQ, HLT, and Offline software. These software groups depend on some 80 external software packages. We present the tools, in the package PackDist, developed and used to package all this software except for the TDAQ project. PackDist is based on and driven by CMT, the ATLAS software...Go to contribution page
-
Steven Goldfarb (University of Michigan (US))24/05/2012, 13:30The newfound ability of Social Media to transform public communication back to a conversational nature provides HEP with a powerful tool for Outreach and Communication. By far, the most effective component of nearly any visit or public event is the fact that the students, teachers, media, and members of the public have a chance to meet and converse with real scientists. While more than...Go to contribution page
-
Jochen Ulrich (Johann-Wolfgang-Goethe Univ. (DE))24/05/2012, 13:30The High-Level-Trigger (HLT) cluster of the ALICE experiment is a computer cluster with about 200 nodes and 20 infrastructure machines. In its current state, the cluster consists of nearly 10 different configurations of nodes in terms of installed hardware, software and network structure. In such a heterogeneous environment with a distributed application, information about the actual...Go to contribution page
-
Pierrick Hanlet (Illinois Institute of Technology)24/05/2012, 13:30The Muon Ionization Cooling Experiment (MICE) is a demonstration experiment to prove the feasibility of cooling a beam of muons for use in a Neutrino Factory and/or Muon Collider. The MICE cooling channel is a section of a modified Study II cooling channel which will provide a 10% reduction in beam emittance. In order to ensure a reliable measurement, MICE will measure the beam emittance...Go to contribution page
-
Alexander Oh (University of Manchester (GB))24/05/2012, 13:30The online event selection is crucial to reject most of the events containing uninteresting background collisions while preserving as much as possible the interesting physical signals. The b-jet selection is part of the trigger strategy of the ATLAS experiment and a set of dedicated triggers is in place from the beginning of the 2011 data-taking period and is contributing to keep the total...Go to contribution page
-
Marius Tudor Morar (University of Manchester (GB))24/05/2012, 13:30The ATLAS High Level Trigger (HLT) is organized in two trigger levels running different selection algorithms on heterogeneous farms composed of off-the-shelf processing units. The processing units have varying computing power and can be integrated using diverse network connectivity. The ATLAS working conditions are changing mainly due to the constant increase of the LHC instantaneous...Go to contribution page
-
Dr Daniel Kollar (Max-Planck-Institut fuer Physik, Munich)24/05/2012, 13:30The main goals of data analysis are to infer the parameters of models from data, to draw conclusions on the validity of models, and to compare their predictions, allowing the most appropriate model to be selected. The Bayesian Analysis Toolkit, BAT, is a tool developed to evaluate the posterior probability distribution for models and their parameters. It is centered around Bayes' Theorem and...Go to contribution page
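The Markov-chain evaluation of a posterior that BAT is built around can be illustrated with a toy model. This is a minimal sketch, not BAT itself: the binomial model, flat prior, proposal width and sample count are all invented for the example.

```python
import math
import random

def metropolis_posterior(n_heads, n_flips, n_samples=20000, seed=42):
    # Toy random-walk Metropolis chain sampling p(theta | data) for a
    # binomial model with a flat prior -- a stand-in for the Markov-chain
    # sampling BAT performs on full physics models.
    rng = random.Random(seed)

    def log_post(theta):
        if not 0.0 < theta < 1.0:
            return float("-inf")  # zero prior outside the physical range
        return n_heads * math.log(theta) + (n_flips - n_heads) * math.log(1.0 - theta)

    theta, samples = 0.5, []
    for _ in range(n_samples):
        proposal = theta + rng.gauss(0.0, 0.1)
        delta = log_post(proposal) - log_post(theta)
        # Metropolis acceptance: always take uphill moves, sometimes downhill.
        if delta >= 0 or rng.random() < math.exp(delta):
            theta = proposal
        samples.append(theta)
    return samples

# Posterior mean for 70 heads in 100 flips should sit near 71/102 ~ 0.70.
mean = sum(metropolis_posterior(70, 100)) / 20000
```

The same chain, run over the model's full parameter space, yields marginalized distributions and credible intervals for each parameter.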
-
Prof. Kihyeon Cho (KISTI)24/05/2012, 13:30In order to search for new physics beyond the standard model, the next generation of B-factory experiment, Belle II will collect a huge data sample that is a challenge for computing systems. The Belle II experiment, which should commence data collection in 2015, expects data rates 50 times higher than that of Belle. In order to handle this amount of data, we need a new data handling system...Go to contribution page
-
Soohyung Lee (Korea University)24/05/2012, 13:30A next generation B-factory experiment, Belle II, is now being constructed at KEK in Japan. The upgraded accelerator SuperKEKB is designed to have the maximum luminosity of 8 × 10^35 cm^−2s^−1 that is a factor of 40 higher than the current world record. As a consequence, the Belle II detector yields a data stream of the event size ~1 MB at a Level 1 rate of 30 kHz. The Belle II High Level...Go to contribution page
-
Benedikt Hegner (CERN)24/05/2012, 13:30Bug tracking is a process which comprises activities of reporting, documenting, reviewing, planning, and fixing software bugs. While there exist many studies on the usage of bug tracking tools and procedures in open source software, the situation in high energy physics has never been looked at in a systematic way. In our study we have compared and analyzed several scientific and...Go to contribution page
-
Karol Hennessy (Liverpool)24/05/2012, 13:30The LHCb experiment is dedicated to searching for New Physics effects in the heavy flavour sector, precise measurements of CP violation and rare heavy meson decays. Precise tracking and vertexing around the interaction point is crucial in achieving these physics goals. The LHCb VELO (VErtex LOcator) silicon micro-strip detector is the highest precision vertex detector at the LHC and is...Go to contribution page
-
Dr Shengsen Sun (Institute of High Energy Physics Chinese Academy of Sciences)24/05/2012, 13:30The BESIII TOF detector system based on plastic scintillation counters consists of a double layer barrel and two single layer end caps. With the time calibration, the double-layer barrel TOF achieved a 78 ps time resolution for electrons, and the end caps about 110 ps for muons. The attenuation length and effective velocity calibrations and the TOF reconstruction are also described. The Kalman filter...Go to contribution page
-
Marek Domaracky (CERN)24/05/2012, 13:30Over the last few years, we have seen the broadcast industry moving to mobile devices and to the broadband Internet delivering HD quality. To keep up with the trends, we deployed a new streaming infrastructure. We are now delivering live and on-demand video to all major platforms like Windows, Linux, Mac, iOS and Android running on PC, Smart Phone, Tablet or TV. To optimize the viewing...Go to contribution page
-
Mariusz Piorkowski24/05/2012, 13:30Oracle-based database applications underpin many key aspects of operations for both the LHC accelerator and the LHC experiments. In addition to overall performance, predictability of response is a key requirement to ensure smooth operations—and delivering predictability requires understanding the applications from the ground up. Fortunately, the Oracle database management system provides...Go to contribution page
-
Frank-Dieter Gaede (Deutsches Elektronen-Synchrotron (DE))24/05/2012, 13:30ILD is a proposed detector concept for a future linear collider which envisages a Time Projection Chamber (TPC) as the central tracking detector. The ILD TPC will have a large number of voxels whose dimensions are small compared to the typical distances between charged particle tracks. This allows for the application of simple nearest neighbor type clustering algorithms to find clean...Go to contribution page
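The nearest-neighbor clustering idea can be sketched as follows. This is an illustrative toy, not the ILD reconstruction code: the 2-D hit positions and the distance cut are invented, and real TPC code works on pad/time-bin grids rather than free coordinates.

```python
from collections import deque

def nn_clusters(hits, max_dist=1.5):
    # Group hit positions into clusters: two hits belong to the same
    # cluster if a chain of neighbours closer than max_dist connects them.
    unused = set(range(len(hits)))
    clusters = []
    while unused:
        seed = unused.pop()
        cluster, frontier = [seed], deque([seed])
        while frontier:
            i = frontier.popleft()
            # Collect all not-yet-assigned hits within the distance cut.
            close = [j for j in unused
                     if (hits[i][0] - hits[j][0]) ** 2
                      + (hits[i][1] - hits[j][1]) ** 2 <= max_dist ** 2]
            for j in close:
                unused.remove(j)
                cluster.append(j)
                frontier.append(j)
        clusters.append(sorted(cluster))
    return clusters

# Two well-separated groups of hits give two clusters.
hits = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10)]
clusters = nn_clusters(hits)
```

When voxel sizes are small compared to inter-track distances, as the abstract notes, such a simple connectivity criterion is enough to separate hits from different tracks cleanly.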
-
Evaldas Juska (Fermi National Accelerator Lab. (US))24/05/2012, 13:30Cathode strip chambers (CSC) compose the endcap muon system of the CMS experiment at the LHC. Two years of data taking have proven that various online systems like Detector Control System (DCS), Data Quality Monitoring (DQM), Trigger, Data Acquisition (DAQ) and other specialized applications are doing their task very well. But the need for better integration between these systems is starting...Go to contribution page
-
Kaori Maeshima (Fermi National Accelerator Lab. (US))24/05/2012, 13:30In operating a complex high energy physics experiment such as CMS, two of the important issues are to record high quality data as efficiently as possible and, correspondingly, to have well validated and certified data in a timely manner for physics analyses. Integrated and user-friendly monitoring systems and coherent information flow play an important role to accomplish this. The CMS...Go to contribution page
-
Giacomo Sguazzoni (Universita e INFN (IT))24/05/2012, 13:30The CMS tracking code is organized in several levels, known as 'iterative steps', each optimized to reconstruct a class of particle trajectories, such as those of particles originating from the primary vertex or displaced tracks from particles produced at secondary vertices. Each iterative step consists of seeding, pattern recognition and fitting by a Kalman filter, and a final filtering and...Go to contribution page
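The Kalman-filter fit in each iterative step follows a predict/update cycle. A minimal scalar version is sketched below; this is not the CMS tracking code, which propagates five-parameter track states with full covariance matrices through the detector material, but the arithmetic is the same cycle in one dimension.

```python
def kalman_1d(measurements, meas_var=1.0, process_var=0.01):
    # Minimal scalar Kalman filter: estimate a slowly drifting quantity
    # from noisy measurements via the predict/update cycle.
    x, p = measurements[0], 1.0   # initial state estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        p += process_var              # predict: variance grows between hits
        k = p / (p + meas_var)        # Kalman gain
        x += k * (z - x)              # update the state with the residual
        p *= (1.0 - k)                # update the variance
        estimates.append(x)
    return estimates

# Noisy measurements of a constant value 5.0 converge towards 5.0.
est = kalman_1d([5.3, 4.8, 5.1, 4.9, 5.2, 5.0])
```

The gain `k` automatically weights each new measurement against the accumulated estimate, which is why the filter is the standard choice for combining hits of varying precision along a trajectory.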
-
Sunanda Banerjee (Saha Institute of Nuclear Physics (IN))24/05/2012, 13:30The CMS simulation, based on the Geant4 toolkit, has been operational within the new CMS software framework for more than four years. The description of the detector including the forward regions has been completed and detailed investigation of detector positioning and material budget has been carried out using collision data. Detailed modelling of detector noise has been performed and...Go to contribution page
-
Dirk Hufnagel (Fermi National Accelerator Lab. (US))24/05/2012, 13:30The Tier-0 processing system is the initial stage of the multi-tiered computing system of CMS. It is responsible for the first processing steps of data from the CMS Experiment at CERN. This talk covers the complete overhaul (rewrite) of the system for the 2012 run, to bring it into line with the new CMS Workload Management system, improving scalability and maintainability for the next few years.Go to contribution page
-
Elizabeth Gallas (University of Oxford (GB))24/05/2012, 13:30In the ATLAS experiment, database systems generally store the bulk of conditions and configuration data needed by event-wise reconstruction and analysis jobs. These systems can be relatively large stores of information, organized and indexed primarily to store all information required for system-specific use cases and efficiently deliver the required information to event-based...Go to contribution page
-
Andrea Bocci (CERN)24/05/2012, 13:30The CMS experiment has been designed with a 2-level trigger system: the Level 1 Trigger, implemented using FPGA and custom ASIC technology, and the High Level Trigger (HLT), implemented running a streamlined version of the CMS offline reconstruction software on a cluster of commercial rack-mounted computers, comprising thousands of CPUs. The CMS software is written mostly in C++, using...Go to contribution page
-
Dr Martin Purschke (BROOKHAVEN NATIONAL LABORATORY)24/05/2012, 13:30The PHENIX detector system at the Relativistic Heavy Ion Collider (RHIC) was one of the first experiments getting to "LHC-era" data rates in excess of 500 MB/s of compressed data in 2004. In step with new detectors and increasing event sizes and rates, the data logging capability has grown to about 1500MB/s since then. We will explain the strategies we employ to cope with the data volumes...Go to contribution page
-
Jorn Adamczewski-Musch (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))24/05/2012, 13:30The Compressed Baryonic Matter (CBM) experiment is intended to run at the FAIR facility that is currently being built at GSI in Darmstadt, Germany. For testing of future CBM detector and readout electronics prototypes, several test beamtimes have been performed at different locations, such as GSI, COSY, and CERN PS. The DAQ software has to treat various data inputs, e.g. standard VME modules...Go to contribution page
-
Manuel Giffels (CERN)24/05/2012, 13:30The Data Bookkeeping Service 3 (DBS 3) provides an improved event data catalog for Monte Carlo and recorded data of the CMS (Compact Muon Solenoid) experiment at the Large Hadron Collider (LHC). It provides the necessary information used for tracking datasets, like data processing history, files and runs associated with a given dataset on a scale of about 10^5 datasets and more than 10^7...Go to contribution page
-
Matthias Richter (University of Oslo (NO))24/05/2012, 13:30High resolution detectors in high energy nuclear physics deliver a huge amount of data which is often a challenge for the data acquisition and mass storage. Lossless compression techniques on the level of the raw data can provide compression ratios up to a factor of 2. In ALICE, an effective compression factor of >5 for the Time Projection Chamber (TPC) is needed to reach an overall...Go to contribution page
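The factor-of-2 ceiling for lossless compression of raw data is easy to reproduce with a generic entropy coder such as zlib. This is an illustration only; the payload below is invented, and the actual ALICE TPC compression operates on the detector's binary raw format.

```python
import zlib

def compression_ratio(payload: bytes, level: int = 6) -> float:
    # Ratio of raw size to compressed size using a generic lossless
    # entropy coder (zlib / DEFLATE).
    return len(payload) / len(zlib.compress(payload, level))

# Structured, repetitive data compresses well; raw detector data with
# higher entropy typically only reaches a factor of about 2, which is
# why ALICE needs controlled data transformations to get beyond 5.
repetitive = b"ADC=0012 TDC=0034 " * 512
ratio = compression_ratio(repetitive)
```

The gap between what a generic codec achieves on raw data and the required overall factor is exactly what motivates detector-specific schemes that exploit the physics structure of the data.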
-
Charilaos Tsarouchas (National Technical Univ. of Athens (GR))24/05/2012, 13:30The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. The Detector Control System (DCS) of ATLAS is responsible for the supervision of the detector equipment, the reading of operational parameters, the propagation of the alarms and the archiving of important operational data in a relational database. DCS Data Viewer (DDV) is an application that provides access...Go to contribution page
-
Yu.nakahama Higuchi (CERN)24/05/2012, 13:30The LHC, at design capacity, has a bunch-crossing rate of 40 MHz, whereas the ATLAS detector has an average recording rate of about 300 Hz. To reduce the rate of events but still maintain a high efficiency of selecting rare events such as Higgs Boson decays, a three-level trigger system is used in ATLAS. Events are selected based on physics signatures such as events with energetic leptons,...Go to contribution page
-
Mantas Stankevicius (Vilnius University (LT))24/05/2012, 13:30CMSSW (CMS SoftWare) is the overall collection of software and services needed by the simulation, calibration and alignment, and reconstruction modules that process data so that physicists can perform their analyses. It is a long term project, with a large amount of source code. In large-scale, complex projects it is important to have as up-to-date and automated software documentation as...Go to contribution page
-
Andrea Petrucci (CERN)24/05/2012, 13:30The Error and Alarm system for the data acquisition of the Compact Muon Solenoid (CMS) at CERN has been successfully used for physics runs at the Large Hadron Collider (LHC) during the first three years of activity. Error and alarm processing entails the notification, collection, storage and visualization of all exceptional conditions occurring in the highly distributed CMS online system using a...Go to contribution page
-
Ms Chang Pi-Jung (Kansas University)24/05/2012, 13:30The Double Chooz experiment will measure reactor antineutrino flux from two detectors with a relative normalization uncertainty less than 0.6%. The Double Chooz physical environment monitoring system records conditions of the experiment's environment to ensure the stability of the active volume and readout electronics. The system monitors temperatures in the detector liquids, temperatures and...Go to contribution page
-
Tomasz Wolak (CERN)24/05/2012, 13:30The development and distribution of Grid middleware software projects, as large, complex, distributed systems, require a sizeable computing infrastructure for each stage of the software process: for instance, pools of machines for building and testing on several platforms. Software testing and the possibility of implementing realistic scenarios for the verification of grid middleware are a...Go to contribution page
-
Semen Lebedev (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))24/05/2012, 13:30The Compressed Baryonic Matter (CBM) experiment at the future FAIR facility at Darmstadt will measure dileptons emitted from the hot and dense phase in heavy-ion collisions. In case of an electron measurement, a high purity of identified electrons is required in order to suppress the background. Electron identification in CBM will be performed by a Ring Imaging Cherenkov (RICH) detector and...Go to contribution page
-
Dr Dmitry Litvintsev (Fermilab)24/05/2012, 13:30Enstore is a mass storage system developed by Fermilab that provides distributed access and management of the data stored on tapes. It uses the namespace service pnfs, developed by DESY, to provide a filesystem-like view of the stored data. Pnfs is a legacy product and is being replaced by a new implementation, called Chimera, which is also developed by DESY. The Chimera namespace offers multiple...Go to contribution page
-
Igor Oya (Institut für Physik, Humboldt-Universität zu Berlin, Newtonstrasse 15, D-12489 Berlin, Germany)24/05/2012, 13:30CTA (Cherenkov Telescope Array) is one of the largest ground-based astronomy projects being pursued and will be the largest facility for ground-based gamma-ray observations ever built. CTA will consist of two arrays (one in the Northern hemisphere and one in the Southern hemisphere) composed of several different sizes of telescopes. A prototype for the Medium Size Telescope (MST) type of a...Go to contribution page
-
Liam Duguid (University of London (GB))24/05/2012, 13:30The electron and photon triggers are among the most widely used triggers in ATLAS physics analyses. In 2011, the increasing luminosity and pile-up conditions demanded higher and higher thresholds and the use of tighter and tighter selections for the electron triggers. Optimizations were performed at all three levels of the ATLAS trigger system. At the high-level trigger (HLT), many variables...Go to contribution page
-
John Haggerty (Brookhaven National Laboratory)24/05/2012, 13:30The architecture of the PHENIX data acquisition system, and how it has evolved in 12 years of operation, will be reviewed. Custom data acquisition hardware front end modules embedded in the detector, operated in a largely inaccessible experimental hall, have been controlled and monitored, and a large software infrastructure has been developed around remote objects which are controlled from a...Go to contribution page
-
Dr Alexander Undrus (Brookhaven National Laboratory (US))24/05/2012, 13:30The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. Over more than 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verification of patches to existing...Go to contribution page
-
Alvaro Gonzalez Alvarez (CERN)24/05/2012, 13:30In 2002, the first central CERN service for version control based on CVS was set up. Since then, three different services based on CVS and SVN have been launched and run in parallel; there are user requests for another service based on git. In order to ensure that the most demanded services are of high quality in terms of performance and reliability, services in less demand had to be shut...Go to contribution page
-
Marius Tudor Morar (University of Manchester (GB))24/05/2012, 13:30The ATLAS experiment is observing proton-proton collisions delivered by the LHC accelerator at a centre of mass energy of 7 TeV. The ATLAS Trigger and Data Acquisition (TDAQ) system selects interesting events on-line in a three-level trigger system in order to store them at a budgeted rate of several hundred Hz, for an average event size of ~1.2 MB. This paper focuses on the TDAQ...Go to contribution page
-
Norman Anthony Graf (SLAC National Accelerator Laboratory (US))24/05/2012, 13:30Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in...Go to contribution page
-
Wolfgang Lukas (University of Innsbruck (AT))24/05/2012, 13:30We present the ATLAS simulation packages ATLFAST-II and ISF. ATLFAST-II combines a sophisticated fast parametrized simulation of the calorimeter system with full Geant4 simulation precision in the Inner Detector and Muon systems. This combination offers a relative increase in speed of around a factor of ten compared to the standard ATLAS detector simulation and is being used to...Go to contribution page
-
Rahmat Rahmat (University of Mississippi (US))24/05/2012, 13:30A framework for Fast Simulation of particle interactions in the CMS detector has been developed and implemented in the overall simulation, reconstruction and analysis framework of CMS. It produces data samples in the same format as the one used by the Geant4-based (henceforth Full) Simulation and Reconstruction chain; the output of the Fast Simulation of CMS can therefore be used in the...Go to contribution page
-
Gennaro Tortone (INFN Napoli)24/05/2012, 13:30The FAZIA project groups together several institutions in Nuclear Physics, which are working in the domain of heavy-ion induced reactions around and below the Fermi energy. The aim of the project is to build a 4Pi array for charged particles, with high granularity and good energy resolution, with A and Z identification capability over the widest possible range. It will use the...Go to contribution page
-
Alfonso Boiano (INFN)24/05/2012, 13:30FAZIA stands for the Four Pi A and Z Identification Array, a project which aims at building a new 4pi particle detector for charged particles. It will operate in the domain of heavy-ion induced reactions around the Fermi energy, and brings together several international institutions in Nuclear Physics. It is planned to operate with both stable and radioactive nuclear beams. A...Go to contribution page
-
Ms Heather Kelly (SLAC National Accelerator Laboratory)24/05/2012, 13:30The Fermi Gamma-ray Observatory, including the Large Area Telescope (LAT), was launched June 11, 2008. We are a relatively small collaboration, with a maximum of 25 software developers in our heyday. Within the LAT collaboration we support Red Hat Linux and Windows, and are moving towards Mac OS as well, for offline simulation, reconstruction and analysis tools. Early on it was decided to use...Go to contribution page
-
Elizabeth Gallas (University of Oxford (GB))24/05/2012, 13:30The ATLAS Metadata Interface (“AMI”) was designed as a generic cataloguing system, and as such it has found many uses in the experiment including software release management, tracking of reconstructed event sizes and control of dataset nomenclature. In this paper we will discuss the primary use of AMI which is to provide a catalogue of datasets (file collections) which is searchable using...Go to contribution page
-
Dinesh Ram (Johann-Wolfgang-Goethe Univ. (DE))24/05/2012, 13:30The ALICE High-Level Trigger (HLT) is a complex real-time system, whose primary objective is to scale down the data volume read out by the ALICE detectors to at most 4 GB/sec before being written to permanent storage. This can be achieved by using a combination of event filtering, selection of the physics regions of interest and data compression, based on detailed on-line event reconstruction....Go to contribution page
-
Francisca Garay Walls (University of Edinburgh (GB))24/05/2012, 13:30An overview of the current status of electromagnetic physics (EM) of the Geant4 toolkit is presented. Recent improvements are focused on the performance of large scale production for LHC and on the precision of simulation results over a wide energy range. Significant efforts have been made to improve the accuracy and CPU speed for EM particle transport. New biasing options available for Geant4...Go to contribution page
-
Mr Laurent Garnier (LAL-IN2P3-CNRS)24/05/2012, 13:30New developments on visualization drivers in the Geant4 software toolkitGo to contribution page
-
Dr Sebastien Binet (LAL/IN2P3)24/05/2012, 13:30Current HENP libraries and frameworks were written before multicore systems became widely deployed and used. From this environment, a 'single-thread' processing model naturally emerged but the implicit assumptions it encouraged are greatly impairing our abilities to scale in a multicore/manycore world. Writing scalable code in C++ for multicore architectures, while doable, is no...Go to contribution page
-
Jacob Russell Howard (University of Oxford (GB))24/05/2012, 13:30One possible option for the ATLAS High-Level Trigger (HLT) upgrade for higher LHC luminosity is to use GPU-accelerated event processing. In this talk we discuss parallel data preparation and track finding algorithms specifically designed to run on GPUs. We present a "client-server" solution for hybrid CPU/GPU event reconstruction which allows for the simple and flexible integration of...Go to contribution page
-
Dr Andrea Valassi (CERN)24/05/2012, 13:30The CORAL software is widely used by the LHC experiments for storing and accessing data using relational database technologies. CORAL provides a C++ abstraction layer that supports data persistency for several backends and deployment models, including local access to SQLite files, direct client access to Oracle and MySQL servers, and read-only access to Oracle through the FroNTier/Squid and...Go to contribution page
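The deployment-model abstraction CORAL provides can be mimicked in miniature with Python's DB-API, where the same relational code runs unchanged against an embedded backend (here SQLite in memory). This is a sketch of the idea only; the `conditions` table is a made-up example, not a CORAL schema, and CORAL's actual C++ API is not shown.

```python
import sqlite3

# The application codes against a generic relational interface; the
# backend (SQLite file, Oracle server, FroNTier cache, ...) becomes a
# deployment choice rather than a code change.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conditions (run INTEGER, tag TEXT, value REAL)")
conn.executemany("INSERT INTO conditions VALUES (?, ?, ?)",
                 [(1, "hv", 1500.0), (2, "hv", 1498.5)])

# A parameterized query works identically whatever backend sits below.
rows = conn.execute(
    "SELECT value FROM conditions WHERE tag = ? ORDER BY run",
    ("hv",)).fetchall()
```

Keeping SQL parameterized and backend-neutral is what makes it possible to develop against a local file and deploy against a production server, the workflow the abstract describes.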
-
Dr Giovanni Polese (CERN)24/05/2012, 13:30The CMS detector control system (DCS) is responsible for controlling and monitoring the detector status and for the operation of all CMS sub detectors and infrastructure. This is required to ensure safe and efficient data taking, so that high quality physics data can be recorded. The current system architecture is composed of more than 100 servers, in order to provide the required processing...Go to contribution page
-
Takeo Higuchi (KEK)24/05/2012, 13:30We present a performance study of a high-speed RocketIO receiver card implemented as a PCI-express device, intended for use in a future luminosity-frontier HEP experiment. To search for new physics beyond the Standard Model, the Belle II experiment will start in 2015 at KEK, Japan. In Belle II, the detector signals are digitized in or near the detector complex, and the digitized signals...Go to contribution page
-
Dr Giuseppe Avolio (University of California Irvine (US))24/05/2012, 13:30The ATLAS experiment is operated by a highly distributed computing system which constantly produces a large amount of status information, used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm, consisting of about 2000...Go to contribution page
-
Mr Ivan BELYAEV (ITEP/MOSCOW)24/05/2012, 13:30A hybrid C++/Python environment built from standard components is being heavily and successfully used in LHCb, both for off-line physics analysis and for the High Level Trigger. The approach is based on the LoKi toolkit and the Bender analysis framework. A small set of highly configurable C++ components allows the most frequent analysis tasks to be described, e.g. combining and...Go to contribution page
-
Jonathan Bouchet (Kent State University)24/05/2012, 13:30Due to their production at the early stages, heavy flavor particles are of interest to study the properties of the matter created in heavy ion collisions at RHIC. Previous measurements of $D$ and $B$ mesons at RHIC[1, 2] using semi-leptonic probes show a suppression similar to that of light quarks, which is in contradiction with theoretical models only including gluon radiative energy loss...Go to contribution page
-
Dr Douglas Smith (SLAC National Accelerator Lab.)24/05/2012, 13:30The BaBar high energy physics experiment acquired data from 1999 until 2008. Soon after the end of data taking, the effort to produce the final dataset started. This final dataset contains over 11x10^9 events, in 1.6x10^6 files, over a petabyte of storage. The Long Term Data Access (LTDA) project aims at the preservation of the BaBar data, analysis tools and documentation to ensure the...Go to contribution page
-
Mr Igor Mandrichenko (Fermilab)24/05/2012, 13:30Neutrino physics research is an important part of the FNAL scientific program in the post-Tevatron era. Neutrino experiments are taking advantage of the high beam intensity delivered by the FNAL accelerator complex. These experiments share a common beam infrastructure, and require detailed information about the operation of the beam to perform their measurements. We have designed and implemented a...Go to contribution page
-
Prof. Ryosuke ITOH (KEK)24/05/2012, 13:30Recent PC servers are equipped with multi-core CPUs, and it is desirable to utilize their full processing power for data analysis in large scale HEP experiments. A software framework ``basf2'' is being developed for use in the Belle II experiment, an upgraded B-factory experiment at KEK, with parallel event processing in its design. The framework accepts a set of plug-in...Go to contribution page
-
Dr Julius Hrivnac (Universite de Paris-Sud 11 (FR))24/05/2012, 13:30The possible implementation of parallel algorithms will be described. The functionality will be demonstrated using Swarm, a new experimental interactive parallel framework; the access from several parallel-friendly scripting languages will be shown; and benchmarks of the typical tasks used in High Energy Physics code will be provided. The talk will concentrate on using the "Fork and...Go to contribution page
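The "Fork and join" pattern the talk concentrates on can be sketched with a thread pool: fork one task per chunk of work, then join by collecting the results. This is illustrative only, with an invented histogram-style summing task; Swarm's actual API is not shown here.

```python
from concurrent.futures import ThreadPoolExecutor

def fork_and_join(task, inputs, workers=4):
    # "Fork": submit one task per input chunk to the pool.
    # "Join": gather the futures' results back in submission order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(task, x) for x in inputs]
        return [f.result() for f in futures]

# A typical HEP-style task: reduce chunks of values, then merge the
# partial results (the structure of parallel histogram filling).
chunks = [range(0, 1000), range(1000, 2000)]
partial_sums = fork_and_join(sum, chunks)
total = sum(partial_sums)
```

The pattern maps naturally onto HEP workloads because the per-chunk results (sums, histograms, candidate lists) combine associatively in the join step.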
-
Mr Philippe Canal (FERMILAB)24/05/2012, 13:30In the past year, the development of ROOT I/O has focused on improving the existing code and increasing the collaboration with the experiments' experts. Regular I/O workshops have been held to share and build upon the varied experiences and points of view. The resulting improvements in ROOT I/O span many dimensions including reduction and more control over the memory usage, drastic reduction...Go to contribution page
-
Dr John Apostolakis (CERN), Xin Dong (Northeastern University)24/05/2012, 13:30We report on the progress of the multi-core versions of Geant4, including multi-process and multi-threaded Geant4. The performance of the multi-threaded version of Geant4 has been measured, identifying an overhead of 20-30% compared with the sequential version. We explain the reasons, and the improvements introduced to reduce this overhead. In addition we have improved the design of a...Go to contribution page
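A per-thread overhead of 20-30% translates directly into a throughput ceiling. As a back-of-envelope estimate (our simplification, not a measurement from the talk: it assumes the overhead is a uniform per-event slowdown on every thread):

```python
def throughput_gain(n_threads, overhead=0.25):
    # If each thread processes events 'overhead' slower than the
    # sequential program, total throughput on n threads is
    # n / (1 + overhead) times the sequential rate.
    return n_threads / (1.0 + overhead)

# With a 25% overhead, 8 threads deliver the work of 6.4 sequential
# processes instead of 8 -- which is why reducing the overhead matters.
gain_8 = throughput_gain(8)
```

Shrinking the overhead term is exactly the improvement work the abstract describes; at zero overhead the gain returns to the ideal n.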
-
Irina Sourikova (Brookhaven National Laboratory)24/05/2012, 13:30During its 20 years of R&D, construction and operation, the Phenix experiment at RHIC has accumulated large amounts of proprietary collaboration data that is hosted on many servers around the world and is not open to commercial search engines for indexing and searching. The legacy search infrastructure did not scale well with the fast-growing Phenix document base and produced results...Go to contribution page
-
Danilo Dongiovanni (INFN), Doina Cristina Aiftimiei (Istituto Nazionale Fisica Nucleare (IT))24/05/2012, 13:30What is an EMI Release? What is its life-cycle? How is its quality assured through a continuous integration and large scale acceptance testing? These are the main questions that this article will answer, by presenting the EMI release management process with emphasis on the role played by the Testing Infrastructure in improving the quality of the middleware provided by the project. The...Go to contribution page
-
Simon William Fayer (Imperial College Sci., Tech. & Med. (GB)), Stuart Wakefield (Imperial College Sci., Tech. & Med. (GB))24/05/2012, 13:30The density of rack-mount computers is continually increasing, allowing for higher performance processing in smaller and smaller spaces. With the introduction of its new Bulldozer micro-architecture, AMD have made it feasible to run up to 128 cores within a 2U rack-mount space. CPUs based on Bulldozer contain a series of modules, each module containing two processing cores which share some...Go to contribution page
-
Igor Kulakov (GSI)24/05/2012, 13:30The search for particle trajectories is the basis of on-line event reconstruction in the heavy-ion CBM experiment (FAIR/GSI, Darmstadt, Germany). The experimental requirements are very high, namely: up to 10^7 collisions per second, up to 1000 charged particles produced in a central collision, a non-homogeneous magnetic field, and about 85% of the additional background combinatorial measurements in...Go to contribution page
-
Dr Thomas Mc Cauley (Fermi National Accelerator Lab. (US))24/05/2012, 13:30iSpy is a general-purpose event data and detector visualization program that was developed as an event display for the CMS experiment at the LHC and has seen use by the general public and teachers and students in the context of education and outreach. Central to the iSpy design philosophy is ease of installation, use, and extensibility. The application itself uses the open-access packages...Go to contribution page
-
Riccardo Di Sipio (Universita e INFN (IT))24/05/2012, 13:30Jigsaw provides a collection of tools for high-energy physics analyses. In Jigsaw's paradigm input data, analyses and histograms are factorized so that they can be configured and put together at run-time to give more flexibility to the user. Analyses are focussed on physical objects such as particles and event shape quantities. These are distilled from the input data and brought to the...Go to contribution page
-
Norman Anthony Graf (SLAC National Accelerator Laboratory (US))24/05/2012, 13:30LCIO is a persistency framework and event data model which, as originally presented at CHEP 2003, was developed for the next linear collider physics and detector response simulation studies. Since then, the data model has been extended to also incorporate raw data formats as well as reconstructed object classes. LCIO defines a common abstract user interface (API) and is designed to be...Go to contribution page
-
Norman Anthony Graf (SLAC National Accelerator Laboratory (US))24/05/2012, 13:30slic: a Geant4 simulation program. As the complexity and resolution of particle detectors increase, the need for detailed simulation of the experimental setup also increases. Designing experiments requires efficient tools to simulate detector response and optimize the cost-benefit ratio for design options. We have developed efficient and flexible tools for detailed physics and...Go to contribution page
-
Oskar Wyszynski (Jagiellonian University (PL))24/05/2012, 13:30Shine is the new offline software framework of the NA61/SHINE experiment at the CERN SPS for data reconstruction, analysis and visualization as well as detector simulation. To allow for a smooth migration to the new framework, as well as to facilitate its validation, our transition strategy foresees to incorporate considerable parts of the old NA61/SHINE reconstruction chain which is based on...Go to contribution page
-
Alain Roy (University of Wisconsin-Madison)24/05/2012, 13:30We recently completed a significant transition in the Open Science Grid in which we moved our software distribution mechanism from the useful but niche system called Pacman to a community-standard native packaged system (RPM). Despite the challenges, this migration was both useful and necessary. In this paper we explore some of the lessons learned during this transition, lessons which we...Go to contribution page
-
Mr Son Hoang (University of Houston)24/05/2012, 13:30In the quest to develop a space radiation dosimeter based on the Timepix chip from the Medipix2 Collaboration, the fundamental issue is how dose and dose equivalent can be extracted from the raw Timepix outputs. To calculate the dose equivalent, each type of potentially incident radiation is given a Quality Factor, also referred to as Relative Biological Effectiveness (RBE). As proposed in the...Go to contribution page
-
Illya Shapoval (CERN, KIPT)24/05/2012, 13:30The Conditions Database of the LHCb experiment (CondDB) provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger, reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database...Go to contribution page
-
Valentin Kuznetsov (Cornell University)24/05/2012, 13:30The recent buzzword in the IT world is NoSQL. Major players, such as Facebook, Yahoo and Google, have widely adopted different "NoSQL" solutions for their needs. Horizontal scalability, a flexible data model and management of big data volumes are only a few advantages of NoSQL. In the CMS experiment we use several of them in a production environment. Here we present CMS projects based on NoSQL solutions,...Go to contribution page
-
Dr Daniel DeTone (University of Michigan)24/05/2012, 13:30Communication and collaboration using stored digital media has recently garnered increasing interest in many facets of business, government and education. This is primarily due to improvements in the quality of cameras and the speed of computers. Digital media serves as an effective alternative in the absence of physical interaction between multiple individuals. Video recordings that allow...Go to contribution page
-
Jakob Lettenbichler (HEPHY Vienna, Austria), Moritz Nadler, Rudi Frühwirth (Institut fuer Hochenergiephysik (HEPHY))24/05/2012, 13:30The Silicon Vertex Detector (SVD) of the Belle II experiment is a newly developed device with four measurement layers. Track finding in the SVD will be done both in conjunction with the Central Drift Chamber and in stand-alone mode. The reconstruction of very-low-momentum tracks in stand-alone mode is a big challenge, especially in view of the low redundancy and the large expected...Go to contribution page
-
Prof. Sudhir Malik (University of Nebraska-Lincoln)24/05/2012, 13:30Since 2009, the CMS experiment at LHC has provided an intensive training on the use of Physics Analysis Tools (PAT), a collection of common analysis tools designed to share expertise and maximise the productivity in the physics analysis. More than ten one-week courses preceded by prerequisite studies have been organized and the feedback from the participants has been carefully analysed. This...Go to contribution page
-
Diogo Raphael Da Silva Di Calafiori (Eidgenoessische Tech. Hochschule Zuerich (CH))24/05/2012, 13:30This paper presents the current architecture of the control and safety systems designed and implemented for the Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC). A complete evaluation of both systems performance during all CMS physics data taking periods is reported, with emphasis on how software and hardware solutions have...Go to contribution page
-
Anton Topurov (CERN)24/05/2012, 13:30As elsewhere in today’s computing environment, virtualisation is becoming prevalent in the database management area where HEP laboratories, and industry more generally, seek to deliver improved services whilst simultaneously increasing efficiency. We present here our solutions for the effective management of virtualised databases, building on over five years of experience dating back to...Go to contribution page
-
Mateusz Lechman (CERN)24/05/2012, 13:30ALICE (A Large Ion Collider Experiment) is one of the big LHC (Large Hadron Collider) experiments at CERN in Geneva, Switzerland. The experiment is composed of 18 sub-detectors controlled by an integrated Detector Control System (DCS) that is implemented using the commercial SCADA package PVSS. The DCS includes over 1,200 network devices, over 1,000,000 input channels and numerous custom...Go to contribution page
-
Michael Jackson (EPCC)24/05/2012, 13:30Within the Muon Ionization Cooling Experiment (MICE), the MICE Analysis User Software (MAUS) framework performs both online analysis of live data and detailed offline data analysis, simulation, and accelerator design. The MAUS Map-Reduce API parallelizes computing in the control room, ensures that code can be run both offline and online, and displays plots for users in an easily extendable...Go to contribution page
-
Witold Pokorski (CERN)24/05/2012, 13:30In this paper we present a new tool for tuning and validation of Monte Carlo (MC) generators, essential in order to have predictive power in the area of high-energy physics (HEP) experiments. With the first year of LHC data being now analyzed, the need for reliable MC generators is very clear. The tool, called MCPLOTS, is composed of a browsable repository of plots comparing HEP event...Go to contribution page
-
Norman Anthony Graf (SLAC National Accelerator Laboratory (US))24/05/2012, 13:30The ability to directly import CAD geometries into Geant4 is an often requested feature, despite the recognized limitations of the difficulty in accessing proprietary formats, the mismatch between level of detail in producing a part and simulating it, the often disparate approaches to parent-child relationships and the difficulty in maintaining or assigning material definitions to...Go to contribution page
-
Andrew Haas (SLAC National Accelerator Laboratory)24/05/2012, 13:30We are now in a regime where we observe substantial multiple proton-proton collisions within each filled LHC bunch-crossing and also multiple filled bunch-crossings within the sensitive time window of the ATLAS detector. This will increase with increased luminosity in the near future. Including these effects in Monte Carlo simulation poses significant computing challenges. We present a...Go to contribution page
-
Dr Andreas Wildauer (Universidad de Valencia (ES)), Federico Meloni (Università degli Studi e INFN Milano (IT)), Kirill Prokofiev (New York University (US)), Simone Pagan Griso (Lawrence Berkeley National Lab. (US))24/05/2012, 13:30Presented in this contribution are methods currently developed and used by the ATLAS collaboration to measure the performance of the primary vertex reconstruction algorithms. These methods quantify the amount of additional pile-up interactions and help to identify the hard scattering process (the so-called primary vertex) in the proton-proton collisions with high accuracy. The correct...Go to contribution page
-
Salvatore Di Guida (CERN)24/05/2012, 13:30With the LHC producing collisions at larger and larger luminosity, CMS must be able to take high-quality data and process them reliably: these tasks require not only correct conditions data, but also that such data be promptly available. The CMS condition infrastructure relies on many different pieces, such as hardware, networks, and services, which must be constantly monitored, and any faulty...Go to contribution page
-
Dr Andrea Valassi (CERN)24/05/2012, 13:30The CORAL software is widely used by the LHC experiments for storing and accessing data using relational database technologies. CORAL provides a C++ abstraction layer that supports data persistency for several backends and deployment models, including local access to SQLite files, direct client access to Oracle and MySQL servers, and read-only access to Oracle through the FroNTier/Squid and...Go to contribution page
-
Marian Babik (CERN)24/05/2012, 13:30Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via...Go to contribution page
-
Hege Austrheim Erdal (Bergen University College (NO))24/05/2012, 13:30ALICE (A Large Ion Collider Experiment) is a dedicated heavy-ion experiment at the Large Hadron Collider (LHC). The High Level Trigger (HLT) for ALICE is a powerful, sophisticated tool aimed at compressing the data volume and filtering events with desirable physics content. Several of the major detectors in ALICE are incorporated into the HLT to perform real-time event reconstruction, for...Go to contribution page
-
Mr Andres Abad Rodriguez (CERN)24/05/2012, 13:30One of the major goals of the EMI (European Middleware Initiative) project is the integration of several components of the pre-existing middleware (ARC, gLite, UNICORE and dCache) into a single consistent set of packages with uniform distributions and repositories. Those individual middleware projects have been developed in the last decade by tens of development teams and before EMI were all...Go to contribution page
-
Joao Antunes Pequenao (Lawrence Berkeley National Lab. (US))24/05/2012, 13:30New types of hardware, like smartphones and tablets, are becoming more available, affordable and popular in the market. Furthermore with the advent of Web2.0 frameworks, Web3D and Cloud computing, the way we interact, produce and exchange content is being dramatically transformed. How can we take advantage of these technologies to produce engaging applications which can be conveniently used...Go to contribution page
-
Dr David Lawrence (Jefferson Lab)24/05/2012, 13:30The JANA framework has been deployed and in use since 2007 for development of the GlueX experiment at Jefferson Lab. The multi-threaded reconstruction framework is routinely used on machines with up to 32 cores with excellent scaling. User feedback has also helped to develop JANA into a user-friendly environment for development of reconstruction code and event playback. The basic design of...Go to contribution page
-
Alja Mrak Tadel (Univ. of California San Diego (US)), Matevz Tadel (Univ. of California San Diego (US))24/05/2012, 13:30Fireworks, the event-display program of CMS, was extended with an advanced geometry visualization package. ROOT's TGeo geometry is used as internal representation, shared among several geometry views. Each view is represented by a GUI list-tree widget, implemented as a flat vector to allow for fast searching, selection, and filtering by material type, node name, and shape type. Display of...Go to contribution page
-
Timur Pocheptsov (Joint Inst. for Nuclear Research (RU))24/05/2012, 13:30ROOT's graphics works mainly via the TVirtualX class (this includes both GUI and non-GUI graphics). Currently, TVirtualX has two native implementations based on the X11 and Win32 low-level APIs. To make the X11 version work on OS X we have to install the X11 server (an additional application), but unfortunately, there is no X11 for iOS and so no graphics for mobile devices from Apple -...Go to contribution page
-
Andreas Salzburger (CERN), Giacinto Piacquadio (CERN)24/05/2012, 13:30The read-out from individual pixels on planar semi-conductor sensors are grouped into clusters to reconstruct the location where a charged particle passed through the sensor. The resolution given by individual pixel sizes is significantly improved by using the information from the charge sharing between pixels. Such analog cluster creation techniques have been used by the ATLAS...Go to contribution page
-
Mr Felix Valentin Böhmer (Technische Universität München)24/05/2012, 13:30GENFIT is a framework for track fitting in nuclear and particle physics. Its defining feature is the conceptual independence of the specific detector and field geometry, achieved by a modular design of the software. A track in GENFIT is a collection of detector hits and a collection of track representations. It can contain hits from different detector types (planar hits, space points,...Go to contribution page
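The detector-independence described above rests on a classic polymorphism pattern: hits of different dimensionality share one abstract interface, so a track can own a mixed collection. The following is an illustrative sketch of that pattern only, not the real GENFIT classes; all names (`AbsHit`, `PlanarHit`, `SpacePoint`, `TrackState`) are hypothetical stand-ins.

```cpp
#include <cmath>
#include <memory>
#include <vector>

struct TrackState { double x, y; };  // toy 2D predicted state at a detector

// Common interface: every hit type can report its residual w.r.t. a state.
class AbsHit {
public:
    virtual ~AbsHit() = default;
    virtual double residual(const TrackState& s) const = 0;
};

// A planar strip detector measures only one coordinate.
class PlanarHit : public AbsHit {
    double u_;
public:
    explicit PlanarHit(double u) : u_(u) {}
    double residual(const TrackState& s) const override {
        return std::abs(s.x - u_);
    }
};

// A pixel-like detector measures a full space point.
class SpacePoint : public AbsHit {
    double x_, y_;
public:
    SpacePoint(double x, double y) : x_(x), y_(y) {}
    double residual(const TrackState& s) const override {
        return std::hypot(s.x - x_, s.y - y_);
    }
};

// A "track" as a heterogeneous hit collection the fitter iterates over,
// never knowing the concrete detector types.
double sum_residuals(const std::vector<std::unique_ptr<AbsHit>>& hits,
                     const TrackState& s) {
    double sum = 0.0;
    for (const auto& h : hits) sum += h->residual(s);
    return sum;
}
```

A fitter written against `AbsHit` works unchanged when a new detector type is added, which is the essence of the framework's geometry independence.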
-
Irakli Chakaberia (Kansas State University)24/05/2012, 13:30The rate of performance improvements of the LHC at CERN has had strong influence on the characteristics of the monitoring tools developed for the experiments. We present some of the latest additions to the suite of Web Based Monitoring services for the CMS experiment, and explore the aspects that address the roughly 20-fold increase in peak instantaneous luminosity over the course of...Go to contribution page
-
Mr Laurent Garnier (LAL-IN2P3-CNRS)24/05/2012, 13:30New developments on visualization drivers in the Geant4 software toolkitGo to contribution page
-
Lorenzo Moneta (CERN)24/05/2012, 13:30ROOT, a data analysis framework, provides advanced numerical and statistical methods via the ROOT Math work package. Now that the LHC experiments have started to analyze their data and produce physics results, we have acquired experience in the way these numerical methods are used and the libraries have been consolidated taking into account also the received feedback. At the same time,...Go to contribution page
-
Andrew Norman (Fermilab)24/05/2012, 13:30The NOvA experiment at Fermi National Accelerator Lab features a free running, continuous readout system without dead time, which collects and buffers time-continuous data from over 350,000 readout channels. The raw data must be searched to correlate them with beam spill events from the NuMI beam facility. They are also analyzed in real-time to identify event topologies of interest. The...Go to contribution page
-
Felice Pantaleo (University of Pisa (IT))24/05/2012, 13:30Data analyses based on evaluation of likelihood functions are commonly used in the high-energy physics community for fitting statistical models to data samples. The likelihood functions require the evaluation of several probability density functions on the data. This is accomplished using loops. For the evaluation operations, the standard accuracy is double precision floating point. The...Go to contribution page
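The kind of loop such fits spend their time in is easy to sketch: the negative log-likelihood of a Gaussian model requires one PDF evaluation per data point per minimizer iteration, in double precision. This is a minimal illustration of that structure, not the contribution's actual code; the function names are hypothetical.

```cpp
#include <cmath>
#include <vector>

// Gaussian probability density, evaluated in double precision.
double gauss_pdf(double x, double mu, double sigma) {
    const double two_pi = 2.0 * std::acos(-1.0);
    const double z = (x - mu) / sigma;
    return std::exp(-0.5 * z * z) / (sigma * std::sqrt(two_pi));
}

// The hot loop: one PDF evaluation and one log per data point. Minimizers
// call this for every trial (mu, sigma), which is why vectorizing or
// parallelizing this loop pays off so directly.
double negative_log_likelihood(const std::vector<double>& data,
                               double mu, double sigma) {
    double nll = 0.0;
    for (double x : data)
        nll -= std::log(gauss_pdf(x, mu, sigma));
    return nll;
}
```

Because each point's contribution is independent, the loop is an ideal target for the vector units and accelerators the abstract goes on to discuss.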
-
Miao HE (Institute of High Energy Physics, Chinese Academy of Sciences)24/05/2012, 13:30Neutrino flavor oscillation is characterized by three mixing angles. The Daya Bay reactor antineutrino experiment is designed to determine the last unknown mixing angle $\theta_{13}$. The experiment is located in southern China, near the Daya Bay nuclear power plant. Eight identical liquid scintillator detectors are being installed in three experimental halls, to detect antineutrinos released...Go to contribution page
-
Dmitry Arkhipkin (Brookhaven National Laboratory)24/05/2012, 13:30The STAR Experiment further exploits scalable message-oriented model principles to achieve a high level of control over online data streams. In this report we present an AMQP-powered Message Interface and Reliable Architecture framework (MIRA), which allows STAR to orchestrate the activities of Metadata Collection, Monitoring, Online QA and several Run-Time / Data Acquisition system...Go to contribution page
-
Artur Szostak (University of Bergen (NO))24/05/2012, 13:30The ALICE High Level Trigger (HLT) is a dedicated real-time system for on-line event reconstruction and triggering. Its main goal is to reduce the large volume of raw data that is read out from the detector systems, up to 25 GB/s, by an order of magnitude to fit within the available data acquisition bandwidth. This is accomplished by a combination of data compression and triggering. When a...Go to contribution page
-
Dave Dykstra (Fermi National Accelerator Lab. (US))24/05/2012, 13:30The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each of the central servers at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy squid servers,...Go to contribution page
-
Markus Frank (CERN)24/05/2012, 13:30Today's computing elements for software-based high level trigger (HLT) processing are based on nodes with multiple cores. Using process-based parallelisation to filter particle collisions from the LHCb experiment on such nodes leads to expensive consumption of read-only memory and hence a significant cost increase. In the following an approach is presented to fork multiple identical processes...Go to contribution page
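The memory saving from forking comes from copy-on-write: after `fork()`, parent and child share the physical pages of data initialized before the fork, so many identical filter processes pay for the large read-only state only once. A minimal POSIX sketch of the mechanism (assuming Linux; not LHCb's actual HLT code, and `run_forked_worker` is a hypothetical name):

```cpp
#include <sys/wait.h>
#include <unistd.h>
#include <vector>

// Parent builds large read-only data once, then forks a worker. The child
// reads the data through copy-on-write pages -- no physical duplication
// occurs as long as nobody writes to them.
int run_forked_worker(const std::vector<int>& shared_readonly) {
    pid_t pid = fork();
    if (pid == 0) {
        // child: consume the shared data without copying it
        long sum = 0;
        for (int v : shared_readonly) sum += v;
        _exit(static_cast<int>(sum % 251));  // report a small checksum
    }
    // parent: collect the child's exit status
    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

In a real HLT node the "shared read-only data" would be the geometry, conditions and configuration loaded during bootstrapping, and one worker would be forked per core.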
-
Sylvain Chapeland (CERN)24/05/2012, 13:30ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The DAQ (Data Acquisition System) facilities handle the data flow from the detectors electronics up to the mass storage. The DAQ system is based on a large farm of commodity hardware consisting of more than 600...Go to contribution page
-
Mr Kyle Gross (Open Science Grid / Indiana University)24/05/2012, 13:30Large distributed computing collaborations, such as the WLCG, face many issues when it comes to providing a working grid environment for their users. One of these is exchanging tickets between various ticketing systems in use by grid collaborations. Ticket systems such as Footprints, RT, Remedy, and ServiceNow all have different schema that must be addressed in order to provide a reliable...Go to contribution page
-
Mr Igor Kulakov (Goethe Universitaet Frankfurt)24/05/2012, 13:30The CBM experiment is a future fixed-target experiment at FAIR/GSI (Darmstadt, Germany). It is being designed to study heavy-ion collisions at extremely high interaction rates. The main tracking detectors are the Micro-Vertex Detector (MVD) and the Silicon Tracking System (STS). Track reconstruction in these detectors is a very complicated task because of several factors. Up to 1000 tracks per...Go to contribution page
-
Mr Igor Kulakov (Goethe Universitaet Frankfurt)24/05/2012, 13:30Modern heavy-ion experiments operate with very high data rates and track multiplicities. Because of time constraints the speed of the reconstruction algorithms is crucial both for the online and offline data analysis. Parallel programming is considered nowadays as one of the most efficient ways to increase the speed of event reconstruction. Reconstruction of short-lived particles is one of...Go to contribution page
-
Felice Pantaleo (CERN), Julien Leduc24/05/2012, 13:30Data analyses based on evaluation of likelihood functions are commonly used in the high energy physics community for fitting statistical models to data samples. These procedures require several evaluations of these functions and they can be very time consuming. Therefore, it becomes particularly important to have fast evaluations. This paper describes a parallel implementation that allows to...Go to contribution page
-
Dr Alan Dion (Brookhaven National Laboratory)24/05/2012, 13:30An algorithm is presented which reconstructs helical tracks in a solenoidal magnetic field using a generalized Hough Transform. While the problem of reconstructing helical tracks from the primary vertex can be converted to the problem of reconstructing lines (with 3 parameters), reconstructing secondary tracks requires a full helix to be used (with 5 parameters). The Hough transform memory...Go to contribution page
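The Hough-transform idea is easiest to see in its simplest (straight-line, 2-parameter) form; the helix case described above works the same way but in 5 parameters. Each point votes for every (theta, r) line it could lie on, and collinear points pile their votes into one accumulator cell. The following is a toy sketch under that assumption, not the contribution's algorithm:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Line Hough transform in normal form r = x*cos(theta) + y*sin(theta).
// Returns the (theta bin, r bin) of the accumulator peak.
std::pair<int, int> hough_line_peak(
        const std::vector<std::pair<double, double>>& pts,
        int n_theta = 180, int n_r = 200, double r_max = 10.0) {
    std::vector<int> acc(n_theta * n_r, 0);
    const double pi = std::acos(-1.0);
    for (const auto& p : pts) {
        for (int t = 0; t < n_theta; ++t) {
            const double theta = pi * t / n_theta;
            const double r =
                p.first * std::cos(theta) + p.second * std::sin(theta);
            const int rb = static_cast<int>((r + r_max) / (2.0 * r_max) * n_r);
            if (rb >= 0 && rb < n_r) ++acc[t * n_r + rb];  // one vote
        }
    }
    int best = 0;
    for (int i = 1; i < static_cast<int>(acc.size()); ++i)
        if (acc[i] > acc[best]) best = i;
    return {best / n_r, best % n_r};
}
```

The memory problem the abstract refers to is visible here: the accumulator already needs `n_theta * n_r` cells for 2 parameters, and grows exponentially with parameter count, which is why a 5-parameter helix transform requires the more careful treatment the talk presents.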
-
Rolf Seuster (Max-Planck-Institut fuer Physik (Werner-Heisenberg-Institut))24/05/2012, 13:30In 2011 the LHC provided excellent data; the integrated luminosity of about 5 fb-1 was more than what was expected. The price for this huge data set is the in- and out-of-time pileup: additional soft events overlaid on top of the interesting event. The reconstruction software is very sensitive to these additional particles in the event, as the reconstruction time increases due to increased...Go to contribution page
-
Andrea Bocci (CERN)24/05/2012, 13:30The CMS experiment has been designed with a 2-level trigger system: the Level 1 Trigger, implemented using FPGA and custom ASIC technology, and the High Level Trigger (HLT), implemented running a streamlined version of the CMS offline reconstruction software on a cluster of commercial rack-mounted computers, comprising thousands of CPUs. The design of a software trigger system requires a...Go to contribution page
-
Johannes Ebke (Ludwig-Maximilians-Univ. Muenchen (DE))24/05/2012, 13:30Historically, HEP event information for final analysis is stored in Ntuples or ROOT Trees and processed using ROOT I/O, usually resulting in a set of histograms or tables. Here we present an alternative data processing framework, leveraging the Protocol Buffer open-source library, developed and used by Google Inc. for loosely coupled interprocess communication and serialization. We...Go to contribution page
-
Dr Jason Webb (Brookhaven National Lab)24/05/2012, 13:30Given the abundance of geometry models available within the HENP community, long-running experiments face a daunting challenge: how to migrate legacy GEANT3 based detector geometries to new technologies, such as the ROOT/TGeo framework [1]. One approach, entertained by the community for some time, is to introduce a level of abstraction: implementing the geometry in a higher order...Go to contribution page
-
Gabriela Hoff (CERN)24/05/2012, 13:30Physics models and algorithms operating in the condensed transport scheme - multiple scattering and energy loss of charged particles - play a critical role in the simulation of energy deposition in detectors. Geant4 algorithms pertinent to this domain involve a number of parameters and physics modeling approaches, which have evolved in the course of the years. Results in the literature...Go to contribution page
-
Prof. Nobu Katayama (HIGH ENERGY ACCELERATOR RESEARCH ORGANIZATION)24/05/2012, 13:30Dark energy is one of the most intriguing questions in the field of particle physics and cosmology. We expect the first light of the Hyper Suprime-Cam (HSC) at the Subaru Telescope atop Mauna Kea on the island of Hawaii in 2012. HSC will measure the shapes of billions of galaxies precisely to construct the 3D map of the dark matter in the universe, characterizing the properties of dark energy. We...Go to contribution page
-
Dr Fabio Cossutti (Universita e INFN (IT))24/05/2012, 13:30The production of simulated samples for physics analysis at the LHC represents a considerable organizational challenge, because it requires the management of several thousand different workflows. The submission of a workflow to the grid-based computing infrastructure is just the arrival point of a long decision process: definition of the general characteristics of a given set of coherent samples,...Go to contribution page
-
Axel Naumann (CERN)24/05/2012, 13:30C++11 is a new standard for the C++ language that includes several additions to the core language and that extends the C++ standard library. New features, such as move semantics, are expected to bring performance benefits and as soon as these benefits have been demonstrated, it will undoubtedly become widely adopted in the development of HEP code. However it will be shown that this may well be...Go to contribution page
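The performance benefit of move semantics mentioned above is concrete: moving a container transfers ownership of its heap buffer in O(1) instead of deep-copying every element. A minimal illustration (this is a generic C++11 example, not code from the contribution):

```cpp
#include <string>
#include <utility>
#include <vector>

// Returns true when move construction transferred the heap buffer intact.
// The C++11 standard guarantees that pointers to elements remain valid and
// refer to the elements of the move-constructed container.
bool move_transfers_buffer() {
    std::vector<std::string> v1{"track", "cluster", "vertex"};
    const std::string* buf = v1.data();          // address of heap storage
    std::vector<std::string> v2 = std::move(v1); // O(1): pointers change hands
    return v2.data() == buf && v2.size() == 3;   // same storage, no copies
}
```

For HEP code that shuffles large event collections between containers, replacing copies with moves like this is one of the easiest C++11 wins; the talk's caveat is that realizing it across millions of lines of legacy code is less automatic than it looks.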
-
Mr Pierre Vande Vyvre (CERN)24/05/2012, 13:30In November 2009, after 15 years of design and installation, the ALICE experiment started to detect and record the first collisions produced by the LHC. It has been collecting hundreds of millions of events ever since, with both proton-proton and heavy-ion collisions. The future scientific programme of ALICE has been refined following the first year of data taking. The physics targeted beyond...Go to contribution page
-
Graeme Andrew Stewart (CERN)24/05/2012, 13:30The ATLAS experiment at the LHC collider recorded more than 3 fb-1 of pp collision data at a center-of-mass energy of 7 TeV by September 2011. The recorded data are promptly reconstructed in two steps at a large computing farm at CERN to provide fast access to high quality data for physics analysis. In the first step a subset of the collision data corresponding to 10 Hz is...Go to contribution page
-
Dr Itay Yavin (New York University)24/05/2012, 13:30Searches for new physics by experimental collaborations represent a significant investment in time and resources. Often these searches are sensitive to a broader class of models than they were originally designed to test. It is possible to extend the impact of existing searches through a technique we call 'recasting'. We present RECAST, a framework designed to facilitate the usage of this technique.Go to contribution page
-
Jose Manuel Quesada Molina (Universidad de Sevilla (ES))24/05/2012, 13:30The final stages of a number of generators of inelastic hadron/ion interactions with nuclei in Geant4 are described by native pre-equilibrium and de-excitation models. The pre-compound model is responsible for pre-equilibrium emission of protons, neutrons and light ions. The de-excitation model provides sampling of evaporation of neutrons, protons and light fragments up to magnesium. Fermi...Go to contribution page
-
Fernando Lucas Rodriguez (CERN)24/05/2012, 13:30The Detector Control System of the TOTEM experiment at the LHC is built with the industrial product WinCC OA (PVSS). The TOTEM system is generated automatically through scripts using as input the detector PBS structure and pinout connectivity, archiving and alarm meta-information, and some other heuristics based on the naming conventions. When those initial parameters and code are modified to...Go to contribution page
-
Danilo Piparo (CERN)24/05/2012, 13:30The estimation of the compatibility of large amounts of histogram pairs is a recurrent problem in High Energy Physics. The issue is common to several different areas, from software quality monitoring to data certification, preservation and analysis. Given two sets of histograms, it is very important to be able to scrutinize the outcome of several goodness of fit tests, obtain a clear answer...Go to contribution page
-
Douglas Michael Schaefer (University of Pennsylvania (US))24/05/2012, 13:30Since starting in 2010, the Large Hadron Collider (LHC) has produced collisions at an ever increasing rate. The ATLAS experiment successfully records the collision data with high efficiency and excellent data quality. Events are selected using a three-level trigger system, where each level makes a more refined selection. The level-1 trigger (L1) consists of a custom-designed hardware trigger...Go to contribution page
-
Wouter Verkerke (NIKHEF (NL))24/05/2012, 13:30RooFit is a library of C++ classes that facilitate data modeling in the ROOT environment. Mathematical concepts such as variables, (probability density) functions and integrals are represented as C++ objects. The package provides a flexible framework for building complex fit models through classes that mimic math operators. For all constructed models RooFit provides a concise yet powerful...Go to contribution page
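The compositional style described — math concepts as objects, with operator-like classes building new models from existing ones — can be mimicked in a few lines. This toy is only an illustration of the style; RooFit's real classes (RooRealVar, RooAddPdf, ...) are far richer, and the names `Model` and `add` here are hypothetical.

```cpp
#include <functional>

// A "model" is an object wrapping an evaluable function of one observable.
struct Model {
    std::function<double(double)> eval;
};

// An operator-like factory, in the spirit of a coefficient-weighted sum PDF:
// it composes two existing model objects into a new one without evaluating
// anything until eval() is called.
Model add(const Model& a, double fa, const Model& b, double fb) {
    auto ea = a.eval, eb = b.eval;
    return {[ea, eb, fa, fb](double x) { return fa * ea(x) + fb * eb(x); }};
}
```

Because composition returns another `Model`, expressions nest arbitrarily — the same property that lets RooFit users assemble complex fit models from small reusable pieces.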
-
Sven Kreiss (New York University (US))24/05/2012, 13:30Software Engineering, Data Stores and Databases (track 5)ParallelRooStats is a project providing advanced statistical tools required for the analysis of LHC data, with emphasis on discoveries, confidence intervals, and combined measurements in both the Bayesian and Frequentist approaches. The tools are built on top of the RooFit data modeling language and core ROOT mathematics libraries and persistence technology. These tools have been developed in...Go to contribution page
-
Gordon Watts (University of Washington (US))24/05/2012, 13:30ROOT.NET provides an interface between Microsoft’s Common Language Runtime (CLR) and .NET technology and the ubiquitous particle physics analysis tool, ROOT. ROOT.NET automatically generates a series of efficient wrappers around the ROOT API. Unlike pyROOT, these wrappers are statically typed and so are highly efficient as compared to the Python wrappers. The connection to .NET means that one...Go to contribution page
-
Axel Naumann (CERN)24/05/2012, 13:30We will present new approaches to implementing quality control procedures in the development of the ROOT data processing framework. A multi-platform, cloud-based infrastructure is used for supporting the incremental build and test procedures employed in the ROOT software development process. Tests run continuously and a custom generic tool has been adopted for CPU and heap regression...Go to contribution page
-
Zhechka Toteva (CERN)24/05/2012, 13:30The Information Technology (IT) and the General Services (GS) departments at CERN have decided to combine their extensive experience in support for IT and non-IT services towards a common goal – to bring the services closer to the end user based on ITIL best practice. The collaborative efforts have so far produced definitions for the incident and the request fulfillment processes which are...Go to contribution page
-
Sebouh Paul (Jefferson Lab)24/05/2012, 13:30With the advent of the 12 GeV upgrade at CEBAF, it becomes necessary to create new detectors to accommodate the more powerful beam-line. It follows that new software is needed for tracking, simulation and event display. In the case of CLAS12, the new detector to be installed in Hall B, development has proceeded on new analysis frameworks and runtime environments, such as the Clara (CLAS12...Go to contribution page
-
Mizuki Karasawa (BNL)24/05/2012, 13:30At BNL, we are planning to establish a federation with different organizations by using an SSO technology, Shibboleth. It provides the underlying mechanism for leveraging institutional authentication and the exchange of user attributes for authorization. This framework will allow us to collaborate not only with organizations inside BNL but also with institutions/organizations outside BNL to be able...Go to contribution page
-
Martin Barisits (Vienna University of Technology (AT))24/05/2012, 13:30The ATLAS Distributed Data Management system stores more than 75PB of physics data across 100 sites globally. Over 8 million files are transferred daily with strongly varying usage patterns. For performance and scalability reasons it is imperative to adapt and improve the data management system continuously. Therefore future system modifications in hardware, software as well as policy,...Go to contribution page
-
Peter Wegner (Deutsches Elektronen–Synchrotron, DESY, Platanenallee 6, D-15738 Zeuthen, Germany)24/05/2012, 13:30The CTA (Cherenkov Telescope Array) project is an initiative to build the next generation ground-based very high energy (VHE) gamma-ray instrument. Compared to current imaging atmospheric Cherenkov telescope experiments CTA will extend the energy range and improve the angular resolution while increasing the sensitivity by a factor of 10. With these capabilities it is expected that CTA will...Go to contribution page
-
Raul Murillo Garcia (University of California Irvine (US))24/05/2012, 13:30The ATLAS Cathode Strip Chamber system consists of two end-caps with 16 chambers each. The CSC Readout Drivers (RODs) are purpose-built boards encapsulating 13 DSPs and around 40 FPGAs. The principal responsibility of each ROD is for the extraction of data from two chambers at a maximum trigger rate of 75 kHz. In addition, each ROD is in charge of the setup, control and monitoring of the...Go to contribution page
-
Robert Kutschke (Fermilab)24/05/2012, 13:30The Mu2e experiment at Fermilab is proceeding through its R&D and approval processes. Two critical elements of R&D towards a design that will achieve the physics goals are an end-to-end simulation package and reconstruction code that has reached the stage of an advanced prototype. These codes live within the environment of the experiment's infrastructure software. Mu2e uses art as the...Go to contribution page
-
Mark Hodgkinson (University of Sheffield), Rolf Seuster (Max-Planck-Institut fuer Physik (Werner-Heisenberg-Institut) (D)24/05/2012, 13:30The ATLAS collaboration operates an extensive set of protocols to validate the quality of the offline software in a timely manner. This is essential in order to process the large amounts of data being collected by the ATLAS detector in 2011 without complications on the offline software side. We will discuss a number of different strategies used to validate the ATLAS offline software; running...Go to contribution page
-
Tobias Stockmanns (Forschungszentrum Jülich GmbH)24/05/2012, 13:30Modern experiments in hadron and particle physics are searching for more and more rare decays which have to be extracted out of a huge background of particles. To achieve this goal a very high precision of the experiments is required which has to be reached also from the simulation software. Therefore a very detailed description of the hardware of the experiment is needed including also tiny...Go to contribution page
-
Luca Tomassetti (University of Ferrara and INFN)24/05/2012, 13:30The SuperB asymmetric energy e+e- collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab^-1 and a luminosity target of 10^36 cm^-2 s^-1. Since 2009 the SuperB Computing group is...Go to contribution page
-
Dr Jack Cranshaw (Argonne National Laboratory (US))24/05/2012, 13:30TAGs are event-level metadata allowing a quick search for interesting events for further analysis, based on selection criteria defined by the user. They are stored in a file-based format as well as in relational databases. The overall TAG system architecture encompasses a range of interconnected services that provide functionality for the required use cases such as event level selection,...Go to contribution page
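The event-level selection use case can be sketched as a filter over metadata records: the TAGs are queried first, and only the matching events are then fetched from the full data. A toy illustration (the field names below are invented for the example, not the actual ATLAS TAG schema):

```python
# Each TAG record holds event-level metadata keyed by run/event number.
tags = [
    {"run": 1, "event": 1, "n_muons": 2, "met": 55.0},
    {"run": 1, "event": 2, "n_muons": 0, "met": 12.0},
    {"run": 2, "event": 7, "n_muons": 1, "met": 80.0},
]

def select(tags, predicate):
    """Return (run, event) pairs passing a user-defined selection,
    without touching the full event data."""
    return [(t["run"], t["event"]) for t in tags if predicate(t)]

# Hypothetical selection: at least one muon and missing ET above 50.
picked = select(tags, lambda t: t["n_muons"] >= 1 and t["met"] > 50.0)
```

The payoff is that the (small) TAG store answers the query quickly, and only the short list of selected events needs to be read back from the (large) event store.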
-
Federico Ronchetti (Istituto Nazionale Fisica Nucleare (IT))24/05/2012, 13:30The ALICE detector yields a huge sample of data, via millions of channels from different sub-detectors. On-line data processing must be applied to select and reduce the data volume in order to increase the significant information in the stored data. ALICE applies a multi-level hardware trigger scheme where fast detectors are used to feed a three-level deep chain, L0-L2. The High-Level...Go to contribution page
-
Sylvain Chapeland (CERN)24/05/2012, 13:30ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The 18 ALICE sub-detectors are regularly calibrated in order to achieve the most accurate physics measurements. Some of these procedures are done online in the DAQ (Data Acquisition System) so that calibration...Go to contribution page
-
Linghui Wu24/05/2012, 13:30BESIII/BEPCII is a major upgrade of the BESII experiment at the Beijing Electron-Positron Collider (BEPC) for studies of hadron spectroscopy and tau-charm physics. The BESIII detector adopts a small-cell helium-based drift chamber (MDC) as the central tracking detector. The momentum resolution deteriorated due to misalignment during data taking. In order to improve the momentum resolution,...Go to contribution page
-
Gancho Dimitrov (Brookhaven National Laboratory (US))24/05/2012, 13:30The ATLAS experiment at LHC relies on databases for detector online data-taking, storage and retrieval of configurations, calibrations and alignments, post data-taking analysis, file management over the grid, job submission and management, data replications to other computing centers, etc. The Oracle Relational Database Management System has been addressing the ATLAS database requirements...Go to contribution page
-
Will Buttinger (University of Cambridge (GB))24/05/2012, 13:30The ATLAS Level-1 Trigger is the first stage of event selection for the ATLAS experiment at the LHC. In order to identify the interesting collision events to be passed on to the next selection stage within a latency of less than 2.5 us, it is based on custom-built electronics. Signals from the Calorimeter and Muon Trigger System are combined in the Central Trigger Processor which processes...Go to contribution page
-
Alexander Oh (University of Manchester (GB))24/05/2012, 13:30The ATLAS experiment at CERN's Large Hadron Collider (LHC) has taken data with colliding beams at instantaneous luminosities of 2*10^33 cm^-2 s^-1. The LHC aims to deliver an integrated luminosity of 5 fb^-1 in the 2011 run period at luminosities of up to 5*10^33 cm^-2 s^-1, which requires dedicated strategies to safeguard the highest physics output while effectively reducing the event rate. The...Go to contribution page
-
Amir Farbin (University of Texas at Arlington (US))24/05/2012, 13:30The ATLAS experiment has collected vast amounts of data with the arrival of the inverse-femtobarn era at the LHC. ATLAS has developed an intricate analysis model with several types of derived datasets, including their grid storage strategies, in order to make data from O(10^9) recorded events readily available to physicists for analysis. Several use cases have been considered in the ATLAS...Go to contribution page
-
Andrei Cristian Spataru (CERN)24/05/2012, 13:30The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ), and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of the order of a few hundred...Go to contribution page
-
Ruben Domingo Gaspar Aparicio (CERN)24/05/2012, 13:30At CERN, and probably elsewhere, centralised Oracle-database services deliver high levels of service performance and reliability but are sometimes perceived as overly rigid and inflexible for initial application development. As a consequence a number of key database applications are running on user-managed MySQL database services. This is all very well when things are going well, but the...Go to contribution page
-
Kerstin Lantzsch (Bergische Universitaet Wuppertal (DE))24/05/2012, 13:30The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC), constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are supervised by the Detector Control System (DCS). The DCS enables equipment supervision of all ATLAS sub-detectors by...Go to contribution page
-
Mr Arthur Franke (Columbia University)24/05/2012, 13:30The Double Chooz reactor antineutrino experiment employs a network-distributed DAQ divided among a number of computing nodes on a Local Area Network. The Double Chooz Online Monitor Framework has been developed to provide short-timescale, real-time monitoring of multiple distributed DAQ subsystems and serve diagnostic information to multiple clients. Monitor information can be accessed...Go to contribution page
-
Matthew Toups (Columbia University)24/05/2012, 13:30The Double Chooz experiment searches for reactor neutrino oscillations at the Chooz nuclear power plant. A client/server model is used to coordinate actions among several online systems over TCP/IP sockets. A central run control server synchronizes data-taking among two independent data acquisition (DAQ) systems via a common communication protocol and state machine definition. Calibration...Go to contribution page
-
Jason Zurawski (Internet2)24/05/2012, 13:30Computer Facilities, Production Grids and Networking (track 4)ParallelScientific innovation continues to increase requirements for the computing and networking infrastructures of the world. Collaborative partners, instrumentation, storage, and processing facilities are often geographically and topologically separated, as is the case with LHC virtual organizations. These separations challenge the technology used to interconnect available resources,...Go to contribution page
-
Luca Magnoni (CERN)24/05/2012, 13:30A large experiment like ATLAS at LHC (CERN), with over three thousand members and a shift crew of 15 people running the experiment 24/7, needs an easy and reliable tool to gather all the information concerning the experiment development, installation, deployment and exploitation over its lifetime. With the increasing number of users and the accumulation of stored information since the...Go to contribution page
-
Andrea Negri (Universita e INFN (IT))24/05/2012, 13:30Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity and the accelerator backgrounds and luminosity increase we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for Atlas, whose computing power is such that a couple of...Go to contribution page
-
Dr Ivana Hrivnacova (IPN Orsay, CNRS/IN2P3)24/05/2012, 13:30The Virtual Monte Carlo (VMC) provides an abstract interface to the Monte Carlo transport codes GEANT3, Geant4 and FLUKA. A user's VMC-based application, independent of the specific Monte Carlo codes, can then be run with all three simulation programs. The VMC was developed by the ALICE Offline Project and has since drawn attention from other experimental frameworks. Since its...Go to contribution page
-
Michael Steder (DESY)24/05/2012, 13:30The H1 data preservation project was started in 2009 as part of the global data preservation in high-energy physics (DPHEP) initiative. In order to retain the full potential for future improvements, the H1 collaboration aims for level 4 of the DPHEP recommendations, requiring the full simulation and reconstruction chain to be available for analysis. A major goal of the H1 project is...Go to contribution page
-
Tony Cass (CERN)24/05/2012, 13:30Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelThe HEPiX Virtualisation Working Group has sponsored the development of policies and technologies that permit Grid sites to safely instantiate remotely generated virtual machine images confident in the knowledge that they will be able to meet their obligations, most notably in terms of guaranteeing the accountability and traceability of any Grid Job activity at their site. We will present...Go to contribution page
-
Eduard Avetisyan (DESY)24/05/2012, 13:30We discuss the steps and efforts required to secure the continued analysis and data access for the HERMES experiment after the end of the active collaboration period. The model for such an activity has been developed within the framework of the DPHEP initiative in a close collaboration of HERA experiments and the DESY IT. For HERMES the preservation scheme foresees a possibility of full data...Go to contribution page
-
Mr Victor Diez Gonzalez (CERN fellow)24/05/2012, 13:30The LCG Applications Area relies on regular integration testing of the provided software stack. In the past, regular builds have been provided by a system which has been constantly changed and extended, adding new features such as server-client communication, a long-term history of results and a summary web interface using present-day web technologies. However, the ad-hoc style of software...Go to contribution page
-
Dr Antony Wilson (STFC - Science & Technology Facilities Council (GB))24/05/2012, 13:30The configuration database (CDB) is the memory of the Muon Ionisation Cooling Experiment (MICE). Its principal aim is to store temporal data associated with the running conditions of the experiment. These data can change on a per run basis (e.g. magnet currents, high voltages), or on long time scales (e.g. cabling, calibration, and geometry). These data are used throughout the life cycle of...Go to contribution page
-
The Monitoring and Calibration Web Systems for the ATLAS Tile Calorimeter Data Quality AnalysisAndressa Sivolella Gomes (Univ. Federal do Rio de Janeiro (BR))24/05/2012, 13:30The Tile Calorimeter (TileCal), one of the ATLAS detectors, has four partitions, where each one contains 64 modules and each module has up to 48 photomultipliers (PMTs), totalling more than 10,000 electronic channels. The Monitoring and Calibration Web System (MCWS) supports data quality analyses at the channel level. This application was developed to assess the detector status and verify its...Go to contribution page
-
Andrzej Dworak (CERN)24/05/2012, 13:30The Controls Middleware (CMW) project was launched over ten years ago. Its main goal was to unify middleware solutions used to operate CERN accelerator complex. A key part of the project, the equipment access library RDA, was based on CORBA, an unquestionable middleware standard at the time. RDA became an operational and critical part of the infrastructure, yet the demanding run-time...Go to contribution page
-
Andrew Norman (Fermilab)24/05/2012, 13:30The NOvA experiment at Fermi National Accelerator Lab uses a sophisticated timing distribution system to perform synchronization of more than 12,000 front-end readout and data acquisition systems at both the near detector and accelerator complex located at Fermilab and at the far detector located 810 km away at Ash River, MN. This global synchronization is performed to an absolute clock time...Go to contribution page
-
Roland Sipos (Hungarian Academy of Sciences (HU))24/05/2012, 13:30NA61/SHINE (SHINE = SPS Heavy Ion and Neutrino Experiment) is an experiment at the CERN SPS using the upgraded NA49 hadron spectrometer. Among its physics goals are precise hadron production measurements for improving calculations of the neutrino beam flux in the T2K neutrino oscillation experiment as well as for more reliable simulations of cosmic-ray air showers. Moreover, p+p, p+Pb and...Go to contribution page
-
Dr John Marshall (University of Cambridge (GB))24/05/2012, 13:30Pandora is a robust and efficient framework for developing and running pattern-recognition algorithms. It was designed to perform particle flow calorimetry, which requires many complex pattern-recognition techniques to reconstruct the paths of individual particles through fine granularity detectors. The Pandora C++ software development kit (SDK) consists of a single library and a number of...Go to contribution page
-
Mr Igor Soloviev (University of California Irvine (US))24/05/2012, 13:30To configure a data-taking run, the ATLAS systems and detectors store more than 150 MB of data acquisition related configuration information in OKS[1] XML files. The total number of files exceeds 1300 and they are updated by many system experts. In the past, such updates occasionally caused problems when configuring a run, due to XML syntax errors or...Go to contribution page
-
Katarzyna Wichmann (DESY)24/05/2012, 13:30A project to allow long term access and physics analysis of ZEUS data (ZEUS data preservation) has been established in collaboration with the DESY-IT group. In the ZEUS approach the analysis model is based on the Common Ntuple project, under development since 2006. The real data and all presently available Monte Carlo samples are being preserved in a flat ROOT ntuple format. There is...Go to contribution page
-
Scott Snyder (Brookhaven National Laboratory (US))24/05/2012, 13:30The final step in a HEP data-processing chain is usually to reduce the data to a 'tuple' form which can be efficiently read by interactive analysis tools such as ROOT. Often, this is implemented independently by each group analyzing the data, leading to duplicated effort and needless divergence in the format of the reduced data. ATLAS has implemented a common toolkit for performing this...Go to contribution page
-
Christoph Wasicki (Deutsches Elektronen-Synchrotron (DE)), Heather Gray (CERN), Simone Pagan Griso (Lawrence Berkeley National Lab. (US))24/05/2012, 13:30The track and vertex reconstruction algorithms of the ATLAS Inner Detector have demonstrated excellent performance in the early data from the LHC. However, the rapidly increasing number of interactions per bunch crossing introduces new challenges both in computational aspects and physics performance. We will discuss the strategy adopted by ATLAS in response to this increasing multiplicity by...Go to contribution page
-
Anthony Morley (CERN)24/05/2012, 13:30The Large Hadron Collider (LHC) at CERN is the world's largest particle accelerator, which collides proton beams at an unprecedented centre-of-mass energy of 7 TeV. ATLAS is a multipurpose experiment that records the products of the LHC collisions. In order to reconstruct the trajectories of charged particles produced in these collisions, ATLAS is equipped with a tracking system (Inner...Go to contribution page
-
Johannes Mattmann (Johannes-Gutenberg-Universitaet Mainz (DE))24/05/2012, 13:30The reconstruction and simulation of collision events is a major task in modern HEP experiments, involving several tens of thousands of standard CPUs. Graphics processors (GPUs), on the other hand, have become much more powerful and by far outperform standard CPUs in terms of floating-point operations, thanks to their massively parallel approach. The usage of these GPUs could therefore...Go to contribution page
-
Patrick Czodrowski (Technische Universitaet Dresden (DE))24/05/2012, 13:30Hadronic tau decays play a crucial role in taking Standard Model measurements as well as in the search for physics beyond the Standard Model. However, hadronic tau decays are difficult to identify and trigger on due to their resemblance to QCD jets. Given the large production cross section of QCD processes, designing and operating a trigger system with the capability to efficiently select...Go to contribution page
-
Gordon Watts (University of Washington (US))24/05/2012, 13:30Particle physics conferences and experiments generate a huge number of plots and presentations. It is impossible to keep up. A typical conference (like CHEP) will have hundreds of plots. A single analysis result from a major experiment will have almost 50 plots. Scanning a conference or sorting out which plots are new is almost a full-time job. The advent of multi-core computing and advanced video...Go to contribution page
-
Prof. Martin Erdmann (Rheinisch-Westfaelische Tech. Hoch. (DE))24/05/2012, 13:30The Visual Physics Analysis (VISPA) project addresses the typical development cycle of (re-)designing, executing, and verifying an analysis. It presents an integrated graphical development environment for physics analyses, using the Physics eXtension Library (PXL) as underlying C++ analysis toolkit. Basic guidance to the project is given by the paradigms of object oriented programming, data...Go to contribution page
-
Maria Alandes Pradillo (CERN)24/05/2012, 13:30The EMI project is based on the collaboration of four major middleware projects in Europe, all already developing middleware products and having their pre-existing strategies for developing, releasing and controlling their software artefacts. In total, the EMI project is made up of about thirty individual development teams, called “Product Teams” in EMI. A Product Team is responsible for the...Go to contribution page
-
Dr Torsten Antoni (KIT - Karlsruhe Institute of Technology (DE))24/05/2012, 13:30The xGUS helpdesk template is aimed at NGIs, DCIs and user communities wanting to structure their user support and integrate it with the EGI support. xGUS contains all basic helpdesk functionalities. It is hosted and maintained at KIT in Germany. Portal administrators from the client DCI or user community can customize the portal to their specific needs. Via web, they can edit the support...Go to contribution page
-
Marek Gayer (CERN)24/05/2012, 13:55Software Engineering, Data Stores and Databases (track 5)ParallelWe present our effort for the creation of a new software library of geometrical primitives, which are used for solid modelling in Monte Carlo detector simulations. We plan to replace and unify current geometrical primitive classes in the CERN software projects Geant4 and ROOT with this library. Each solid is represented by a C++ class with methods suited for measuring distances of particles...Go to contribution page
-
Johannes Rauch (Technische Universität München)24/05/2012, 13:55A pattern recognition software for a continuously operating high rate Time Projection Chamber with Gas Electron Multiplier amplification (GEM-TPC) has been designed and tested. A track-independent clustering algorithm delivers space points. A true 3-dimensional track follower combines them to helical tracks, without constraints on the vertex position. Fast helix fits, based on a conformal...Go to contribution page
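The fast helix fit mentioned in this abstract typically relies on a conformal mapping: in the plane transverse to the magnetic field, points on a circle passing through the origin are mapped onto a straight line by (u, v) = (x, y)/(x² + y²), so a linear fit replaces a non-linear circle fit. A minimal sketch of the mapping itself (not the GEM-TPC code):

```python
import math

def conformal_map(points):
    """Map (x, y) -> (u, v) = (x, y) / (x^2 + y^2).  A circle through
    the origin with centre (a, b) becomes the straight line
    a*u + b*v = 1/2, so circle finding reduces to line finding."""
    return [(x / (x * x + y * y), y / (x * x + y * y)) for x, y in points]

# Points on a circle of radius 1 centred at (1, 0): it passes through
# the origin, so all mapped points satisfy u = 1/2 exactly.
pts = [(1 + math.cos(t), math.sin(t)) for t in (0.3, 1.0, 2.0)]
uv = conformal_map(pts)
```

The derivation is one line: a circle through the origin obeys x² + y² = 2ax + 2by; dividing by x² + y² gives a·u + b·v = 1/2, which is linear in (u, v).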
-
Jeff Templon (NIKHEF (NL))24/05/2012, 13:55Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelThis contribution describes a prototype grid proxy cache system developed at Nikhef, motivated by a desire to construct the first building block of a future https-based Content Delivery Network for multiple-VO grid infrastructures. Two goals drove the project: firstly to provide a "native view" of the grid for desktop-type users, and secondly to improve performance for physics-analysis type...Go to contribution page
-
Steve Barnet (University of Wisconsin Madison)24/05/2012, 13:55Computer Facilities, Production Grids and Networking (track 4)ParallelBesides the big LHC experiments, a number of mid-size experiments are coming online which need to define new computing models to meet their processing and storage requirements. We present the hybrid computing model of IceCube, which leverages Grid models with a more flexible direct user model, as an example of a possible solution. In IceCube a central datacenter at...Go to contribution page
-
Mr Federico Carminati (CERN)24/05/2012, 14:20Detector simulation is one of the most CPU-intensive tasks in modern High Energy Physics. While its importance for the design of the detector and the estimation of the efficiency is ever increasing, the number of events that can be simulated is often constrained by the available computing resources. Various kinds of "fast simulations" have been developed to alleviate this problem, however,...Go to contribution page
-
Bertrand Bellenot (CERN)24/05/2012, 14:20Software Engineering, Data Stores and Databases (track 5)ParallelA JavaScript version of the ROOT I/O subsystem is being developed, in order to be able to browse (inspect) ROOT files in a platform independent way. This allows the content of ROOT files to be displayed in most web browsers, without having to install ROOT or any other software on the server or on the client. This gives a direct access to ROOT files from new (e.g. portable) devices in a light...Go to contribution page
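Platform-independent reading of ROOT files, as targeted by the JavaScript I/O described here, hinges on ROOT serialising multi-byte fields in big-endian order; a browser implementation decodes them explicitly (e.g. via DataView). A hedged sketch of the decoding idea in Python, using a synthetic two-field header (the real ROOT file header contains many more fields than shown):

```python
import struct

def read_header(buf):
    """Decode a ROOT-style header fragment: a 4-byte magic string
    followed by a big-endian 32-bit version word.  Simplified layout
    for illustration only."""
    magic = buf[:4]
    (version,) = struct.unpack(">i", buf[4:8])  # ">" = big-endian
    return magic, version

# Build a synthetic buffer and decode it back.
buf = b"root" + struct.pack(">i", 62406)
magic, version = read_header(buf)
```

The same byte-level logic, ported to JavaScript typed arrays, is what lets a browser inspect a ROOT file without any server-side software.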
-
Dr Armando Fella (INFN Pisa)24/05/2012, 14:20Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelThe SuperB asymmetric energy e+e- collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab^-1 and a luminosity target of 10^36 cm^-2 s^-1. The increasing network performance also...Go to contribution page
-
Mr Pier Paolo Ricci (INFN CNAF)24/05/2012, 14:20Computer Facilities, Production Grids and Networking (track 4)ParallelThe storage solution currently used in production at the INFN Tier-1 at CNAF is the result of several years of case studies, software development and tests. This solution, called the Grid Enabled Mass Storage System (GEMSS), is based on a custom integration between a fast and reliable parallel filesystem (IBM GPFS) and a complete integrated tape backend based on TIVOLI TSM Hierarchical...Go to contribution page
-
Dr Balazs Konya (Lund University (SE))24/05/2012, 14:45Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelScientific research communities have benefited recently from the increasing availability of computing and data infrastructures with unprecedented capabilities for large scale distributed initiatives. These infrastructures are largely defined and enabled by the middleware they deploy. One of the major issues in the current usage of research infrastructures is the need to use similar but often...Go to contribution page
-
David Tuckett (CERN)24/05/2012, 14:45Software Engineering, Data Stores and Databases (track 5)ParallelImprovements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in...Go to contribution page
-
Artur Jerzy Barczyk (California Institute of Technology (US)), Azher Mughal (California Institute of Technology), Sandor Rozsa (California Institute of Technology (CALTECH))24/05/2012, 14:45Computer Facilities, Production Grids and Networking (track 4)Parallel40Gb/s network technology is increasingly available today in data centers as well as in network backbones. We have built and evaluated storage systems equipped with the latest generation of 40GbE Network Interface Cards. The recently available motherboards with the PCIe v3 bus provide the possibility to reach the full 40Gb/s rate per network interface. A fast caching system was built...Go to contribution page
-
Stephan G. Hageboeck (University of Bonn)24/05/2012, 14:45Three dimensional image reconstruction in medical imaging applies sophisticated filter algorithms to linear trajectories of coincident photon pairs in PET. The goal is to reconstruct an image of a source density distribution. In a similar manner, tracks in particle physics originate from vertices that need to be distinguished from background track combinations. We investigate if methods from...Go to contribution page
-
Durga Rajaram (IIT, Chicago)24/05/2012, 15:10Software Engineering, Data Stores and Databases (track 5)ParallelThe Muon Ionization Cooling Experiment (MICE) has developed the MICE Analysis User Software (MAUS) to simulate and analyse experimental data. It serves as the primary codebase for the experiment, providing for online data quality checks and offline batch simulation and reconstruction. The code is structured in a Map-Reduce framework to allow parallelization whether on a personal machine or in...Go to contribution page
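The Map-Reduce structure mentioned in this abstract can be sketched generically: mappers transform one input document (a "spill") at a time and are therefore trivially parallelisable, while a reducer folds the mapped documents into a single summary. The names below (`run_pipeline`, `n_hits`) are illustrative inventions, not the MAUS API:

```python
from functools import reduce

def run_pipeline(spills, mappers, reducer, init):
    """Map-Reduce-style dataflow sketch: each mapper transforms one
    spill at a time (so spills could be processed in parallel), then a
    reducer folds the mapped spills into one summary value."""
    for m in mappers:
        spills = [m(s) for s in spills]
    return reduce(reducer, spills, init)

# Hypothetical processing: derive a hit count per spill, then sum them.
spills = [{"raw": [1, 2, 3]}, {"raw": [4, 5]}]
mappers = [lambda s: {**s, "n_hits": len(s["raw"])}]
total = run_pipeline(spills, mappers, lambda acc, s: acc + s["n_hits"], 0)
```

Because each mapper touches only one spill, the same pipeline definition can run serially on a laptop or be farmed out spill-by-spill on a batch system, which is the portability the abstract highlights.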
-
Tim Bell (CERN)24/05/2012, 15:10Computer Facilities, Production Grids and Networking (track 4)ParallelThe CERN Computer Centre is reviewing strategies for optimizing the use of the existing infrastructure in the future, in the likely scenario that any extension will be remote from CERN, and in light of the way other large facilities are operated today. Over the past six months, CERN has been investigating modern and widely-used tools and procedures used for virtualisation,...Go to contribution page
-
Mrs Ruth Pordes (Fermi National Accelerator Lab. (US))24/05/2012, 15:10Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelAs it enters adolescence the Open Science Grid (OSG) is bringing a maturing fabric of Distributed High Throughput Computing (DHTC) services that supports an expanding HEP community to an increasingly diverse spectrum of domain scientists. Working closely with researchers on campuses throughout the US and in collaboration with national cyberinfrastructure initiatives, we transform their...Go to contribution page
-
Jakob Lettenbichler (HEPHY Vienna, Austria), Mr Moritz Nadler (Austrian Academy of Sciences (AT)), Rudolf Fruhwirth (Austrian Academy of Sciences (AT))24/05/2012, 15:10The Silicon Vertex Detector (SVD) of the Belle II experiment is a newly developed device with four measurement layers. The detector is designed to enable track reconstruction down to the lowest momenta possible, in order to significantly increase the effective data sample and the physics potential of the experiment. Both track finding and track fitting have to deal with these...Go to contribution page
-
Jill Gemmill24/05/2012, 15:45RIDER is an NSF-funded study (Award #1223688) of the current and projected 2020 international data requirements of the science and engineering community, specifically the flow of data into the US. Results will assist NSF in predicting future capacity requirements and planning funding for the International Research Network Connections (IRNC) programs. This BoF is an opportunity to provide your...
-
Dr William Badgett (Fermilab)24/05/2012, 16:35The CDF Collider Detector at Fermilab ceased data collection on September 30, 2011 after over twenty-five years of operation. We review the performance of the CDF Run II data acquisition systems over the last ten of these years, while recording nearly 10 fb-1 of proton-antiproton collisions with a high degree of efficiency. Technology choices in the online control and configuration systems...
-
Gerben Stavenga (Fermilab)24/05/2012, 16:35We present a GPU-based parton level event generator for multi-jet events at the LHC. The current implementation generates up to 10 jets with a possible vector boson. At leading order the speed increase over a single core CPU is in excess of a factor of 500 using a single desktop based NVIDIA Fermi GPU. We will also present results for the next-to-leading order implementation.
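The speed-up is possible because parton-level event generation is embarrassingly parallel: every phase-space point is sampled and weighted independently, so one GPU thread can own one point. A toy CPU sketch of that structure, where the "weight" is just a quarter-circle indicator (estimating pi) rather than a physics matrix element:

```python
import random

random.seed(42)

def sample_point():
    # Toy stand-in for sampling a phase-space point.
    return random.random(), random.random()

def weight(x, y):
    # Toy stand-in for a matrix-element weight:
    # 1 inside the unit quarter-circle, 0 outside.
    return 1.0 if x * x + y * y < 1.0 else 0.0

n = 100_000
# Every term in this sum is independent of every other term;
# on a GPU, each one would be evaluated by its own thread.
estimate = 4.0 * sum(weight(*sample_point()) for _ in range(n)) / n
```

Because there is no coupling between terms, the only serial step is the final reduction, which is why factors of several hundred over a single core are attainable.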
-
Paul Millar (Deutsches Elektronen-Synchrotron (DE))24/05/2012, 16:35Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelFor over a decade, dCache has been synonymous with large-capacity, fault-tolerant storage using commodity hardware that supports seamless data migration to and from tape. Over that time, it has satisfied the requirements of various demanding scientific user communities to store their data, transfer it between sites, and provide fast, site-local access. When the dCache project started, the focus was...
-
Andrzej Nowak (CERN openlab)24/05/2012, 16:35Software Engineering, Data Stores and Databases (track 5)ParallelAs the mainstream computing world has shifted from multi-core to many-core platforms, the situation for software developers has changed as well. With the numerous hardware and software options available, choices balancing programmability and performance are becoming a significant challenge. The expanding multiplicative dimensions of performance offer a growing number of possibilities that need...
-
Mr Thomas Hauth (KIT - Karlsruhe Institute of Technology (DE))24/05/2012, 17:00Software Engineering, Data Stores and Databases (track 5)ParallelThe processing of data acquired by the CMS detector at LHC is carried out with an object-oriented C++ software framework: CMSSW. With the increasing luminosity delivered by the LHC, the treatment of recorded data requires extraordinarily large computing resources, also in terms of CPU usage. A possible solution to cope with this task is the exploitation of the features offered by the latest...
-
Gordon Watts (University of Washington (US))24/05/2012, 17:00The Tevatron Collider, located at the Fermi National Accelerator Laboratory, delivered its last 1.96 TeV proton-antiproton collisions on September 30th, 2011. The DZERO experiment continues to take cosmic data for final alignment for several more months. Since Run 2 started in March 2001, all DZERO data has been collected by the DZERO Level 3 Trigger/DAQ System. The system is a modern,...
-
Mr Heath Skarlupka (UW Madison)24/05/2012, 17:00GPGPU computing offers extraordinary increases in pure processing power for parallelizable applications. In IceCube we use GPUs for ray-tracing of Cherenkov photons in the Antarctic ice as part of detector simulation. We report on how we implemented the mixed simulation production chain to include the processing on the GPGPU cluster for the IceCube Monte Carlo production. We also present ideas...
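Photon ray-tracing parallelizes the same way as other GPU workloads mentioned at this conference: each photon scatters and is absorbed independently of all others. A minimal sketch with invented, depth-independent ice parameters (real IceCube ice models vary strongly with depth):

```python
import random

random.seed(1)

SCATTER_LEN = 25.0   # toy scattering length (m); illustrative value only
ABSORB_LEN = 100.0   # toy absorption length (m); illustrative value only

def propagate_photon():
    """One photon: exponential scatter steps until absorption.
    Photons are independent, which is why one GPU thread per photon works."""
    absorb_at = random.expovariate(1.0 / ABSORB_LEN)  # where the photon dies
    path, scatters = 0.0, 0
    while True:
        path += random.expovariate(1.0 / SCATTER_LEN)
        if path >= absorb_at:
            return absorb_at, scatters
        scatters += 1

results = [propagate_photon() for _ in range(5_000)]
mean_path = sum(r[0] for r in results) / len(results)
mean_scatters = sum(r[1] for r in results) / len(results)
```

With these toy lengths the mean path converges to the absorption length and the mean scatter count to roughly their ratio, which is a quick sanity check on the propagation loop.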
-
Mr Zsolt Molnár (CERN)24/05/2012, 17:00Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelLHC experiments at CERN and worldwide utilize WLCG resources and middleware components to perform distributed computing tasks. One of the most important tasks is reliable file replication. It is a complex problem, suffering from transfer failures, disconnections, transfer duplication, server and network overload, differences in storage systems, etc. To address these problems, EMI and gLite...
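Reliable replication in the face of transfer failures usually reduces to retry logic of this shape; this is a toy sketch, not the FTS or gLite API, and the SRM endpoints are hypothetical:

```python
import time

attempts_seen = {"n": 0}

def flaky_transfer(src, dst):
    # Hypothetical stand-in for a grid transfer: fails twice, then succeeds.
    attempts_seen["n"] += 1
    if attempts_seen["n"] <= 2:
        raise IOError("transfer failed")
    return "%s -> %s" % (src, dst)

def replicate(src, dst, max_attempts=5, base_delay=0.01):
    """Retry a transfer with exponential backoff, as a reliable
    file-replication service must when endpoints are overloaded."""
    for attempt in range(max_attempts):
        try:
            return flaky_transfer(src, dst)
        except IOError:
            # Back off before retrying so a struggling server is not hammered.
            time.sleep(base_delay * (2 ** attempt))
    raise IOError("gave up after %d attempts" % max_attempts)

result = replicate("srm://cern/file", "srm://fnal/file")
```

A production service layers duplicate suppression and per-channel rate limits on top of this core loop, but the retry-with-backoff skeleton is the common denominator.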
-
Jill Gemmill24/05/2012, 17:00RIDER is an NSF-funded study (Award #1223688) of the current and projected 2020 international data requirements of the science and engineering community, specifically the flow of data into the US. Results will assist NSF in predicting future capacity requirements and planning funding for the International Research Network Connections (IRNC) programs. This BoF is an opportunity to provide your...
-
Dr Domenico Giordano (CERN), Fernando Harald Barreiro Megino (Universidad Autonoma de Madrid (ES))24/05/2012, 17:25Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelDuring the first two years of data taking, the CMS experiment has collected over 20 petabytes of data and processed and analyzed it on the distributed, multi-tiered computing infrastructure of the Worldwide LHC Computing Grid. Given the increasing data volume that has to be stored and efficiently analyzed, it is a challenge for several LHC experiments to optimize and automate the data...
-
Dr Krzysztof Korcyl (Polish Academy of Sciences (PL))24/05/2012, 17:25A novel architecture is being proposed for the data acquisition and trigger system of the PANDA experiment at the HESR facility at FAIR/GSI. The experiment will run without a hardware trigger signal and use timestamps to correlate detector data from a given time window. The broad physics program, in combination with the high rate of 2×10^7 interactions, requires very selective filtering...
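Timestamp-based event building of the kind described can be sketched as grouping time-ordered hits into candidates whenever the gap to the previous hit exceeds the correlation window; the detector names, timestamps, and window below are illustrative only:

```python
# Timestamped detector hits (time in ns, subsystem name),
# as delivered by a trigger-less, free-streaming readout.
hits = [(3, "STT"), (5, "EMC"), (8, "MVD"),
        (40, "STT"), (43, "EMC"), (90, "MVD")]

WINDOW = 10  # hits closer than this are assumed to share one interaction

def build_events(hits, window):
    """Group time-ordered hits into candidate events: a new event
    starts whenever the gap to the previous hit exceeds the window."""
    events, current, last_t = [], [], None
    for t, det in sorted(hits):
        if last_t is not None and t - last_t > window:
            events.append(current)
            current = []
        current.append((t, det))
        last_t = t
    if current:
        events.append(current)
    return events

events = build_events(hits, WINDOW)
```

The selective filtering then runs on each candidate event in turn, replacing the decision a hardware trigger would otherwise have made.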
-
Stefan Lohn (Universitaet Bonn (DE))24/05/2012, 17:25Software Engineering, Data Stores and Databases (track 5)ParallelChip multiprocessors are going to support massive parallelism to provide further processing capacities by adding more and more physical and logical cores. Unfortunately, the growing number of cores comes along with slower advances in the speed and size of the main memory, the cache hierarchy, the front-side bus or processor interconnects. Parallelism can only result in performance...
-
Dr Mohammad Al-Turany (GSI)24/05/2012, 17:25The high data rates expected from the planned detectors at FAIR (CBM, PANDA) call for dedicated attention with respect to the computing power needed in online (e.g. high-level event selection) and offline analysis. The graphics processing units (GPUs) have evolved into high-performance co-processors that can be easily programmed with common high-level languages such as C, Fortran and C++. Today's...
-
Andrew John Washbrook (University of Edinburgh (GB))24/05/2012, 17:50Multivariate classification methods based on machine learning techniques are commonly used for data analysis at the LHC in order to look for signatures of new physics beyond the standard model. A large variety of these classification techniques are contained in the Toolkit for Multivariate Analysis (TMVA) which enables training, testing, performance evaluation and application of the chosen...
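The train-then-apply workflow of such classifiers can be illustrated with the simplest linear discriminant: project events onto the line joining the class means. This is a toy stand-in, not TMVA itself, and the samples are invented:

```python
# Toy signal/background samples in two discriminating variables,
# standing in for the training phase of a multivariate classifier.
signal = [(2.0, 2.1), (2.2, 1.9), (1.8, 2.0), (2.1, 2.2)]
background = [(0.0, 0.1), (0.2, -0.1), (-0.1, 0.0), (0.1, 0.2)]

def mean(sample):
    n = len(sample)
    return [sum(v[i] for v in sample) / n for i in range(2)]

mu_s, mu_b = mean(signal), mean(background)

# "Training": the discriminant direction is the line joining the class
# means (a Fisher discriminant with identity covariance), and the cut
# is placed at the projected midpoint between the two means.
w = [mu_s[0] - mu_b[0], mu_s[1] - mu_b[1]]
cut = sum(wi * (ms + mb) / 2 for wi, ms, mb in zip(w, mu_s, mu_b))

def classify(event):
    # "Application": evaluate the trained discriminant on a new event.
    return sum(wi * xi for wi, xi in zip(w, event)) > cut

n_correct = sum(classify(e) for e in signal) + sum(not classify(e) for e in background)
```

Boosted decision trees or neural networks replace the projection with something far more flexible, but the train/test/apply pattern is identical, and it is the application phase over millions of events that GPU acceleration targets.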
-
Wim Lavrijsen (Lawrence Berkeley National Lab. (US))24/05/2012, 17:50Software Engineering, Data Stores and Databases (track 5)ParallelThe Python programming language allows objects and classes to respond dynamically to the execution environment. Most of this, however, is made possible through language hooks which by definition cannot be optimized and thus tend to be slow. The PyPy implementation of Python includes a tracing just-in-time compiler (JIT), which allows similar dynamic responses but at the interpreter-, rather...
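An example of the language hooks in question, assuming an invented record class: `__getattr__` makes attribute access fully dynamic, and in CPython every lookup through it pays the hook's dispatch cost, whereas a tracing JIT like PyPy's can specialize the hot loop to skip it:

```python
class LazyRecord:
    """Resolves unknown attributes from a dict via the __getattr__ hook,
    the kind of dynamic behavior the abstract refers to."""

    def __init__(self, data):
        self._data = data

    def __getattr__(self, name):
        # Invoked only when normal attribute lookup fails.
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)

rec = LazyRecord({"pt": 41.5, "eta": -1.2})

# A hot loop: in CPython, each iteration re-runs the hook machinery;
# a tracing JIT observes the repeated shape and compiles a fast path.
total = sum(rec.pt for _ in range(1000))
```

Bindings generators exploit exactly this kind of hook to expose C++ classes lazily, which is why the interpreter-level JIT, rather than per-application tuning, is the attractive place to recover the speed.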
-
Tadashi Maeno (Brookhaven National Laboratory (US))24/05/2012, 17:50Distributed Processing and Analysis on Grids and Clouds (track 3)ParallelThe PanDA Production and Distributed Analysis System is the ATLAS workload management system for processing user analysis, group analysis and production jobs. In 2011 more than 1400 users have submitted jobs through PanDA to the ATLAS grid infrastructure. The system processes more than 2 million analysis jobs per week. Analysis jobs are routed to sites based on the availability of relevant...
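Availability-based routing of the kind described can be sketched as picking the least-loaded site that holds a replica of the job's input dataset; the catalog, dataset names, and load numbers below are invented, not the PanDA brokerage algorithm:

```python
# Toy replica catalog: which sites hold each input dataset.
replicas = {
    "data12_8TeV.AOD": {"CERN", "BNL"},
    "mc12.AOD": {"BNL"},
}
# Toy per-site load, standing in for queue depth and pilot availability.
running_jobs = {"CERN": 900, "BNL": 300, "TRIUMF": 50}

def broker(dataset):
    """Route a job to the least-loaded site holding its input data.
    Sites without a replica are never considered, so no input staging
    is needed before the job can start."""
    sites = replicas.get(dataset, set())
    if not sites:
        return None  # no replica anywhere: the job cannot be brokered
    return min(sites, key=lambda s: running_jobs[s])

site = broker("data12_8TeV.AOD")
```

The real broker folds in many more signals (site health, release availability, user shares), but data locality as a hard constraint is the core idea.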
-
Dr Dirk Hoffmann (Universite d'Aix - Marseille II (FR))24/05/2012, 17:50We present the prototyping of a 10 Gigabit Ethernet-based UDP data acquisition (DAQ) system that has been conceived in the context of the Array and Control group of CTA (Cherenkov Telescope Array). The CTA consortium plans to build the next generation ground-based gamma-ray instrument, with approximately 100 telescopes of at least three different sizes installed on two sites. The genuine camera...
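The skeleton of a UDP-based DAQ is small enough to sketch with the standard socket API. Here sender and receiver share one process over loopback purely for illustration, whereas the real system streams camera fragments over dedicated 10 GbE links:

```python
import socket

# Receiver: the DAQ side, bound to an OS-chosen free port.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
recv.settimeout(1.0)  # never block forever on a lost datagram
addr = recv.getsockname()

# Sender: stands in for a camera server pushing event fragments.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(3):
    # A real camera would stream fixed-size binary fragments.
    send.sendto(("fragment-%d" % i).encode(), addr)

# UDP gives no delivery guarantee; a production DAQ must detect and
# account for lost or reordered datagrams via sequence numbers.
fragments = [recv.recv(2048).decode() for _ in range(3)]
send.close()
recv.close()
```

The appeal of UDP at these rates is exactly what the sketch hints at: no connection state or retransmission machinery in the hot path, with loss accounting pushed up into the application protocol.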
-
Remi Mommsen (Fermi National Accelerator Lab. (US))25/05/2012, 08:30
-
Johannes Elmsheuser (Ludwig-Maximilians-Univ. Muenchen (DE))25/05/2012, 09:00
-
Dr Adam Lyon (Fermilab)25/05/2012, 09:30
-
Andreas Heiss (KIT - Karlsruhe Institute of Technology (DE))25/05/2012, 10:30
-
David Lange (Lawrence Livermore Nat. Laboratory (US))25/05/2012, 11:00
-
25/05/2012, 11:30
-
David Groep (NIKHEF)25/05/2012, 12:00
-
532. Building, distributing and running big software projects on MacOSX... There is an app for that!Mr Giulio Eulisse (Fermi National Accelerator Lab. (US))Software Engineering, Data Stores and Databases (track 5)ParallelWe present CMS' experience in porting its full offline software stack to MacOSX. In the first part we will focus on the system level issues encountered while doing the port, in particular with respect to the different behavior of the compiler and linker in handling common symbols. In the second part we present our progress with an alternative approach of distributing large software projects...