# Computing in High Energy and Nuclear Physics (CHEP) 2012

21-25 May 2012
New York City, NY, USA
US/Eastern timezone
577 contributions
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
The LHCb experiment is a spectrometer dedicated to the study of heavy flavor at the LHC. The rate of proton-proton collisions at the LHC is 15 MHz, but disk space limitations mean that only 3 kHz can be written to tape for offline processing. For this reason the LHCb data acquisition system -- trigger -- plays a key role in selecting signal events and rejecting background. Because the trigger effi ... More
Presented by Marco CATTANEO on 22 May 2012 at 14:20
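The rates quoted above fix the overall reduction the trigger has to deliver; a quick back-of-the-envelope check (plain arithmetic, not LHCb code):

```python
# Overall reduction factor the LHCb trigger must achieve,
# from the rates quoted in the abstract.
input_rate_hz = 15e6   # proton-proton collision rate seen by the detector
output_rate_hz = 3e3   # rate that can be written to tape for offline processing

reduction = input_rate_hz / output_rate_hz
print(f"required rejection factor: {reduction:.0f}")
```

Only one event in roughly 5000 survives, which is why the trigger's efficiency on signal is so critical.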
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
The LHCb experiment has been using the CMT build and configuration tool for its software since the first versions, mainly because of its multi-platform build support and its powerful configuration management functionality. Still, CMT has some limitations in terms of build performance and the increased complexity added to the tool to cope with new use cases added latterly. Therefore, we have been l ... More
Presented by Marco CLEMENCIC on 22 May 2012 at 13:30
Type: Parallel Session: Online Computing
Track: Online Computing (track 1)
The CDF Collider Detector at Fermilab ceased data collection on September 30, 2011 after over twenty five years of operation. We review the performance of the CDF Run II data acquisition systems over the last ten of these years while recording nearly 10 fb-1 of proton-antiproton collisions with a high degree of efficiency. Technology choices in the online control and configuration systems and f ... More
Presented by Dr. William BADGETT on 24 May 2012 at 16:35
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
This contribution describes the design and development of a fully software-based Online test-bench for LHCb. The current “Full Experiment System Test” (FEST) is a programmable data injector with a test setup that runs using a simulated data acquisition (DAQ) chain. FEST is heavily used in LHCb by different groups, and thus the motivation for complete software emulation of the test-bench is to ... More
Presented by Niko NEUFELD, Vijay Kartik SUBBIAH on 24 May 2012 at 13:30
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
We present a GPU-based parton-level event generator for multi-jet events at the LHC. The current implementation generates up to 10 jets with a possible vector boson. At leading order the speed increase over a single-core CPU is in excess of a factor of 500 using a single desktop-based NVIDIA Fermi GPU. We will also present results for the next-to-leading-order implementation.
Presented by Gerben STAVENGA on 24 May 2012 at 16:35
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
One of the main barriers to widespread Grid adoption in scientific communities stems from the intrinsic complexity of handling X.509 certificates, which represent the foundation of the Grid security stack. To hide this complexity, several Grid portals have been proposed in recent years which, however, do not completely solve the problem, either requiring that users manage their own certific ... More
Presented by Marco BENCIVENNI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The accounting activity in a production computing Grid is of paramount importance in order to understand the utilization of the available resources. While several CPU accounting systems are deployed within the European Grid Infrastructure (EGI), storage accounting systems that are stable enough to be adopted in a production environment are not yet available. A growing interest is being put ... More
Presented by Andrea CRISTOFORI on 22 May 2012 at 13:30
Type: Parallel Session: Collaborative tools
Track: Collaborative tools (track 6)
The age and size of the CMS collaboration at the LHC means it now has many hundreds of inhomogeneous web sites and services and more than 100,000 documents. We describe a major initiative to create a single coherent CMS internal and public web site. This uses the Drupal web Content Management System (now supported by CERN/IT) on top of a standard LAMP stack (Linux, Apache, MySQL, and php/perl). ... More
Presented by Lucas TAYLOR on 21 May 2012 at 15:10
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The analysis of the complex LHC data usually follows a standard path that aims at minimizing not only the amount of data but also the number of observables used. After a number of steps of slimming and skimming the data, the remaining few terabytes of ROOT files hold a selection of the events and a flat structure for the variables needed that can be more easily inspected and traversed in the final ... More
Presented by Dr. Isidro GONZALEZ CABALLERO on 24 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
The volume and diversity of metadata in an experiment of the size and scope of ATLAS is considerable. Even the definition of metadata may seem context-dependent: data that are primary for one purpose may be metadata for another. Trigger information and data from the Large Hadron Collider itself provide cases in point, but examples abound. Metadata about logical or physics constructs, such as d ... More
Presented by Dr. David MALON on 21 May 2012 at 17:00
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
In the past two years the ATLAS Collaboration at the LHC has collected a large volume of data and published a number of groundbreaking papers. The Grid-based ATLAS distributed computing infrastructure played a crucial role in enabling timely analysis of the data. We will present a study of the performance and usage of the ATLAS Grid as a platform for physics analysis and discuss changes that analys ... More
Presented by Sergey PANITKIN on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The parameters of the beam spot produced by the LHC in the ATLAS interaction region are computed online using the ATLAS High Level Trigger (HLT) system. The high rate of triggered events is exploited to make precise measurements of the position, size and orientation of the luminous region in near real-time, as these parameters change significantly even during a single data-taking run. We present t ... More
Presented by Chris BEE on 24 May 2012 at 13:30
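The idea of extracting beam-spot parameters from triggered events can be illustrated with a toy sketch (not the ATLAS HLT algorithm): given a sample of reconstructed vertex positions, the luminous-region centroid and width follow from simple sample statistics. All numbers below are invented for the example.

```python
import random
import statistics

# Toy beam-spot estimate (illustrative only, not the ATLAS HLT code):
# simulate reconstructed vertex x-coordinates as a Gaussian, then
# estimate the luminous-region position and width from the sample.
random.seed(42)
true_x_mm, true_sigma_mm = 0.05, 0.015
vertices_x = [random.gauss(true_x_mm, true_sigma_mm) for _ in range(10000)]

x_pos = statistics.mean(vertices_x)    # beam-spot x position
x_size = statistics.stdev(vertices_x)  # beam-spot transverse width in x

print(f"x = {x_pos:.4f} mm, sigma_x = {x_size:.4f} mm")
```

With a high trigger rate, such estimates can be refreshed in near real-time as the parameters drift during a run.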
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The line between native and web applications is becoming increasingly blurred as modern web browsers are becoming powerful platforms on which applications can be run. Such applications are trivial to install and are readily extensible and easy to use. In an educational setting, web applications permit a way to rapidly deploy tools in a highly-restrictive computing environment. The I2U2 collabo ... More
Presented by Dr. Thomas MC CAULEY on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
After a long period of project-based funding, during which the improvement of the services provided to the user communities was the main focus, distributed computing infrastructures (DCIs), having reached and established production quality, now need to tackle the issue of long-term sustainability. With the transition from EGEE to EGI in 2010 the major part of the responsibility (especially financi ... More
Presented by Dr. Torsten ANTONI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
Current network technologies like dynamic network circuits and emerging protocols like OpenFlow enable the network as an active component in the context of data transfers. We present a framework that provides a simple interface for scientists to move data between sites over a wide area network with bandwidth guarantees. Although the system hides the complexity from the end users, it was designed ... More
Presented by Ramiro VOICU on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The CMS distributed data analysis workflow assumes that jobs run in a different location to where their results are finally stored. Typically the user output must be transferred across the network from one site to another, possibly on a different continent or over links not necessarily validated for high bandwidth/high reliability transfer. This step is named stage-out and in CMS was originally im ... More
Presented by Daniele SPIGA, Mattia CINQUILLI, Hassen RIAHI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
To understand in detail cosmic magnetic fields and sources of Ultra High Energy Cosmic Rays (UHECRs) we have developed a Monte Carlo simulation for galactic and extragalactic propagation. In our approach we identify three different propagation regimes for UHECRs, the Milky Way, the local universe out to 110 Mpc, and the distant universe. For deflections caused by the Galactic magnetic field a l ... More
Presented by Mr. Gero MüLLER on 24 May 2012 at 13:30
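To see why the Galactic magnetic field defines its own propagation regime, one can compare the Larmor radius of an ultra-high-energy proton with Galactic length scales. This is a standard order-of-magnitude calculation, not the simulation code described above:

```python
# Larmor radius r = E / (e c B) of an ultra-relativistic proton,
# compared to Galactic length scales (order-of-magnitude check).
e = 1.602e-19   # elementary charge [C]
c = 2.998e8     # speed of light [m/s]
kpc = 3.086e19  # kiloparsec [m]

E = 1e19 * e    # 10 EeV proton, energy in joules
B = 3e-10       # typical Galactic field of 3 microgauss, in tesla

r_larmor_kpc = E / (e * c * B) / kpc
print(f"Larmor radius ~ {r_larmor_kpc:.1f} kpc")
```

A few kpc is comparable to the thickness of the Galactic disk, so deflections in the Milky Way are substantial at these energies and must be modeled separately from extragalactic propagation.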
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Since the ALICE experiment began data taking in late 2009, the amount of end user jobs on the AliEn Grid has increased significantly. Presently 1/3 of the 30K CPU cores available to ALICE are occupied by jobs submitted by about 400 distinct users. The overall stability of the AliEn middleware has been excellent throughout the 2 years of running, but the massive amount of end-user analysis and its ... More
Presented by Costin GRIGORAS on 22 May 2012 at 13:30
Type: Parallel Session: Computer Facilities, Production Grids and Networking
Track: Computer Facilities, Production Grids and Networking (track 4)
The upgraded LHCb experiment, which is supposed to go into operation in 2018/19, will require a massive increase in its compute facilities. A new 2 MW data-centre is planned at the LHCb site. Apart from the obvious requirement of minimizing the cost, the data-centre has to tie in well with the needs of online processing, while at the same time staying open for future and offline use. We present our ... More
Presented by Niko NEUFELD on 21 May 2012 at 17:00
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The Statistical Toolkit is an open source system specialized in the statistical comparison of distributions. It addresses requirements common to different experimental domains, such as simulation validation (e.g. comparison of experimental and simulated distributions), regression testing in software development and detector performance monitoring. The first development cycles concerned the prov ... More
Presented by Mr. Matej BATIC on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The goal for CMS computing is to maximise the throughput of simulated event generation while also processing the real data events as quickly and reliably as possible. To maintain this achievement as the quantity of events increases, since the beginning of 2011 CMS computing has migrated at the Tier 1 level from its old production framework, ProdAgent, to a new one, WMAgent. The WMAgent framework o ... More
Presented by Rapolas KASELIS on 22 May 2012 at 13:30
Session: Plenary
Presented by Mr. Federico CARMINATI on 24 May 2012 at 09:30
Session: Plenary
Presented by Markus KLUTE on 22 May 2012 at 08:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
We describe a low-cost Petabyte-scale Lustre filesystem deployed for High Energy Physics. The use of commodity storage arrays and bonded ethernet interconnects makes the array cost effective, whilst providing high bandwidth to the storage. The filesystem is a POSIX filesystem, presented to the Grid using the StoRM SRM. The system is highly modular. The building blocks of the array, the Lustre O ... More
Presented by Christopher John WALKER, Dr. Alex MARTIN on 22 May 2012 at 13:30
Type: Parallel Session: Computer Facilities, Production Grids and Networking
Track: Computer Facilities, Production Grids and Networking (track 4)
Distributed storage systems are critical to the operation of the WLCG. These systems are not limited to fulfilling the long term storage requirements. They also serve data for computational analysis and other computational jobs. Distributed storage systems provide the ability to aggregate the storage and IO capacity of disks and tapes, but at the end of the day IO rate is still bound by the capabi ... More
Presented by Erik Mattias WADENSTEIN on 22 May 2012 at 13:55
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
In today's information industry, the most discussed new technologies are virtualization and cloud computing. Virtualization makes heterogeneous resources transparent to users and plays a major role in large-scale data center management solutions. Cloud computing, which builds on virtualization, is emerging as a revolution in computing, demonstrating a gigantic advantage in resource sharing, ... More
Presented by Ms. qiulan HUANG on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The ATLAS Grid Information System (AGIS) centrally stores and exposes static, dynamic and configuration parameters required to configure and to operate ATLAS distributed computing systems and services. AGIS is designed to integrate information about resources, services and topology of the ATLAS grid infrastructure from various independent sources including BDII, GOCDB, the ATLAS data management s ... More
Presented by Alexey ANISENKOV on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The GridKa center at the Karlsruhe Institute for Technology is the largest ALICE Tier-1 center. It hosts 40,000 HEP-SPEC06, approximately 2.75 PB of disk space and 5.25 PB of tape space for A Large Ion Collider Experiment (ALICE) at the CERN LHC. These resources are accessed via the AliEn middleware. The storage is divided into two instances, both using the storage middleware xrootd. We will ... More
Presented by Dr. Christopher JUNG on 22 May 2012 at 13:30
Type: Parallel Session: Online Computing
Track: Online Computing (track 1)
The ALICE High Level Trigger (HLT) is capable of performing an online reconstruction of heavy-ion collisions. The reconstruction of particle trajectories in the Time Projection Chamber (TPC) is the most compute intensive step. The TPC online tracker implementation combines the principle of the cellular automaton and the Kalman filter. It has been accelerated by the usage of graphics cards (GPUs ... More
Presented by David Michael ROHR on 21 May 2012 at 13:55
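The Kalman filter named in the abstract can be illustrated in one dimension. The sketch below shows only the generic predict/update cycle for a constant state with noisy measurements; it is in no way the ALICE TPC tracker, which fits full helical trajectories:

```python
# Minimal one-dimensional Kalman filter (constant state, position-only
# measurements) illustrating the predict/update cycle; the ALICE TPC
# tracker applies the same principle to full track parameters.
def kalman_1d(measurements, meas_var, process_var=0.0, x0=0.0, p0=1e6):
    x, p = x0, p0                  # state estimate and its variance
    for z in measurements:
        p = p + process_var        # predict: state unchanged, variance grows
        k = p / (p + meas_var)     # Kalman gain
        x = x + k * (z - x)        # update with the measurement residual
        p = (1.0 - k) * p          # updated (reduced) variance
    return x, p

# Noisy measurements of a true value of 5.0
est, var = kalman_1d([5.1, 4.8, 5.3, 4.9, 5.0], meas_var=0.04)
print(f"estimate = {est:.3f}, variance = {var:.4f}")
```

With a diffuse prior and no process noise, the filter converges to the running mean of the measurements, with variance shrinking as 1/N.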
Type: Parallel Session: Online Computing
Track: Online Computing (track 1)
A Large Ion Collider Experiment (ALICE) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). Since its successful start-up in 2010, the LHC has been performing outstandingly, providing the experiments with long periods of stable collisions and an integrated luminosity that greatly exceeds the planne ... More
Presented by Mr. Vasco CHIBANTE BARROSO on 21 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
ALICE is one of the four main experiments at the CERN Large Hadron Collider (LHC) in Geneva. The Alice Detector Control System (DCS) is responsible for the operation and monitoring of the 18 detectors of the experiment and of central systems, for collecting and managing alarms, data and commands. Furthermore, it is the central tool to monitor and verify the beam mode and conditions in order to en ... More
Presented by Ombretta PINAZZA on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The emergence of hybrid GPU-accelerated clusters in the supercomputing landscape is now a matter of fact. In this framework we proposed a new INFN initiative, the QUonG project, aiming to deploy a high performance computing system dedicated to scientific computations, leveraging commodity multi-core processors coupled with last-generation GPUs. The multi-node interconnection system is based on a po ... More
Presented by Laura TOSORATTO on 22 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
The Distributed Data Management System DQ2 is responsible for the global management of petabytes of ATLAS physics data. DQ2 has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle, as RDBMS are well suited to enforcing data integrity in online transaction processing applications. Despite these advantages, concerns have been raised recently about the scalability of data w ... More
Presented by Mario LASSNIG on 21 May 2012 at 17:50
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The ATLAS Distributed Data Management project DQ2 is responsible for the replication, access and bookkeeping of ATLAS data across more than 100 distributed grid sites. It also enforces data management policies decided on by the collaboration and defined in the ATLAS computing model. The DQ2 deletion service is one of the most important DDM services. This distributed service interacts with 3rd par ... More
Presented by Danila OLEYNIK on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Efficient distribution of physics data over ATLAS grid sites is one of the most important tasks for user data processing. ATLAS' initial static data distribution model over-replicated some unpopular data and under-replicated popular data, creating heavy disk space loads while under-utilizing some processing resources due to low data availability. Thus, a new data distribution mechanism was impleme ... More
Presented by Mikhail TITOV on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
This talk details a variety of monitoring tools used within ATLAS Distributed Computing during the first 2 years of LHC data taking. We discuss tools used to monitor data processing from the very first steps performed at the Tier-0 facility at CERN after data is read out of the ATLAS detector, through data transfers to the ATLAS computing centers distributed world-wide. We present an overview of ... More
Presented by Jaroslava SCHOVANCOVA on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
This paper will summarize operational experience and improvements in ATLAS computing infrastructure during 2010 and 2011. ATLAS has had 2 periods of data taking, with many more events recorded in 2011 than in 2010. It ran 3 major reprocessing campaigns. The activity in 2011 was similar to that in 2010, but scalability issues had to be addressed due to the increase in luminosity and trigger rate. B ... More
Presented by Dr. Stephane JEZEQUEL, Graeme Andrew STEWART on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
ATLAS Distributed Computing organized 3 teams to support data processing at Tier-0 facility at CERN, data reprocessing, data management operations, Monte Carlo simulation production, and physics analysis at the ATLAS computing centers located world-wide. In this talk we describe how these teams ensure that the ATLAS experiment data is delivered to the ATLAS physicists in a timely manner in the gla ... More
Presented by Jaroslava SCHOVANCOVA on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The production system for Grid Data Processing (GDP) handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system empowered further data processing steps on the Grid performed by dozens of ATLAS physics groups with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge manag ... More
Presented by Pavel NEVSKI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The ATLAS data quality software infrastructure provides tools for prompt investigation of and feedback on collected data and propagation of these results to analysis users. Both manual and automatic inputs are used in this system. In 2011, we upgraded our framework to record all issues affecting the quality of the data in a manner which allows users to extract as much information (of the data) fo ... More
Presented by Steven Andrew FARRELL on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The ATLAS Distributed Computing (ADC) project delivers production-quality tools and services for ATLAS offline activities such as data placement and data processing on the Grid. The system has been capable of sustaining, with large contingency, the computing activities needed in the first years of LHC data taking, and has demonstrated flexibility in reacting promptly to new challenges. Development a ... More
Presented by Collaboration ATLAS on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Collaborative tools (track 6)
The newfound ability of Social Media to transform public communication back to a conversational nature provides HEP with a powerful tool for Outreach and Communication. By far, the most effective component of nearly any visit or public event is the fact that the students, teachers, media, and members of the public have a chance to meet and converse with real scientists. While more than 30,000 ... More
Presented by Steven GOLDFARB on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Monitoring of the large-scale data processing of the ATLAS experiment includes monitoring of production and user analysis jobs. Experiment Dashboard provides a common job monitoring solution, which is shared by ATLAS and CMS experiments. This includes an accounting portal as well as real-time monitoring. Dashboard job monitoring for ATLAS combines information from the Panda job processing DB, ... More
Presented by Laura SARGSYAN on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The ATLAS Distributed Computing activities have so far concentrated on the "central" part of the experiment's computing system, namely the first 3 tiers (the CERN Tier-0, 10 Tier-1 centers and over 60 Tier-2 sites). Many ATLAS institutes and national communities have deployed (or intend to deploy) Tier-3 facilities. Tier-3 centers consist of non-pledged resources, which are usually dedicated to data an ... More
Presented by Danila OLEYNIK on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
Software packaging is an indispensable part of the build process and a prerequisite for deployment. The full ATLAS software stack consists of TDAQ, HLT, and Offline software. These software groups depend on some 80 external software packages. We present the tools, the PackDist package, developed and used to package all this software except for the TDAQ project. PackDist is based on and driven by CMT, the ATLAS software con ... More
Presented by Grigori RYBKIN on 24 May 2012 at 13:30
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
Multivariate classification methods based on machine learning techniques are commonly used for data analysis at the LHC in order to look for signatures of new physics beyond the standard model. A large variety of these classification techniques are contained in the Toolkit for Multivariate Analysis (TMVA) which enables training, testing, performance evaluation and application of the chosen methods ... More
Presented by Andrew John WASHBROOK on 24 May 2012 at 17:50
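One of the classifiers TMVA provides is the Fisher discriminant. The toy below trains one from scratch on invented two-variable signal and background samples, purely to illustrate the technique; it does not use the TMVA API:

```python
import random

# Toy two-variable Fisher discriminant (one of the methods available in
# TMVA), trained on simulated "signal" and "background" samples.
# Illustrative only; all numbers are invented.
random.seed(1)
sig = [(random.gauss(1.0, 0.5), random.gauss(1.0, 0.5)) for _ in range(2000)]
bkg = [(random.gauss(-1.0, 0.5), random.gauss(-1.0, 0.5)) for _ in range(2000)]

def mean2(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def scatter(pts, m):
    # within-class scatter matrix entries (unnormalised covariance)
    sxx = sum((p[0] - m[0]) ** 2 for p in pts)
    syy = sum((p[1] - m[1]) ** 2 for p in pts)
    sxy = sum((p[0] - m[0]) * (p[1] - m[1]) for p in pts)
    return sxx, syy, sxy

ms, mb = mean2(sig), mean2(bkg)
sxx, syy, sxy = (a + b for a, b in zip(scatter(sig, ms), scatter(bkg, mb)))
det = sxx * syy - sxy * sxy
# w = S_W^{-1} (m_sig - m_bkg), inverting the 2x2 scatter matrix by hand
dx, dy = ms[0] - mb[0], ms[1] - mb[1]
w = ((syy * dx - sxy * dy) / det, (sxx * dy - sxy * dx) / det)

def score(p):
    return w[0] * p[0] + w[1] * p[1]

cut = (score(ms) + score(mb)) / 2.0          # midpoint between class means
eff = sum(score(p) > cut for p in sig) / len(sig)
rej = sum(score(p) <= cut for p in bkg) / len(bkg)
print(f"signal efficiency {eff:.2f}, background rejection {rej:.2f}")
```

The scoring step, being independent per event, is exactly the kind of work that parallelizes well across cores or a GPU.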
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The ATLAS Distributed Data Management system requires accounting of its contents at the metadata layer. This presents a hard problem due to the large scale of the system and the high rate of concurrent modifications of data. The system must efficiently account more than 80PB of disk and tape that store upwards of 500 million files across 100 sites globally. In this work a generic accounting sys ... More
Presented by Mario LASSNIG on 24 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
The LHCb software is based on the Gaudi framework, on top of which several large and complex software applications are built. The LHCb experiment is now in the active phase of collecting and analyzing data, and significant performance problems arise in the Gaudi-based software, from the High Level Trigger (HLT) programs to the data analysis framework (DaVinci). It’s not easy to fin ... More
Presented by Alexander MAZUROV on 22 May 2012 at 14:45
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
Since 2009 when the LHC came back to active service, the Data Quality Monitoring (DQM) team was faced with the need to homogenize and automate operations across all the different environments within which DQM is used for data certification. The main goal of automation is to reduce operator intervention at the minimum possible level, especially in the area of DQM files management, where long-ter ... More
Presented by Luis Ignacio LOPERA GONZALEZ on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
WMAgent is the core component of the CMS workload management system. One of the features of this job managing platform is a configurable messaging system aimed at generating, distributing and processing alerts: short messages describing a given alert-worthy informational or pathological condition. Apart from the framework's sub-components running within the WMAgent instances, there is a stand-alon ... More
Presented by Zdenek MAXA on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
Physics data libraries play an important role in Monte Carlo simulation systems: they provide fundamental atomic and nuclear parameters, and tabulations of basic physics quantities (cross sections, correction factors, secondary particle spectra etc.) for particle transport. This report summarizes recent efforts for the improvement of the accuracy of physics data libraries, concerning two comple ... More
Presented by Hee SEO on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The AliEn workload management system is based on a central job queue which holds all tasks that have to be executed. The job brokering model itself is based on pilot jobs: the system submits generic pilots to the computing centres' batch gateways, and the assignment of a real job is done only when the pilot wakes up on the worker node. The model facilitates a flexible fair share user job distribut ... More
Presented by Pablo SAIZ on 22 May 2012 at 13:30
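The pilot-job model described above can be sketched in a few lines: the payload is bound to the pilot only after the pilot is already running on a worker node. All names here are illustrative, not the AliEn interfaces:

```python
import queue

# Minimal sketch of late job binding in a pilot-job system: generic pilots
# are submitted to sites, and a real job is matched to a pilot only when
# the pilot wakes up on the worker node. Illustrative names, not AliEn.
central_queue = queue.Queue()
for spec in [{"name": "mc-prod-1", "needs": "alice-sw"},
             {"name": "user-analysis-7", "needs": "alice-sw"}]:
    central_queue.put(spec)

def pilot_wakes_up(site_capabilities):
    """Called on the worker node: pull a matching job, or stay idle."""
    try:
        job = central_queue.get_nowait()
    except queue.Empty:
        return None                 # nothing queued: pilot exits quietly
    if job["needs"] in site_capabilities:
        return job                  # real payload assigned only now
    central_queue.put(job)          # no match: requeue for another pilot
    return None

job = pilot_wakes_up({"alice-sw"})
print(job["name"] if job else "idle pilot")
```

Because matching happens at wake-up time, the broker can apply fair-share policy against the live state of the queue rather than against a forecast made at submission time.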
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
AliEn is the GRID middleware used by the ALICE collaboration. It provides all the components that are needed to manage the distributed resources. AliEn is used for all the computing workflows of the experiment: Monte Carlo production, data replication, reconstruction, and organized or chaotic user analysis. Moreover, AliEn is also being used by other experiments like PANDA and CBM. The main c ... More
Presented by Pablo SAIZ on 21 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The CMS all-silicon tracker consists of 16588 modules. Therefore its alignment procedures require sophisticated algorithms. Advanced tools of computing, tracking and data analysis have been deployed for reaching the targeted performance. Ultimate local precision is now achieved by the determination of sensor curvatures, challenging the algorithms to determine about 200k parameters simultaneously. ... More
Presented by Joerg BEHR on 24 May 2012 at 13:30
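A toy version of a single alignment step (the CMS procedure fits roughly 200k parameters simultaneously; this shows only the principle for one module): the chi-square of track-hit residuals is minimized in closed form by their mean, which estimates the module's misalignment. All numbers are invented.

```python
import random
import statistics

# One-parameter alignment toy (not the CMS algorithm): the least-squares
# estimate of a single module's shift is simply the mean of the track-hit
# residuals, since chi^2 = sum((r_i - shift)^2 / sigma^2) is minimised there.
random.seed(7)
true_shift_um = 25.0        # unknown misalignment of one module
hit_resolution_um = 10.0    # single-hit measurement resolution
residuals = [random.gauss(true_shift_um, hit_resolution_um)
             for _ in range(5000)]

estimated_shift = statistics.mean(residuals)
print(f"estimated shift: {estimated_shift:.1f} um")
```

The real problem couples all modules through shared tracks, which is what turns this closed-form average into a simultaneous fit of ~200k correlated parameters.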
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The Mice Analysis User Software (MAUS) for the Muon Ionisation Cooling Experiment (MICE) is a new simulation and analysis framework based on best-practice software design methodologies. It replaces G4MICE as it offers new functionality and incorporates an improved design structure. A new and effective control and management system has been created for handling the simulation geometry within MAUS ... More
Presented by Matthew LITTLEFIELD on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The PH/SFT group at CERN is responsible for developing, releasing and deploying some of the software packages used in the data processing systems of CERN experiments, in particular those at the LHC. They include ROOT, GEANT4, CernVM, Generator Services, and Multi-core R&D (http://sftweb.cern.ch/). We have already submitted a number of abstracts for oral presentations at the conference. Here we req ... More
Presented by Dr. John HARVEY on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The ATLAS event-level metadata infrastructure supports applications that range from data quality monitoring, anomaly detection, and fast physics monitoring to event-level selection and navigation to file-resident event data at any processing stage, from raw through analysis object data, in globally distributed analysis. A central component of the infrastructure is a distributed TAG database, whic ... More
Presented by Dr. Jack CRANSHAW on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Collaborative tools (track 6)
The LHCb collaboration consists of roughly 700 physicists from 52 institutes and universities. Most of the collaborating physicists - including subdetector experts - are not permanently based at CERN. This paper describes the architecture used to publish data internal to the LHCb experiment control- and data acquisition system to the world wide web. Collaborators can access the online (sub-)system ... More
Presented by Markus FRANK on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
Accurate and detailed descriptions of HEP detectors are proving to be crucial elements of the software chains used for simulation, visualization and reconstruction: it is therefore of paramount importance to have available, and to deploy, generic detector description tools which allow for precise modeling, visualization, visual debugging and interactivity, and which can be used t ... More
Presented by Jochen MEYER on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
An automated virtual test environment is a way to improve testing, validation and verification activities when several deployment scenarios must be considered. Such a solution has been designed and developed at INFN CNAF to improve the software development life cycle and to optimize the deployment of a new software release (sometimes delayed by the difficulties met during the installation and confi ... More
Presented by Luca DELL'AGNELLO on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The conversion of photons into electron-positron pairs in the detector material is a nuisance in the event reconstruction of high energy physics experiments, since the measurement of the electromagnetic component of interaction products is degraded. Nonetheless, this unavoidable detector effect can also be extremely useful. The reconstruction of photon conversions can be used to probe the dete ... More
Presented by Dr. Domenico GIORDANO on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
ALICE, as well as the other experiments at the CERN LHC, has been building a distributed data management infrastructure since 2002. Experience gained during years of operations with different types of storage managers deployed over this infrastructure has shown that the most adequate storage solution for ALICE is the native XRootD manager developed within a CERN - SLAC collaboration. The XRootD st ... More
Presented by Dr. Dagmar ADAMOVA, Mr. Jiri HORKY on 22 May 2012 at 13:30
Type: Parallel Session: Computer Facilities, Production Grids and Networking
Track: Computer Facilities, Production Grids and Networking (track 4)
We describe recent I/O testing frameworks that we have developed and applied within the UK GridPP Collaboration, the ATLAS experiment and the DPM team, for a variety of distinct purposes. These include benchmarking vendor supplied storage products, discovering scaling limits of SRM solutions, tuning of storage systems for experiment data analysis, evaluating file access protocols, and exploring IO ... More
Presented by Wahid BHIMJI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
DIRAC is the Grid solution designed to support LHCb production activities as well as user data analysis. Based on a service-oriented architecture, DIRAC consists of many cooperating distributed services and agents delivering the workload to the Grid resources. Services accept requests from agents and running jobs, while agents run as light-weight components, fulfilling specific goals. Services mai ... More
Presented by Daniela REMENSKA on 24 May 2012 at 13:30
Session: Plenary
Presented by Mr. Jacek BECLA on 23 May 2012 at 10:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
DESY has started to deploy modern, state-of-the-art, industry-based, scale-out file services, together with certain extensions, as a key component in dedicated LHC analysis environments such as the National Analysis Facility (NAF) @DESY. In a technical cooperation with IBM, we will add identified critical features to the standard SONAS product line of IBM to make the system best suited for the already h ... More
Presented by Mr. Martin GASTHUBER on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
A many-parameter fit to extract the proton structure functions from the Neutral Current deep-inelastic scattering cross sections, measured from the data collected at the HERA ep collider with the ZEUS detector, will be presented. The structure functions F_2 and F_L are extracted as a function of Bjorken-x in bins of virtuality Q^2. The fit is performed with the Bayesian Analysis Toolkit (BAT) which ... More
Presented by Julia GREBENYUK on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
In the NOvA experiment, the Detector Controls System (DCS) provides a method for controlling and monitoring important detector hardware and environmental parameters. It is essential for operating the detector and is required to have access to roughly 370,000 independent programmable channels via more than 11,600 physical devices. In this paper, we demonstrate an application of Control System S ... More
Presented by Gennadiy LUKHANIN, Martin FRANK on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The Job Execution Monitor (JEM), a job-centric grid job monitoring software, is actively developed at the University of Wuppertal. It supports Grid-based physics analysis and Monte Carlo event production for the ATLAS experiment by monitoring job progress and grid worker node health. Using message passing techniques, the gathered data can be supervised in real time by users, site admins and shift ... More
Presented by Sergey KALININ on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The Cherenkov Telescope Array (CTA) – an array of many tens of Imaging Atmospheric Cherenkov Telescopes deployed on an unprecedented scale – is the next generation instrument in the field of very high energy gamma-ray astronomy. CTA will operate as an open observatory providing data products to the scientific community. An average data stream of some GB/s for about 1000 hours of observation p ... More
Presented by Luisa ARRABITO on 22 May 2012 at 13:30
Type: Parallel Session: Online Computing
Track: Online Computing (track 1)
The Trigger and DAQ (TDAQ) system of the ATLAS experiment is a very complex distributed computing system, composed of O(10000) applications running on more than 2000 computers. The TDAQ Controls system has to guarantee the smooth and synchronous operation of all TDAQ components and has to provide the means to minimize the downtime of the system caused by runtime failures, which are inevit ... More
Presented by Dr. Giuseppe AVOLIO on 21 May 2012 at 14:45
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
Modern particle physics experiments use short pieces of code called "triggers" in order to make rapid decisions about whether incoming data represents potentially interesting physics or not. Such decisions are irreversible, and while it is extremely important that they are made correctly, little use has been made in the community of formal verification methodology. The goal of this rese ... More
Presented by Prof. Swain JOHN on 24 May 2012 at 13:30
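As a toy illustration of the idea in the abstract above, one can exhaustively check a safety property of a simple trigger decision function over a small, enumerable input space — the brute-force end of the formal-verification spectrum. The functions, thresholds and the property below are all hypothetical, invented for illustration; they are not from any real experiment's trigger.

```python
# Hypothetical trigger predicate: accept events with enough hits OR a
# very energetic deposit. (Illustrative thresholds, not a real trigger.)
def trigger(n_hits, total_energy_gev):
    return n_hits >= 4 or total_energy_gev >= 50.0

# A tighter reference selection that the trigger is required to contain:
# every event the reference accepts must also pass the trigger.
def reference(n_hits, total_energy_gev):
    return n_hits >= 6 and total_energy_gev >= 50.0

# Exhaustively verify the containment property over the whole (small)
# discretized input space.
ok = all(trigger(h, e)
         for h in range(0, 20)
         for e in [0.0, 25.0, 50.0, 100.0]
         if reference(h, e))
```

Real formal verification replaces the enumeration with a proof over the full input space (e.g. via model checking or SMT solving), but the property being checked has the same shape.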
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The CMS experiment has been designed with a 2-level trigger system: the Level 1 Trigger, implemented using FPGA and custom ASIC technology, and the High Level Trigger (HLT), implemented running a streamlined version of the CMS offline reconstruction software on a cluster of commercial rack-mounted computers, comprising thousands of CPUs. The design of a software trigger system requires a tradeoff ... More
Presented by Andrea BOCCI on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The rising instantaneous luminosity of the LHC poses an increasing challenge to the pattern recognition algorithms for track reconstruction at the ATLAS Inner Detector Trigger. We will present the performance of these algorithms in terms of signal efficiency, fake tracks and execution time, as a function of the number of proton-proton collisions per bunch-crossing, in 2011 data and in simulation. ... More
Presented by Pauline BERNAT on 24 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
The LHCb online system relies on a large and heterogeneous IT infrastructure made up of thousands of servers on which many different applications are running. They run a great variety of tasks: critical ones such as data taking and secondary ones like web servers. Administering such a system and making sure it is working properly represents a very important workload for the small exper ... More
Presented by Christophe HAEN on 22 May 2012 at 15:10
Type: Poster Session: Poster Session
Track: Collaborative tools (track 6)
In 2010, the LHC continually produced 7 TeV proton and heavy-ion collisions, generating a huge amount of data, which was analyzed and reported in several studies. Since then, physicists have been bringing out papers and conference notes announcing results and achievements. During 2010, 37 papers and 102 conference notes were published, and by September 2011 there were already 131 ... More
Presented by Luiz Fernando CAGIANO PARODI DE FRIAS on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The ATLAS experiment at the CERN LHC is one of the largest users of grid computing infrastructure, which is a central part of the experiment's computing operations. Considerable efforts have been made to use grid technology in the most efficient and effective way, including the use of a pilot job based workload management framework. In this model the experiment submits 'pilot' jobs to sites with ... More
Presented by Dr. Jose CABALLERO BEJAR on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The High-Level-Trigger (HLT) cluster of the ALICE experiment is a computer cluster with about 200 nodes and 20 infrastructure machines. In its current state, the cluster consists of nearly 10 different configurations of nodes in terms of installed hardware, software and network structure. In such a heterogeneous environment with a distributed application, information about the actual configuration ... More
Presented by Jochen ULRICH on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The automation of operations is essential to reduce manpower costs and improve the reliability of the system. The Site Status Board (SSB) is a framework which allows Virtual Organizations to monitor their computing activities at distributed sites and to evaluate site performance. The ATLAS experiment intensively uses SSB for the distributed computing shifts, for estimating data processing and dat ... More
Presented by Mr. Erekle MAGRADZE on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
Cobbler is a network-based Linux installation server, which, via a choice of web or CLI tools, glues together PXE/DHCP/TFTP and automates many associated deployment tasks. It empowers a facility's systems administrators to write scriptable and modular code, which can pilot the OS installation routine to proceed unattended and automatically, even across heterogeneous hardware. These tools make it ... More
Presented by Mr. James PRYOR on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The Muon Ionization Cooling Experiment (MICE) is a demonstration experiment to prove the feasibility of cooling a beam of muons for use in a Neutrino Factory and/or Muon Collider. The MICE cooling channel is a section of a modified Study II cooling channel which will provide a 10% reduction in beam emittance. In order to ensure a reliable measurement, MICE will measure the beam emittance before an ... More
Presented by Pierrick HANLET on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The main goals of data analysis are to infer the parameters of models from data, to draw conclusions on the validity of models, and to compare their predictions, allowing the most appropriate model to be selected. The Bayesian Analysis Toolkit, BAT, is a tool developed to evaluate the posterior probability distribution for models and their parameters. It is centered around Bayes' Theorem and is ... More
Presented by Dr. Daniel KOLLAR on 24 May 2012 at 13:30
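The core operation the abstract describes — evaluating a posterior via Bayes' theorem, p(θ|D) ∝ p(D|θ) p(θ) — can be sketched in a few lines. This is a minimal illustration of the idea, not BAT's actual API: the Gaussian likelihood, flat prior and grid evaluation below are assumptions chosen for clarity (BAT itself uses Markov Chain Monte Carlo for higher-dimensional problems).

```python
import math

# Gaussian likelihood with unit variance and mean theta (assumed model).
def log_likelihood(theta, data):
    return sum(-0.5 * (x - theta) ** 2 - 0.5 * math.log(2 * math.pi)
               for x in data)

# Flat prior on [-10, 10] (assumed).
def log_prior(theta):
    return 0.0 if -10.0 <= theta <= 10.0 else float("-inf")

def posterior_on_grid(data, grid):
    """Normalised posterior weights on a 1D parameter grid."""
    logs = [log_likelihood(t, data) + log_prior(t) for t in grid]
    m = max(logs)
    unnorm = [math.exp(l - m) for l in logs]   # subtract max for stability
    z = sum(unnorm)
    return [u / z for u in unnorm]

data = [0.9, 1.1, 1.0, 1.2]
grid = [i / 100.0 for i in range(-300, 301)]
post = posterior_on_grid(data, grid)
best = grid[post.index(max(post))]             # posterior mode
```

With a flat prior the posterior mode coincides with the maximum-likelihood estimate, here the sample mean.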
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
A job submission and management tool is one of the necessary components in any distributed computing system. Such a tool should provide a user-friendly interface for physics production groups and ordinary analysis users to access heterogeneous computing resources, without requiring knowledge of the underlying grid middleware. Ganga, with its common framework and customizable plug-in structure, is s ... More
Presented by Dr. Xiaomei ZHANG on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
For a couple of years, a team at CERN and partners from the Citizen Cyberscience Centre (CCC) have been working on a project that enables general physics simulation programs to run in a virtual machine on volunteer PCs around the world. The project uses the Berkeley Open Infrastructure for Network Computing (BOINC) framework. Based on CernVM and the job management framework Co-Pilot, this projec ... More
Presented by Alvaro GONZALEZ ALVAREZ on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The ATLAS High Level Trigger (HLT) is organized in two trigger levels running different selection algorithms on heterogeneous farms composed of off-the-shelf processing units. The processing units have varying computing power and can be integrated using diverse network connectivity. The ATLAS working conditions are changing mainly due to the constant increase of the LHC instantaneous luminosity, a ... More
Presented by Marius Tudor MORAR on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
In order to search for new physics beyond the Standard Model, the next-generation B-factory experiment, Belle II, will collect a huge data sample that is a challenge for computing systems. The Belle II experiment, which should commence data collection in 2015, expects data rates 50 times higher than those of Belle. In order to handle this amount of data, we need a new data handling system based o ... More
Presented by Prof. Kihyeon CHO on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
A next-generation B-factory experiment, Belle II, is now being constructed at KEK in Japan. The upgraded accelerator, SuperKEKB, is designed to reach a maximum luminosity of 8 × 10^35 cm^−2 s^−1, a factor of 40 higher than the current world record. As a consequence, the Belle II detector yields a data stream of ~1 MB events at a Level 1 rate of 30 kHz. The Belle II High Level T ... More
Presented by Soohyung LEE on 24 May 2012 at 13:30
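The two numbers quoted in the abstract fix the input bandwidth the Belle II HLT must absorb; a quick back-of-envelope check (using only the figures given above, with 1 MB ≈ 10^6 bytes for round numbers):

```python
# Rates quoted in the abstract: ~1 MB per event at a 30 kHz Level 1 rate.
event_size_mb = 1.0      # ~1 MB per event
l1_rate_hz = 30_000      # 30 kHz Level 1 accept rate

# Raw input bandwidth into the HLT, in GB/s (decimal units).
bandwidth_gb_per_s = event_size_mb * l1_rate_hz / 1000.0
```

That is roughly 30 GB/s of raw event data entering the HLT farm, which is why the trigger must reduce the rate in software before storage.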
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
In addition to the physics data generated each day from the CMS detector, the experiment also generates vast quantities of supplementary log data. From reprocessing logs to transfer logs this data could shed light on operational issues and assist with reducing inefficiencies and eliminating errors if properly stored, aggregated and analyzed. The term "big data" has recently taken the spotlight wit ... More
Presented by Paul ROSSMAN on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
This presentation will cover the work conducted within the ScotGrid Glasgow Tier-2 site. It will focus on the multi-tiered network security architecture developed on the site to augment Grid site server security and will discuss the variety of techniques used including the utilisation of Intrusion Detection systems, logging and optimising network connectivity within the infrastructure. Also th ... More
Presented by Dr. David CROOKS on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
Bug tracking is a process which comprises activities of reporting, documenting, reviewing, planning, and fixing software bugs. While there exist many studies on the usage of bug tracking tools and procedures in open source software, the situation in high energy physics has never been looked at in a systematic way. In our study we have compared and analyzed several scientific and non-scientific so ... More
Presented by Benedikt HEGNER on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
A Consortium between four LHC Computing Centers (Bari, Milano, Pisa and Trieste) was formed in 2010 to prototype analysis-oriented facilities for CMS data analysis, using a grant from the Italian Ministry of Research. The Consortium aims at the realization of an ad-hoc infrastructure to ease analysis activities on the huge data set collected by the CMS Experiment at the LHC Collider ... More
Presented by Giacinto DONVITO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The experimental high energy physics group at the University of Melbourne is a member of the ATLAS, Belle and Belle II collaborations. We maintain a local data centre which enables users to test pre-production code and to do final stage data analysis. Recently the Australian National eResearch Collaboration Tools and Resources (NeCTAR) organisation implemented a Research Cloud based on OpenStack m ... More
Presented by Martin SEVIOR on 22 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
We present CMS' experience in porting its full offline software stack to MacOSX. In the first part we will focus on the system level issues encountered while doing the port, in particular with respect to the different behavior of the compiler and linker in handling common symbols. In the second part we present our progress with an alternative approach of distributing large software projects which ... More
Presented by Mr. Giulio EULISSE
Type: Poster Session: Poster Session
Track: Collaborative tools (track 6)
Over the last few years, we have seen the broadcast industry moving to mobile devices and to the broadband Internet delivering HD quality. To keep up with the trends, we deployed a new streaming infrastructure. We are now delivering live and on-demand video to all major platforms like Windows, Linux, Mac, iOS and Android running on PC, Smart Phone, Tablet or TV. To optimize the viewing quality an ... More
Presented by Marek DOMARACKY on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The CMS Analysis Tools model has now been used robustly in a plethora of physics papers. This model is examined to investigate successes and failures as seen by the analysts of recent papers.
Presented by Prof. Sudhir MALIK on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
Cathode strip chambers (CSCs) make up the endcap muon system of the CMS experiment at the LHC. Two years of data taking have proven that various online systems like the Detector Control System (DCS), Data Quality Monitoring (DQM), Trigger, Data Acquisition (DAQ) and other specialized applications are doing their jobs very well. But the need for better integration between these systems is starting to em ... More
Presented by Evaldas JUSKA on 24 May 2012 at 13:30
Type: Parallel Session: Computer Facilities, Production Grids and Networking
Track: Computer Facilities, Production Grids and Networking (track 4)
The CMS experiment operates a distributed computing infrastructure whose performance depends heavily on the fast and smooth distribution of data between the different CMS sites. Data must be transferred from the Tier-0 (CERN) to the Tier-1s for storage and archiving, and both speed and good quality are vital to avoid overflowing the CERN storage buffers. At the same time, processed data has to be distributed from T ... More
Presented by Rapolas KASELIS on 21 May 2012 at 15:10
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The CMS simulation, based on the Geant4 toolkit, has been operational within the new CMS software framework for more than four years. The description of the detector including the forward regions has been completed and detailed investigation of detector positioning and material budget has been carried out using collision data. Detailed modelling of detector noise has been performed and validated w ... More
Presented by Sunanda BANERJEE on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The Tier-0 processing system is the initial stage of the multi-tiered computing system of CMS. It is responsible for the first processing steps of data from the CMS Experiment at CERN. This talk covers the complete overhaul (rewrite) of the system for the 2012 run, to bring it into line with the new CMS Workload Management system, improving scalability and maintainability for the next few years.
Presented by Dirk HUFNAGEL on 24 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
The CMS experiment is made up of many detectors which in total sum to more than 75 million channels. The online database stores the configuration data used to configure the various parts of the detector and bring it into all possible running states. The database also stores the conditions data, detector monitoring parameters of all channels (temperatures, voltages), detector quality information, bea ... More
Presented by Dr. Andreas PFEIFFER on 21 May 2012 at 16:35
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
In operating a complex high energy physics experiment such as CMS, two of the important issues are to record high quality data as efficiently as possible and, correspondingly, to have well validated and certified data in a timely manner for physics analyses. Integrated and user-friendly monitoring systems and coherent information flow play an important role to accomplish this. The CMS integrated c ... More
Presented by Kaori MAESHIMA on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The CMS tracking code is organized in several levels, known as 'iterative steps', each optimized to reconstruct a class of particle trajectories, such as those of particles originating from the primary vertex or displaced tracks from particles produced at secondary vertices. Each iterative step consists of seeding, pattern recognition and fitting by a Kalman filter, and a final filtering and clea ... More
Presented by Giacomo SGUAZZONI on 24 May 2012 at 13:30
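The Kalman-filter fit mentioned in the abstract alternates a predict step (propagate the track state to the next detector layer) with an update step (pull the state toward the measured hit, weighted by the uncertainties). A minimal sketch of that cycle, reduced to one dimension with a random-walk process model — an illustration of the technique, not the CMS implementation:

```python
def kalman_1d(measurements, meas_var, process_var, x0=0.0, p0=1e6):
    """Run a 1D Kalman filter over a list of hit positions.

    x is the state estimate (track position), p its variance. A large p0
    means the first hit essentially initialises the state.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + process_var        # predict: uncertainty grows between hits
        k = p / (p + meas_var)     # gain: relative weight of the new hit
        x = x + k * (z - x)        # update: pull estimate toward the hit
        p = (1.0 - k) * p          # updated, reduced variance
        estimates.append(x)
    return estimates

# Illustrative hits scattered around a true position of 1.0 (assumed data).
hits = [1.2, 0.8, 1.1, 0.9, 1.0]
track = kalman_1d(hits, meas_var=0.04, process_var=1e-6)
```

With a negligible process noise the filter reduces to a recursive average of the hits, which is why the final estimate lands on the sample mean; real track fits use a multi-dimensional state (position, direction, curvature) and propagate it through the magnetic field and material.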
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
After years of development, the CMS distributed computing system is now in full operation. The LHC continues to set records for instantaneous luminosity, and CMS records data at 300 Hz. Because of the intensity of the beams, there are multiple proton-proton interactions per beam crossing, leading to larger and larger event sizes and processing times. The CMS computing system has responded admirabl ... More
Presented by Kenneth BLOOM on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
In CMS Computing the highest priorities for analysis tools are the improvement of the end users' ability to produce and publish reliable samples and analysis results as well as a transition to a sustainable development and operations model. To achieve these goals CMS decided to incorporate analysis processing into the same framework as the data and simulation processing. This strategy foresees tha ... More
Presented by Daniele SPIGA on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The European Middleware Initiative (EMI) project aims to deliver a consolidated set of middleware products based on the four major middleware providers in Europe - ARC, dCache, gLite and UNICORE. The CREAM (Computing Resource Execution And Management) Service, a service for job management operations at the Computing Element (CE) level, is one of the software products that are part of the EMI middleware ... More
Presented by Mr. Massimo SGARAVATTO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The LHCb experiment is dedicated to searching for New Physics effects in the heavy flavour sector, precise measurements of CP violation and rare heavy meson decays. Precise tracking and vertexing around the interaction point is crucial in achieving these physics goals. The LHCb VELO (VErtex LOcator) silicon micro-strip detector is the highest precision vertex detector at the LHC and is locate ... More
Presented by Karol HENNESSY on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The BESIII TOF detector system, based on plastic scintillation counters, consists of a double-layer barrel and two single-layer end caps. With the time calibration, the double-layer barrel TOF achieves a 78 ps time resolution for electrons, while the end caps achieve about 110 ps for muons. The attenuation length and effective velocity calibrations and the TOF reconstruction are also described. The Kalman filter method is ... More
Presented by Dr. Shengsen SUN on 24 May 2012 at 13:30
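The effective-velocity calibration mentioned above corrects for the time the scintillation light spends propagating along the counter before reaching the photomultiplier. A minimal sketch of that correction — all numbers below are illustrative assumptions, not BESIII calibration constants:

```python
def corrected_tof(t_measured_ns, z_cm, v_eff_cm_per_ns, t_offset_ns):
    """Remove the light-propagation delay and the channel time offset.

    A hit at longitudinal position z from the readout end arrives at the
    PMT z / v_eff later than the particle crossing; calibration subtracts
    this together with the per-channel electronics offset.
    """
    return t_measured_ns - z_cm / v_eff_cm_per_ns - t_offset_ns

# Illustrative values (assumed):
t_raw = 12.5     # ns, raw TDC time
z = 60.0         # cm, hit position from the readout end
v_eff = 15.0     # cm/ns, effective light velocity in the scintillator
offset = 3.5     # ns, channel offset from calibration

tof = corrected_tof(t_raw, z, v_eff, offset)
```

In practice v_eff and the offsets are themselves fitted from calibration samples, and the attenuation-length calibration applies the analogous correction to the pulse amplitude.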
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
It is common at research institutions to maintain multiple clusters that represent different owners or generations of hardware, or that fulfill different needs and policies. Many of these clusters are consistently underutilized, while researchers on campus could greatly benefit from these unused capabilities. By leveraging principles from the Open Science Grid it is now possible to utilize these r ... More
Presented by Derek John WEITZEL on 22 May 2012 at 13:30
Type: Parallel Session: Computer Facilities, Production Grids and Networking
Track: Computer Facilities, Production Grids and Networking (track 4)
Managing the infrastructure of a large and complex data center can be extremely difficult without taking advantage of automated services. Puppet is a seasoned, open-source tool designed for enterprise-class centralized configuration management. At the RHIC/ATLAS Computing Facility at Brookhaven National Laboratory, we have adopted Puppet as part of a suite of tools, including Git, GLPI, and some c ... More
Presented by Jason Alexander SMITH on 22 May 2012 at 15:10
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
In the ATLAS Online computing farm, the majority of the systems are network booted - they run an operating system image provided over the network by a Local File Server. This method guarantees the uniformity of the farm and allows very fast recovery in case of issues with the local scratch disks. The farm is not homogeneous, and in order to manage the diversity of roles, functionality and hardware of diff ... More
Presented by Georgiana Lavinia DARLEA on 22 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
CernVM Co-Pilot is a framework for instantiating an ad-hoc computing infrastructure on top of distributed computing resources. Such resources include commercial computing clouds (e.g. Amazon EC2), scientific computing clouds (e.g. CERN lxcloud), as well as the machines of users participating in volunteer computing projects (e.g. BOINC). The framework consists of components that communicate using t ... More
Presented by Artem HARUTYUNYAN on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Grid computing infrastructures need to provide traceability and accounting of their users’ activity and protection against misuse and privilege escalation, where the delegation of privileges in the course of a job submission is a key concern. This work describes an improved handling of multi-user Grid jobs in the ALICE Grid Services. A security analysis of the ALICE Grid job model is presented ... More
Presented by Mr. Steffen SCHREINER on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
Oracle-based database applications underpin many key aspects of operations for both the LHC accelerator and the LHC experiments. In addition to overall performance, predictability of response is a key requirement to ensure smooth operations—and delivering predictability requires understanding the applications from the ground up. Fortunately, the Oracle database management system provides several ... More
Presented by Mariusz PIORKOWSKI on 24 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
Cling (http://cern.ch/cling) is a C++ interpreter, built on top of clang (http://clang.llvm.org) and LLVM (http://llvm.org). Like its predecessor CINT, cling offers an interactive, terminal-like prompt. It enables exploratory programming with rapid edit / run cycles. The ROOT team has more than 15 years of experience with C++ interpreters, and this has been fully exploited in the design of clin ... More
Presented by Vasil Georgiev VASILEV on 21 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
With the start-up of the LHC in 2009, more and more data analysis facilities have been built or enlarged at universities and laboratories. In the meantime, new technologies, like Cloud computing and Web3D, and new types of hardware, like smartphones and tablets, have become available and popular in the market. Is there a way to integrate them into the existing data analysis models and allow physi ... More
Presented by Neng XU on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
ILD is a proposed detector concept for a future linear collider that envisages a Time Projection Chamber (TPC) as the central tracking detector. The ILD TPC will have a large number of voxels whose dimensions are small compared to the typical distances between charged particle tracks. This allows for the application of simple nearest-neighbor-type clustering algorithms to find clean tra ... More
Presented by Frank-Dieter GAEDE on 24 May 2012 at 13:30
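The nearest-neighbour clustering the abstract relies on can be sketched simply: two hits belong to the same cluster when they are closer than a cut distance, and clusters are grown transitively by a flood fill. This is an assumed, generic 2D illustration of the algorithm class, not the ILD reconstruction code:

```python
def nn_clusters(hits, cut):
    """Group 2D hits into clusters of transitively cut-close neighbours."""
    cut2 = cut * cut
    unassigned = set(range(len(hits)))
    clusters = []
    while unassigned:
        seed = unassigned.pop()
        cluster, frontier = [seed], [seed]
        while frontier:                      # flood fill from the seed
            i = frontier.pop()
            xi, yi = hits[i]
            close = [j for j in unassigned
                     if (hits[j][0] - xi) ** 2 + (hits[j][1] - yi) ** 2 < cut2]
            for j in close:
                unassigned.discard(j)        # claim neighbours for this cluster
            cluster.extend(close)
            frontier.extend(close)
        clusters.append(sorted(cluster))
    return clusters

# Two well-separated groups of hits (illustrative data):
hits = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 5.1)]
clusters = nn_clusters(hits, cut=0.5)
```

The approach works precisely because, as the abstract notes, the voxel size is small compared to typical track separations, so the distance cut rarely merges distinct tracks.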
Type: Parallel Session: Collaborative tools
Track: Collaborative tools (track 6)
Publications in scholarly journals establish the body of knowledge deriving from scientific research; they also play a fundamental role in the career path of scientists and in the evaluation criteria of funding agencies. This presentation reviews the evolution of computing-oriented publications in HEP following the start of operation of LHC. Quantitative analyses are illustrated, which document ... More
Presented by Dr. Maria Grazia PIA on 21 May 2012 at 13:55
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Collaborative development proved to be a key to the success of the Dashboard Site Status Board (SSB), which is heavily used by ATLAS and CMS for computing shifts and site-commissioning activities. The SSB is an application that enables Virtual Organisation (VO) administrators to monitor the status of distributed sites. The selection, significance and combination o ... More
Presented by Pablo SAIZ on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The Nordic Tier-1 for the LHC is distributed over several, sometimes smaller, computing centers. In order to minimize administration effort, we are interested in running different grid jobs over one common grid middleware. ARC has been selected as the internal middleware of the Nordic Tier-1. At the moment ARC has no mechanism for automatic software packaging and deployment. The AliEn grid middleware, used ... More
Presented by Boris WAGNER on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
This paper describes the investigative study undertaken to evaluate shared filesystem performance and suitability in the LHCb Online environment. Particular focus is given to the measurements and field tests designed and performed on an in-house AFS setup, and related comparisons with NFSv3 and pNFS are presented. The motivation for the investigation and the test setup arises from the need to serv ... More
Presented by Vijay Kartik SUBBIAH, Niko NEUFELD on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
GridKa, operated by the Steinbuch Centre for Computing at KIT, is the German regional centre for high energy and astroparticle physics computing, currently supporting 10 experiments and serving as a Tier-1 centre for the four LHC experiments. Since the beginning of the project in 2002, the total compute power has been upgraded at least once per year to follow the increasing demands of the experiments ... More
Presented by Andreas HEISS on 22 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability for ... More
Presented by Dave DYKSTRA on 21 May 2012 at 17:25
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Constant changes in computational infrastructure, such as the current interest in Clouds, impose constraints on the design of applications. We must make sure that our analysis infrastructure, including source code and supporting tools, is ready for the on-demand computing (ODC) era. This presentation is about a new analysis concept, driven by users' needs and completely disentangled from the co ... More
Presented by Anar MANAFOV on 22 May 2012 at 13:30
Session: Plenary
Presented by Lennart JOHNSSON on 22 May 2012 at 11:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The Belle II experiment, a next-generation B factory experiment at KEK, is expected to record a data volume two orders of magnitude larger than its predecessor, the Belle experiment. The data size and rate are comparable to or exceed those of the LHC experiments, and require a change of the computing model from the Belle approach, where basically all computing resources were provided by KEK, to a more ... More
Presented by Thomas KUHR on 22 May 2012 at 16:35
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
There are approximately 60 Tier-3 computing sites located on campuses of collaborating institutions in CMS. We describe the function and architecture of these sites, and illustrate the range of hardware and software options. A primary purpose is to provide a platform for local users to analyze LHC data, but they are also used opportunistically for data production. While Tier-3 sites vary widely in ... More
Presented by Robert SNIHUR on 22 May 2012 at 13:30
Session: Plenary
Presented by Adrian POPE on 24 May 2012 at 08:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
In the ATLAS experiment, database systems generally store the bulk of conditions and configuration data needed by event-wise reconstruction and analysis jobs. These systems can be relatively large stores of information, organized and indexed primarily to store all information required for system-specific use cases and efficiently deliver the required information to event-based jobs. Metadata in ... More
Presented by Elizabeth GALLAS on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The CMS experiment has been designed with a 2-level trigger system: the Level 1 Trigger, implemented using FPGA and custom ASIC technology, and the High Level Trigger (HLT), implemented running a streamlined version of the CMS offline reconstruction software on a cluster of commercial rack-mounted computers, comprising thousands of CPUs. The CMS software is written mostly in C++, using Python a ... More
Presented by Andrea BOCCI on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
GridKa is a computing centre located in Karlsruhe. It serves as Tier-1 centre for the four LHC experiments and also provides its computing and storage resources for other non-LHC HEP and astroparticle physics experiments as well as for several communities of the German Grid Initiative D-Grid. The middleware layer at GridKa comprises three main flavours: Globus, gLite and UNICORE. This layer pr ... More
Presented by Dr. Pavel WEBER, Dimitri NILSEN on 22 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
In this paper we present the latest developments introduced in the WNoDeS framework (http://web.infn.it/wnodes); we will in particular describe inter-cloud connectivity, support for multiple batch systems, and coexistence of virtual and real environments on the same hardware. Specific effort has been dedicated to the work needed to deploy a "multi-site" WNoDeS installation. The goal is to give ... More
Presented by Dr. Giacinto DONVITO, Mr. Alessandro ITALIANO on 22 May 2012 at 15:10
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
In the distributed computing model of WLCG, Grid Storage Elements (SE) are by construction completely decoupled from the File Catalogs (FC) where the experiment's files are registered. On the basis of the experience of managing large volumes of data in such an environment, inconsistencies have often occurred, either causing a waste of disk space, in case the data were deleted from the FC but still phy ... More
Presented by Elisa LANCIOTTI on 22 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Scientific research communities have benefited recently from the increasing availability of computing and data infrastructures with unprecedented capabilities for large scale distributed initiatives. These infrastructures are largely defined and enabled by the middleware they deploy. One of the major issues in the current usage of research infrastructures is the need to use similar but often incom ... More
Presented by Dr. Balazs KONYA on 24 May 2012 at 14:45
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The CMS analysis computing model has always relied on jobs running near the data, with data allocation between CMS compute centers organized at the management level, based on the expected needs of the CMS community. While this model provided high CPU utilization during job run times, there were times when a large fraction of CPUs at certain sites sat idle due to lack of demand, all while Terabyt ... More
Presented by Mr. Igor SFILIGOI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The PHENIX detector system at the Relativistic Heavy Ion Collider (RHIC) was one of the first experiments to reach "LHC-era" data rates in excess of 500 MB/s of compressed data, in 2004. In step with new detectors and increasing event sizes and rates, the data logging capability has since grown to about 1500 MB/s. We will explain the strategies we employ to cope with the data volumes in th ... More
Presented by Dr. Martin PURSCHKE on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The extensive use of virtualization technologies in cloud environments has created the need for a new network access layer residing on hosts and connecting the various Virtual Machines (VMs). In fact, massive deployment of virtualized environments imposes requirements on networking for which traditional models are not well suited. For example, hundreds of users issuing cloud requests for which ful ... More
Presented by Marco CABERLETTI on 22 May 2012 at 13:30
Session: Plenary
Presented by Dr. Oxana SMIRNOVA on 22 May 2012 at 09:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. The Detector Control System (DCS) of ATLAS is responsible for the supervision of the detector equipment, the reading of operational parameters, the propagation of the alarms and the archiving of important operational data in a relational database. DCS Data Viewer (DDV) is an application that provides access to t ... More
Presented by Charilaos TSAROUCHAS on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
File replica and metadata catalogs are essential parts of any distributed data management system, and to a large extent determine its functionality and performance. A new File Catalog (DFC), which combines both replica and metadata catalog functionality, was developed in the framework of the DIRAC Project. The DFC design is based on the practical experience with the data management system of the LHCb C ... More
Presented by Dr. Andrei TSAREGORODTSEV on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The DIRAC framework for distributed computing has been designed as a flexible and modular solution that can be adapted to the requirements of any community. Users interact with DIRAC via command line, using the web portal or accessing resources via the DIRAC python API. The current DIRAC API requires users to use a python version valid for DIRAC. Some communities have developed their own softw ... More
Presented by Adrian CASAJUS RAMO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The SuperB asymmetric energy e+e- collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab-1 and a luminosity target of 10^36 cm-2 s-1. In this work we will present our evaluation ... More
Presented by Dr. Giacinto DONVITO on 22 May 2012 at 13:30
Session: DPHEP
on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The Disk Pool Manager (DPM) is a lightweight solution for grid enabled disk storage management. Operated at more than 240 sites it has the widest distribution of all grid storage solutions in the WLCG infrastructure. It provides an easy way to manage and configure disk pools, and exposes multiple interfaces for data access (rfio, xroot, nfs, gridftp and http/dav) and control (srm). During the l ... More
Presented by Ricardo BRITO DA ROCHA on 22 May 2012 at 13:30
Type: Parallel Session: Online Computing
Track: Online Computing (track 1)
The Tevatron Collider, located at the Fermi National Accelerator Laboratory, delivered its last 1.96 TeV proton-antiproton collisions on September 30th, 2011. The DZERO experiment continues to take cosmic data for final alignment for several more months. Since Run 2 started in March 2001, all DZERO data has been collected by the DZERO Level 3 Trigger/DAQ System. The system is a modern, networke ... More
Presented by Gordon WATTS on 24 May 2012 at 17:00
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The Data Bookkeeping Service 3 (DBS 3) provides an improved event data catalog for Monte Carlo and recorded data of the CMS (Compact Muon Solenoid) experiment at the Large Hadron Collider (LHC). It provides the necessary information used for tracking datasets, like data processing history, files and runs associated with a given dataset on a scale of about 10^5 datasets and more than 10^7 files. Al ... More
Presented by Manuel GIFFELS on 24 May 2012 at 13:30
Session: Plenary
Presented by Dr. David SOUTH on 23 May 2012 at 09:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The Compressed Baryonic Matter (CBM) experiment is intended to run at the FAIR facility that is currently being built at GSI in Darmstadt, Germany. For testing future CBM detector and readout-electronics prototypes, several test beamtimes have been performed at different locations, such as GSI, COSY, and the CERN PS. The DAQ software has to handle various data inputs, e.g. standard VME modules on ... More
Presented by Jorn ADAMCZEWSKI-MUSCH on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Super Charm-Tau Factory (CTF) is a future electron-positron collider with a center-of-mass energy range from 2 to 5 GeV and a peak luminosity of about 10^35 cm-2 s-1, unprecedented for this energy range. The CTF project is being developed at the Budker Institute of Nuclear Physics (Novosibirsk, Russia). The main goal of experiments at the Super Charm-Tau Factory is the study of processes with c ... More
Presented by Dr. Ivan LOGASHENKO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
High-resolution detectors in high-energy nuclear physics deliver a huge amount of data, which is often a challenge for data acquisition and mass storage. Lossless compression techniques at the level of the raw data can provide compression ratios of up to a factor of 2. In ALICE, an effective compression factor of >5 for the Time Projection Chamber (TPC) is needed to reach an overall compressi ... More
Presented by Matthias RICHTER on 24 May 2012 at 13:30
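The gain available from lossless raw-data compression can be illustrated with a small sketch. This is not the ALICE TPC scheme; the toy signal, the delta-encoding step and the use of zlib are assumptions chosen only to show why exploiting sample-to-sample correlations improves the ratio.

```python
# Illustrative sketch only (not the ALICE TPC algorithm): delta-encoding
# a slowly varying digitized signal before a generic lossless coder lets
# the coder exploit the small dynamic range of successive differences.
import random
import zlib

random.seed(0)
value, samples = 128, []
for _ in range(10000):                     # toy slowly drifting ADC signal
    value = (value + random.choice((-1, 0, 1))) % 256
    samples.append(value)

raw = bytes(samples)
deltas = bytes([samples[0]] + [(samples[i] - samples[i - 1]) % 256
                               for i in range(1, len(samples))])

ratio_raw = len(raw) / len(zlib.compress(raw, 9))
ratio_delta = len(raw) / len(zlib.compress(deltas, 9))
# Decoding is exact: a running sum modulo 256 restores the samples.
print(f"plain zlib: {ratio_raw:.2f}x, delta + zlib: {ratio_delta:.2f}x")
```

Because the deltas take only a handful of values while the raw samples wander over a wide range, the delta-encoded stream compresses noticeably better, with no loss of information.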
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
All major experiments at the Large Hadron Collider (LHC) need to measure the real storage usage at the Grid sites. This information is equally important for resource management, planning, and operations. To verify the consistency of the central catalogs, experiments ask sites to provide a full list of the files they have on storage, including size, checksum, and other file attributes. Such storage du ... More
Presented by Natalia RATNIKOVA on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
As part of the Advanced Networking Initiative (ANI) of ESnet, we exercise a prototype 100Gb network infrastructure for data transfer and processing for OSG HEP applications. We present results of these tests.
Presented by Mr. haifeng PI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
In 2012 the GridKa Tier-1 computing center hosts 130 kHEPSPEC06 of computing resources, 11 PB of disk and 17.7 PB of tape space. These resources are shared between the four LHC VOs and a number of national and international VOs from high energy physics and other sciences. CernVM-FS has been deployed at GridKa to supplement the existing NFS-based system to access VO software on the worker nodes. It provides ... More
Presented by Mr. Andreas PETZOLD on 22 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Access protection is one of the cornerstones of security. The rule of least privilege demands that access to computer resources like computing services or web applications be restricted such that only users with a need-to can access those resources. Usually this is done by authenticating the user, asking her for something she knows, e.g. a (public) username and a secret password. Unfor ... More
Presented by Dr. Stefan LUEDERS on 22 May 2012 at 17:25
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scal ... More
Presented by Dr. Vincenzo CAPONE on 22 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular ... More
Presented by David TUCKETT on 24 May 2012 at 14:45
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The LHC, at design capacity, has a bunch-crossing rate of 40 MHz, whereas the ATLAS detector has an average recording rate of about 300 Hz. To reduce the event rate while still maintaining a high efficiency for selecting rare events such as Higgs boson decays, a three-level trigger system is used in ATLAS. Events are selected based on physics signatures, such as events with energetic leptons, photons, ... More
Presented by Yu.nakahama HIGUCHI on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Collaborative tools (track 6)
CMSSW (CMS SoftWare) is the overall collection of software and services needed by the simulation, calibration and alignment, and reconstruction modules that process data so that physicists can perform their analyses. It is a long-term project with a large amount of source code. In large-scale and complex projects it is important to have software documentation that is as up-to-date and automated as possible. ... More
Presented by Mantas STANKEVICIUS on 24 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
The processing of data acquired by the CMS detector at the LHC is carried out with an object-oriented C++ software framework: CMSSW. With the increasing luminosity delivered by the LHC, the treatment of recorded data requires extraordinarily large computing resources, also in terms of CPU usage. A possible solution to cope with this task is the exploitation of the features offered by the latest micropro ... More
Presented by Mr. Thomas HAUTH on 24 May 2012 at 17:00
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas o ... More
Presented by Maxim POTEKHIN on 22 May 2012 at 13:30
Type: Parallel Session: Computer Facilities, Production Grids and Networking
Track: Computer Facilities, Production Grids and Networking (track 4)
Scientific experiments are producing huge amounts of data, and they continue increasing the size of their datasets and the total volume of data. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of Scientific Data Centres has shifted from coping efficiently with PetaByte scale storage to d ... More
Presented by Dr. Xavier ESPINAL CURULL on 22 May 2012 at 14:45
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
For the Super Computing 2011 conference in Seattle, Washington, a 100 Gb/s connection was established between the California Institute of Technology conference booth and the University of Victoria. A small team performed disk-to-disk data transfers between the two sites nearing 100 Gb/s, using only a small set of properly configured transfer servers equipped with SSD drives. The circuit was e ... More
Presented by Artur Jerzy BARCZYK, Ian GABLE on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The ATLAS experiment at the LHC at CERN is recording and simulating several tens of petabytes of data per year. To analyse these data the ATLAS experiment has developed and operates a mature and stable distributed analysis (DA) service on the Worldwide LHC Computing Grid. The service is actively used: more than 1400 users submitted jobs in 2011, and a total of more than 1 million jobs ... More
Presented by Johannes ELMSHEUSER on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The Error and Alarm system for the data acquisition of the Compact Muon Solenoid (CMS) at CERN has been successfully used for the physics runs at the Large Hadron Collider (LHC) during the first three years of activity. Error and alarm processing entails the notification, collection, storage and visualization of all exceptional conditions occurring in the highly distributed CMS online system, using a uniform ... More
Presented by Andrea PETRUCCI on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO managers, ... More
Presented by Wojciech LAPKA on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The Double Chooz experiment will measure reactor antineutrino flux from two detectors with a relative normalization uncertainty less than 0.6%. The Double Chooz physical environment monitoring system records conditions of the experiment's environment to ensure the stability of the active volume and readout electronics. The system monitors temperatures in the detector liquids, temperatures and volt ... More
Presented by Ms. Chang PI-JUNG on 24 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The specific requirements concerning the software environment within the HEP community constrain the choice of resource providers for the outsourcing of computing infrastructure. The use of virtualization in HPC clusters and in the context of cloud resources is therefore a subject of recent developments in scientific computing. The dynamic virtualization of worker nodes in common batch systems p ... More
Presented by Oliver OBERST on 22 May 2012 at 14:45
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
A number of storage elements now offer standard protocol interfaces like NFS 4.1/pNFS and WebDAV, for access to their data repositories, in line with the standardization effort of the European Middleware Initiative (EMI). Here we report on work which seeks to exploit the federation potential of these protocols and build a system which offers a unique view of the storage ensemble and the possibilit ... More
Presented by Fabrizio FURANO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The ALICE collaboration has developed a production environment (AliEn) that implements several components of the Grid paradigm needed to simulate, reconstruct and analyze data in a distributed way. In addition to the Grid-like analysis, ALICE, like many experiments, provides a local interactive analysis using the Parallel ROOT Facility (PROOF). PROOF is part of the ROOT analysis framework used by ... More
Presented by Cinzia LUZZI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The LHC computing model relies on intensive network data transfers. The E-Center is a social, collaborative, web-based platform for Wide Area Network users. It is designed to give users all the tools required to isolate, identify and resolve any network-performance-related problem.
Presented by Mr. Maxim GRIGORIEV on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The Operations Portal is a central service used to support operations in the European Grid Infrastructure: a collaboration of National Grid Initiatives (NGIs) and several European International Research Organizations (EIROs). The EGI Operations Portal provides a single access point to operational information gathered from various sources, such as the site topology database, monitoring systems ... More
Presented by Daniel KOURIL, Dr. Mingchao MA, Cyril L'ORPHELIN on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The EMI project intends to receive or rent an exhibition spot near the main and most visible areas of the event (such as coffee-break areas), to exhibit the project's goals and latest achievements, such as the EMI 1 release. The means used will be posters, video and distribution of flyers, sheets or brochures. It would be useful to have a 2x3 booth with panels available for putting up posters, and som ... More
Presented by giuseppina SALENTE, Emidlo GIORGIO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
To manage data in the grid, with its jungle of protocols and enormous amount of data in different storage solutions, it is important to have a strong, versatile and reliable data management library. While there are several data management tools and libraries available, they all have different strengths and weaknesses, and it can be hard to decide which tool to use for which purpose. EMI is a co ... More
Presented by Jon Kerr NILSEN on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The development and distribution of Grid middleware software projects, as large, complex, distributed systems, require a sizeable computing infrastructure for each stage of the software process: for instance, pools of machines for building and for testing on several platforms. Software testing and the possibility of implementing realistic scenarios for the verification of grid middleware are a crucial ... More
Presented by Tomasz WOLAK on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The Compressed Baryonic Matter (CBM) experiment at the future FAIR facility at Darmstadt will measure dileptons emitted from the hot and dense phase in heavy-ion collisions. In case of an electron measurement, a high purity of identified electrons is required in order to suppress the background. Electron identification in CBM will be performed by a Ring Imaging Cherenkov (RICH) detector and Transi ... More
Presented by Semen LEBEDEV on 24 May 2012 at 13:30
Type: Parallel Session: Collaborative tools
Track: Collaborative tools (track 6)
In HEP, scientific research is performed by large collaborations of organizations and individuals. The logbook of a scientific collaboration is an important part of the collaboration record; often, it contains experimental data. At FNAL, we developed an Electronic Collaboration Logbook (ECL) application which is used by about 20 different collaborations, experiments and groups at FNAL. ECL is the lates ... More
Presented by Mr. Igor MANDRICHENKO on 22 May 2012 at 14:45
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The ALICE Grid infrastructure is based on AliEn, a lightweight open source framework built on Web Services and a Distributed Agent Model in which job agents are submitted onto a grid site to prepare the environment and pull work from a central task queue located at CERN. In the standard configuration, each ALICE grid site supports an ALICE-specific VO box as a single point of contact between the ... More
Presented by Jeff PORTER, Iwona SAKREJDA on 22 May 2012 at 17:50
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
In the ATLAS computing model, Tier2 resources are intended for MC production and end-user analysis activities. These resources are usually exploited via the standard Grid resource-management tools, which are de facto a high-level interface to the underlying batch systems managing the contributing clusters. While this works as expected, there are use cases where a more dynamic usage of the r ... More
Presented by Roberto DI NARDO, Elisabetta VILUCCHI on 22 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Grid computing has enabled scientific communities to effectively share computing resources distributed over many independent sites. Several such communities, or Virtual Organizations (VO), in the Open Science Grid and the European Grid Infrastructure use the glideinWMS system to run complex application work-flows. GlideinWMS is a pilot-based workload management system (WMS) that creates on demand, ... More
Presented by Parag MHASHILKAR on 22 May 2012 at 17:25
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
Due to the impending exhaustion of the IPv4 address space, the adoption of IPv6 within Grid technologies and other IT infrastructure is becoming a pressing need. The employment and deployment of this addressing scheme has been discussed widely at both the academic and commercial level for several years. The uptake is not as advanced as was predicted and the potentia ... More
Presented by Mr. Mark MITCHELL on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
Enstore is a mass storage system developed by Fermilab that provides distributed access to and management of the data stored on tapes. It uses a namespace service, PNFS, developed by DESY to provide a filesystem-like view of the stored data. PNFS is a legacy product and is being replaced by a new implementation, called Chimera, also developed by DESY. The Chimera namespace offers multiple advantage ... More
Presented by Dr. Dmitry LITVINTSEV on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The CDF experiment at Fermilab ended its Run II phase in September 2011 after 11 years of operations and 10 fb-1 of collected data. The CDF computing model is based on a Central Analysis Farm (CAF) consisting of local computing and storage resources, supported by OSG and LCG resources accessed through dedicated portals. Recently a new portal, Eurogrid, has been developed to effectively exploi ... More
Presented by Ms. Silvia AMERIO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
CTA (Cherenkov Telescope Array) is one of the largest ground-based astronomy projects being pursued and will be the largest facility for ground-based gamma-ray observations ever built. CTA will consist of two arrays (one in the Northern hemisphere and one in the Southern hemisphere) composed of several different sizes of telescopes. A prototype for the Medium Size Telescope (MST) type of a diamete ... More
Presented by Igor OYA on 24 May 2012 at 13:30
Type: Parallel Session: Computer Facilities, Production Grids and Networking
Track: Computer Facilities, Production Grids and Networking (track 4)
40 Gb/s network technology is increasingly available today in data centers as well as in network backbones. We have built and evaluated storage systems equipped with the latest generation of 40 GbE network interface cards. The recently available motherboards with the PCIe v3 bus make it possible to reach the full 40 Gb/s rate per network interface. A fast caching system was built using ... More
Presented by Artur Jerzy BARCZYK, Azher MUGHAL, sandor ROZSA on 24 May 2012 at 14:45
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Staging data to and from remote storage services on the Grid for users' jobs is a vital component of the ARC computing element. A new data staging framework for the computing element has recently been developed to address issues with the present framework, which has essentially remained unchanged since its original implementation 10 years ago. This new framework consists of an intelligent data tr ... More
Presented by David CAMERON on 22 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
One of the most crucial requirements for online storage is fast and efficient access to data. Although smart client-side caching often compensates for drawbacks such as latency and server disk congestion, spinning disks, with their limited ability to serve multi-stream random access patterns, seem to be the cause of most of the observed inefficiencies. With the appearance of the differen ... More
Presented by Dr. Patrick FUHRMANN, Dmitry OZEROV on 22 May 2012 at 17:00
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
EOS is a new disk based storage system used in production at CERN since autumn 2011. It is implemented using the plug-in architecture of the XRootD software framework and allows remote file access via XRootD protocol or POSIX-like file access via FUSE mounting. EOS was designed to fulfill specific requirements of disk storage scalability and IO scheduling performance for LHC analysis use cases. Th ... More
Presented by Dr. Andreas PETERS on 22 May 2012 at 13:30
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
The PANDA experiment will study the collisions of beams of anti-protons, with momenta ranging from 2 to 15 GeV/c, with fixed proton and nuclear targets in the charm energy range, and will be built at the FAIR facility. In preparation for the experiment, the PandaRoot software framework is under development for detector simulation, reconstruction and data analysis, running on an Alien2-based grid. The ... More
Presented by Stefano SPATARO on 22 May 2012 at 17:50
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The electron and photon triggers are among the most widely used triggers in ATLAS physics analyses. In 2011, the increasing luminosity and pile-up conditions demanded higher and higher thresholds and the use of tighter and tighter selections for the electron triggers. Optimizations were performed at all three levels of the ATLAS trigger system. At the high-level trigger (HLT), many variables from ... More
Presented by Liam DUGUID on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The PanDA Production and Distributed Analysis System plays a key role in the ATLAS distributed computing infrastructure. PanDA is the ATLAS workload management system for processing all Monte-Carlo simulation and data reprocessing jobs in addition to user and group analysis jobs. The system processes more than 5 million jobs in total per week, and more than 1400 users have submitted analysis jobs ... More
Presented by Tadashi MAENO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
We review the architecture of the PHENIX data acquisition system and how it has evolved over 12 years of operation. Custom data acquisition hardware front-end modules embedded in the detector, operated in a largely inaccessible experimental hall, have been controlled and monitored, and a large software infrastructure has been developed around remote objects which are controlled from a relativ ... More
Presented by John HAGGERTY on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
In 2002, the first central CERN service for version control based on CVS was set up. Since then, three different services based on CVS and SVN have been launched and run in parallel; there are user requests for another service based on git. In order to ensure that the most demanded services are of high quality in terms of performance and reliability, services in less demand had to be shut down. Th ... More
Presented by Alvaro GONZALEZ ALVAREZ on 24 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
The ATLAS experiment deployed Frontier technology world-wide during the initial year of LHC collision data taking to enable user analysis jobs running on the World-wide LHC Computing Grid to access database-resident data. Since that time, the deployment model has evolved to optimize resources, improve performance, and streamline maintenance of Frontier and related infrastructure. In this pres ... More
Presented by Alastair DEWHURST on 22 May 2012 at 17:25
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. Over more than 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verification of patches to existing sof ... More
Presented by Dr. Alexander UNDRUS on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The Computing Model of the CMS experiment was prepared in 2005 and described in detail in the CMS Computing Technical Design Report. With the experience of the first years of LHC data taking and with the evolution of the available technologies, the CMS Collaboration identified areas where improvements were desirable. In this work we describe the most important modifications that have been, or are ... More
Presented by Claudio GRANDI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of Russian Academy of Sciences including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics ... More
Presented by Alexey ANISENKOV on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The ATLAS computing infrastructure was designed many years ago based on the assumption of rather limited network connectivity between computing centers. ATLAS sites have been organized in a hierarchical model, where only a static subset of all possible network links can be exploited and a static subset of well connected sites (CERN and the T1s) can cover important functional roles such as hosting ... More
Presented by Simone CAMPANA on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The DIRAC framework for distributed computing has been designed as a group of collaborating components, agents and servers, with a persistent database back-end. Components communicate with each other using DISET, an in-house protocol that provides Remote Procedure Call (RPC) and file transfer capabilities. This approach has provided DIRAC with a modular and stable design by enforcing stable interfaces a ... More
Presented by Adrian CASAJUS RAMO on 22 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have success ... More
Presented by Daniel Colin VAN DER STER on 22 May 2012 at 16:35
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The BES III detector is a new spectrometer operating at the upgraded high-luminosity collider, the Beijing Electron-Positron Collider (BEPCII). The BES III experiment studies physics in the tau-charm energy region from 2 GeV to 4.6 GeV. Since spring 2009, BEPCII has produced large-scale data samples. All the data samples were processed successfully and many important physics results have been a ... More
Presented by Dr. Ziyan DENG on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
Chirp is a distributed file system specifically designed for the wide area network, and developed by the University of Notre Dame CCL group. We describe the design features making it particularly suited to the Grid environment, and to ATLAS use cases. The deployment and usage within ATLAS distributed computing are discussed, together with scaling tests and evaluation for the various use cases.
Presented by Rodney WALKER on 22 May 2012 at 13:30
Type: Parallel Session: Computer Facilities, Production Grids and Networking
Track: Computer Facilities, Production Grids and Networking (track 4)
We present results on different approaches to mounted filesystems in use or under investigation at DESY. dCache, long established as a storage system for physics data, has implemented the NFS v4.1/pNFS protocol. New performance results will be shown with the most current version of the dCache server. In addition to the native usage of the mounted filesystem in a LAN environment, the resu ... More
Presented by Patrick FUHRMANN, Martin GASTHUBER, Yves KEMP, Dmitry OZEROV on 21 May 2012 at 17:50
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The ATLAS experiment is observing proton-proton collisions delivered by the LHC accelerator at a centre of mass energy of 7 TeV. The ATLAS Trigger and Data Acquisition (TDAQ) system selects interesting events on-line in a three-level trigger system in order to store them at a budgeted rate of several hundred Hz, for an average event size of ~1.2 MB. This paper focuses on the TDAQ data-logging sy ... More
Presented by Marius Tudor MORAR on 24 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality to identify a ... More
Presented by Maria ALANDES PRADILLO on 21 May 2012 at 14:45
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The Experiment Dashboard system provides common solutions for monitoring job processing, data transfers and site/service usability. Over the last seven years, it proved to play a crucial role in the monitoring of the LHC computing activities, distributed sites and services. It has been one of the key elements during the commissioning of the distributed computing systems of the LHC experiments. ... More
Presented by Pablo SAIZ on 22 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The ATLAS Computing Model was designed around the concepts of grid computing; since the start of data-taking, this model has proven very successful in the federated operation of more than one hundred Worldwide LHC Computing Grid (WLCG) sites for offline data distribution, storage, processing and analysis. However, new paradigms in computing, namely virtualization and cloud computing, present impro ... More
Presented by Fernando Harald BARREIRO MEGINO on 22 May 2012 at 14:20
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
The SuperB asymmetric energy e+e- collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab-1 and a luminosity target of 10^36 cm-2 s-1. These parameters require a substantial growt ... More
Presented by Marco CORVO on 22 May 2012 at 17:00
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent ... More
Presented by Norman Anthony GRAF on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The FAZIA project groups together several institutions in Nuclear Physics, which are working in the domain of heavy-ion induced reactions around and below the Fermi energy. The aim of the project is to build a 4Pi array for charged particles, with high granularity and good energy resolution, with A and Z identification capability over the widest possible range. It will use the up-to-da ... More
Presented by Gennaro TORTONE on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
FAZIA stands for the Four Pi A and Z Identification Array, a project which aims at building a new 4pi detector for charged particles. It will operate in the domain of heavy-ion induced reactions around the Fermi energy and brings together several international institutions in Nuclear Physics. It is planned to operate with both stable and radioactive nuclear beams. A lar ... More
Presented by Alfonso BOIANO on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
A framework for Fast Simulation of particle interactions in the CMS detector has been developed and implemented in the overall simulation, reconstruction and analysis framework of CMS. It produces data samples in the same format as the one used by the Geant4-based (henceforth Full) Simulation and Reconstruction chain; the output of the Fast Simulation of CMS can therefore be used in the analysis i ... More
Presented by Rahmat RAHMAT on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
We present the ATLAS simulation packages ATLFAST-II and ISF. ATLFAST-II combines a sophisticated fast parametrized simulation of the calorimeter system with full Geant4 simulation precision in the Inner Detector and Muon Systems. This combination offers a relative speed increase of around a factor of ten compared to the standard ATLAS detector simulation and is being used to supplemen ... More
Presented by Wolfgang LUKAS on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The Fermi Gamma-ray Observatory, including the Large Area Telescope (LAT), was launched June 11, 2008. We are a relatively small collaboration, with a maximum of 25 software developers in our heyday. Within the LAT collaboration we support Red Hat Linux and Windows, and are moving towards Mac OS as well, for offline simulation, reconstruction and analysis tools. Early on it was decided to use one ... More
Presented by Ms. Heather KELLY on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
FermiCloud is an Infrastructure-as-a-Service facility deployed at Fermilab based on OpenNebula that has been in production for more than a year. FermiCloud supports a variety of production services on virtual machines as well as hosting virtual machines that are used as development and integration platforms. This infrastructure has also been used as a testbed for commodity storage evaluation ... More
Presented by Steven TIMM on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
FermiGrid is the facility that provides the Fermilab Campus Grid with unified job submission, authentication, authorization and other ancillary services for the Fermilab scientific computing stakeholders. We have completed a program of work to make these services resilient to high authorization request rates, as well as failures of building or network infrastructure. We will present the ... More
Presented by Steven TIMM on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
As part of the DOE LQCD-ext project, Fermilab designs, deploys, and operates dedicated high performance clusters for parallel lattice QCD (LQCD) computations. Multicore processors benefit LQCD simulations and have contributed to the steady decrease in price/performance for these calculations over the last decade. We currently operate two large conventional clusters, the older with over 6,800 AMD ... More
Presented by Dr. Don HOLMGREN on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The ATLAS Metadata Interface (“AMI”) was designed as a generic cataloguing system, and as such it has found many uses in the experiment including software release management, tracking of reconstructed event sizes and control of dataset nomenclature. In this paper we will discuss the primary use of AMI which is to provide a catalogue of datasets (file collections) which is searchable using phys ... More
Presented by Elizabeth GALLAS on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The BES III experiment at the Institute of High Energy Physics (IHEP), Beijing, uses the high-luminosity BEPC II e+e- collider to study physics in the τ-charm energy region around 3.7 GeV; BEPC II has produced the world’s largest samples of J/ψ and ψ’ events to date. An order of magnitude increase in the data sample size over the 2011-2012 data-taking period demanded a move from a very cent ... More
Presented by Caitriana NICHOLSON on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The ALICE High-Level Trigger (HLT) is a complex real-time system, whose primary objective is to scale down the data volume read out by the ALICE detectors to at most 4 GB/sec before being written to permanent storage. This can be achieved by using a combination of event filtering, selection of the physics regions of interest and data compression, based on detailed on-line event reconstruction. ALI ... More
Presented by Dinesh RAM on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
A desktop grid (DG) is a well-known technology for aggregating volunteer computing resources donated by individuals to dynamically construct a virtual cluster. Much effort has gone in recent years into extending and interconnecting desktop grids with other distributed computing resources, especially so-called "service grid" middleware such as "gLite", "ARC" and "Unicore". In th ... More
Presented by Dr. Oleg LODYGENSKY on 22 May 2012 at 13:30
Type: Parallel Session: Collaborative tools
Track: Collaborative tools (track 6)
Collaboration Tools, Videoconference, support for large scale scientific collaborations, HD video
Presented by Mr. Philippe GALVEZ on 22 May 2012 at 13:30
Type: Parallel Session: Computer Facilities, Production Grids and Networking
Track: Computer Facilities, Production Grids and Networking (track 4)
The much-heralded exhaustion of the IPv4 networking address space has finally started. While many of the research and education networks have been ready and poised for years to carry IPv6 traffic, there is a well-known lack of academic institutions using the new protocols. One reason for this is an obvious absence of pressure, due to the extensive use of NAT, or that most currently still have sufficie ... More
Presented by Edoardo MARTELLI on 21 May 2012 at 13:55
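The dual-stack readiness discussed here can be probed from the application side. A minimal sketch of such a check, not taken from the contribution (the function name and host argument are illustrative only):

```python
import socket

def has_ipv6_address(host, port=443):
    """Return True if `host` resolves to at least one IPv6 (AAAA) address."""
    try:
        return len(socket.getaddrinfo(host, port, socket.AF_INET6,
                                      socket.SOCK_STREAM)) > 0
    except socket.gaierror:
        return False  # no AAAA record, or resolution failed entirely

# Local capability check: was this Python/OS built with IPv6 socket support?
print(socket.has_ipv6)
```

A check like this only tests name resolution; verifying that IPv6 traffic actually flows end-to-end (routing, firewalls, middleboxes) is the harder part that deployments such as the one described must address.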
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
PhEDEx is the data-movement solution for CMS at the LHC. Created in 2004, it is now one of the longest-lived components of the CMS dataflow/workflow world. As such, it has undergone significant evolution over time, and continues to evolve today, despite being a fully mature system. Originally a toolkit of agents and utilities dedicated to specific tasks, it is becoming a more open framework tha ... More
Presented by Dr. Tony WILDISH on 22 May 2012 at 13:30
Session: Plenary
Presented by Johan MESSCHENDORP on 24 May 2012 at 09:00
Session: Plenary
Presented by Makoto ASAI on 23 May 2012 at 09:00
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The Grid File Access Library (GFAL) is a library designed for universal and simple access to grid storage systems. Completely re-designed and re-written, version 2.0 of GFAL provides a complete abstraction of the complexity and heterogeneity of grid storage systems (DPM, LFC, dCache, StoRM, ARC, ...) and of the data management protocols (RFIO, gsidcap, LFN, dcap, SRM, HTTP/WebDAV, ... More
Presented by Adrien DEVRESSE on 22 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
Modern superscalar, out-of-order microprocessors dominate large-scale server computing. Monitoring their activity during program execution has become complicated due to the complexity of the microarchitectures and their IO interactions. Recent processors have thousands of performance monitoring events, which are required to actually provide coverage for all of the complex interactions and perfor ... More
Presented by Roberto Agostino VITILLO on 21 May 2012 at 13:55
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
One possible option for the ATLAS High-Level Trigger (HLT) upgrade for higher LHC luminosity is to use GPU-accelerated event processing. In this talk we discuss parallel data preparation and track finding algorithms specifically designed to run on GPUs. We present a "client-server" solution for hybrid CPU/GPU event reconstruction which allows for the simple and flexible integration of th ... More
Presented by Jacob Russell HOWARD on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
New developments on visualization drivers in the Geant4 software toolkit
Presented by Mr. Laurent GARNIER on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
An overview of the current status of electromagnetic physics (EM) of the Geant4 toolkit is presented. Recent improvements are focused on the performance of large scale production for LHC and on the precision of simulation results over a wide energy range. Significant efforts have been made to improve the accuracy and CPU speed for EM particle transport. New biasing options available for Geant4 EM ... More
Presented by Francisca GARAY WALLS on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
Current HENP libraries and frameworks were written before multicore systems became widely deployed and used. From this environment, a 'single-thread' processing model naturally emerged but the implicit assumptions it encouraged are greatly impairing our abilities to scale in a multicore/manycore world. Writing scalable code in C++ for multicore architectures, while doable, is no panacea. S ... More
Presented by Dr. Sebastien BINET on 24 May 2012 at 13:30
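The single-thread limitation described above is commonly sidestepped by process-level parallelism over independent events, which avoids shared mutable state entirely. A minimal sketch of that idea (the event model and function names are invented for illustration, not from the contribution):

```python
from multiprocessing import Pool

def process_event(event):
    """Stand-in for per-event reconstruction: stateless, independent work."""
    return sum(hit * hit for hit in event)

if __name__ == "__main__":
    # Events are mutually independent, so they map naturally onto a pool
    # of worker processes; each worker runs the old single-thread code
    # unchanged, and only the results are gathered centrally.
    events = [[1, 2, 3], [4, 5], [6]]
    with Pool(processes=2) as pool:
        results = pool.map(process_event, events)
    print(results)  # [14, 41, 36]
```

Because each worker is a separate process, the legacy frameworks' implicit single-thread assumptions are preserved inside each worker; the cost is duplicated memory per process, which is one of the pressures motivating the language-level approaches the abstract discusses.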
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The future FAIR experiments CBM and PANDA have computing requirements that fall in a category that cannot currently be satisfied by a single computing centre. A larger, distributed computing infrastructure is needed to cope with the amount of data to be simulated and analysed. Since 2002, GSI has operated a Tier2 centre for ALICE@CERN. The central component of the GSI computing facility and h ... More
Presented by Dr. Kilian SCHWARZ on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The primary goal of a Grid information system is to display the current composition and state of a Grid infrastructure. Its purpose is to provide the information required for workload and data management. As these models evolve, the information system requirements need to be revisited and revised. This paper first documents the results from a recent survey of LHC VOs on the information system req ... More
Presented by Mr. Laurence FIELD on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Within the DIRAC framework in the LHCb collaboration, we deployed an autonomous policy system acting as a central status information point for grid elements. Experts working as grid administrators have a broad and very deep knowledge of the underlying system, which makes them invaluable. We have attempted to formalize this knowledge in an autonomous system able to aggregate information, draw ... More
Presented by Federico STAGNI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The H1 Collaboration at HERA is now in the era of high-precision analyses based on the final and complete data sample. A natural consequence of this is a huge increase in the demand for simulated Monte Carlo (MC) events. In response to this increase, a framework for large-scale MC production using the LCG Grid Infrastructure was developed. After 3 years, the H1 MC Computing Framework ... More
Presented by Bogdan LOBODZINSKI on 22 May 2012 at 13:30
Session: Plenary
Presented by Dr. Rene BRUN on 21 May 2012 at 10:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The CORAL software is widely used by the LHC experiments for storing and accessing data using relational database technologies. CORAL provides a C++ abstraction layer that supports data persistency for several backends and deployment models, including local access to SQLite files, direct client access to Oracle and MySQL servers, and read-only access to Oracle through the FroNTier/Squid and CoralS ... More
Presented by Dr. Andrea VALASSI on 24 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
Data management for a wide category of non-event data plays a critical role in the operation of the CMS experiment. The processing chain (data taking, reconstruction, analysis) relies on the prompt availability of specific, time-dependent data describing the state of the various detectors and their calibration parameters, which are treated separately from event data. The Condition Database system is ... More
Presented by Giacomo GOVI on 22 May 2012 at 17:50
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The CMS experiment online cluster consists of 2300 computers and 170 switches or routers operating on a 24-hour basis. This huge infrastructure must be monitored in such a way that the administrators are proactively warned of any failure or degradation in the system, in order to avoid or minimize downtime, which can lead to loss of data taking. The number of metrics monitored per host vari ... More
Presented by Olivier RAGINEL on 22 May 2012 at 13:30
Type: Parallel Session: Computer Facilities, Production Grids and Networking
Track: Computer Facilities, Production Grids and Networking (track 4)
GSI in Darmstadt (Germany) is a center for heavy ion research. It hosts an ALICE Tier2 center and is the home of the future FAIR facility. The planned data rates of the largest FAIR experiments, CBM and PANDA, will be similar to those of the current LHC experiments at CERN. gStore is a hierarchical storage system with a unique name space that has been successfully in operation for more than fifteen ... More
Presented by Dr. Horst GöRINGER on 21 May 2012 at 17:25
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
We present a performance study of a high-speed RocketIO receiver card implemented as a PCI-express device, intended for use in a future luminosity-frontier HEP experiment. To search for new physics beyond the Standard Model, the Belle II experiment will start in 2015 at KEK, Japan. In Belle II, the detector signals are digitized in or near the detector complex, and the digitized signals are ... More
Presented by Takeo HIGUCHI on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The CMS detector control system (DCS) is responsible for controlling and monitoring the detector status and for the operation of all CMS sub-detectors and infrastructure. This is required to ensure safe and efficient data taking, so that high-quality physics data can be recorded. The current system architecture is composed of more than 100 servers, in order to provide the required processing resou ... More
Presented by Dr. Giovanni POLESE on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The ATLAS experiment is operated by a highly distributed computing system which constantly produces large amounts of status information, used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm, consisting of about 2000 nodes. E ... More
Presented by Dr. Giuseppe AVOLIO on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
With many servers and server parts, the environment of warehouse-sized data centers is increasingly complex. Server life-cycle management and hardware failures are responsible for frequent changes that need to be managed. To manage these changes better, a project codenamed "hardware hound", focusing on hardware failure trending and hardware inventory, has been started at CERN. By creating and using ... More
Presented by Miguel COELHO DOS SANTOS on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
A hybrid C++/Python environment built from standard components is being heavily and successfully used in LHCb, both for off-line physics analysis and for the High Level Trigger. The approach is based on the LoKi toolkit and the Bender analysis framework. A small set of highly configurable C++ components allows the most frequent analysis tasks to be described, e.g. combining and filteri ... More
Presented by Mr. Ivan BELYAEV on 24 May 2012 at 13:30
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
A critical component of any multicore/manycore application architecture is the handling of input and output. Even in the simplest of models, design decisions interact both in obvious and in subtle ways with persistence strategies. When multiple workers handle I/O independently using distinct instances of a serial I/O framework, for example, it may happen that because of the way data from conse ... More
Presented by Peter VAN GEMMEREN on 21 May 2012 at 17:50
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The ATLAS Tier3 at IFIC-Valencia is attached to a Tier2 that has 50% of the Spanish Federated Tier2 resources. In its design, the Tier3 includes a GRID-aware part that shares some of the features of Valencia's Tier2 such as using Lustre as a file system. ATLAS users, 70% of IFIC's users, also have the possibility of analysing data with a PROOF farm and storing them locally. In this contribution ... More
Presented by Mr. Miguel VILLAPLANA PEREZ on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The INFN Tier1 at CNAF is the first-level Italian High Energy Physics computing center, which provides resources to the scientific community using the grid infrastructure. The Tier1 is composed of a very complex infrastructure divided into different parts: the hardware layer, the storage services, the computing resources (i.e. worker nodes adopted for analysis and other activities) and final ... More
Presented by Mr. Pier Paolo RICCI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The Computing Centre of the Institute of Physics in Prague provides computing and storage resources for various HEP experiments (D0, ATLAS, ALICE, Auger); it currently operates more than 300 worker nodes with more than 2500 cores and provides more than 2 PB of disk space. Our site is limited to one class C block of IPv4 addresses, and hence we had to move most of our worker nodes behind a NAT. Howe ... More
Presented by Tomas KOUBA on 22 May 2012 at 13:30
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
GPGPU computing offers extraordinary increases in pure processing power for parallelizable applications. In IceCube we use GPUs for ray-tracing of Cherenkov photons in the Antarctic ice as part of detector simulation. We report on how we implemented the mixed simulation production chain to include processing on the GPGPU cluster for the IceCube Monte Carlo production. We also present ideas to ... More
Presented by Mr. Heath SKARLUPKA on 24 May 2012 at 17:00
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
Due to their production at the early stages, heavy flavor particles are of interest to study the properties of the matter created in heavy ion collisions at RHIC. Previous measurements of $D$ and $B$ mesons at RHIC[1, 2] using semi-leptonic probes show a suppression similar to that of light quarks, which is in contradiction with theoretical models only including gluon radiative energy loss mecha ... More
Presented by Jonathan BOUCHET on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
By the end of 2011, a number of US Department of Energy (DOE) National Laboratories will have access to a 100 Gb/s wide-area network backbone. The ESnet Advanced Networking Initiative (ANI) project is intended to develop a prototype network, based on emerging 100 Gb/s ethernet technology. The ANI network will support DOE’s science research programs. A 100 Gb/s network testbed is a key component ... More
Presented by Dr. Gabriele GARZOGLIO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The BaBar high energy physics experiment acquired data from 1999 until 2008. Soon after the end of data taking, the effort to produce the final dataset started. This final dataset contains over 11x10^9 events, in 1.6x10^6 files, over a petabyte of storage. The Long Term Data Access (LTDA) project aims at the preservation of the BaBar data, analysis tools and documentation to ensure the capability ... More
Presented by Dr. Douglas SMITH on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
Neutrino physics research is an important part of the FNAL scientific program in the post-Tevatron era. Neutrino experiments are taking advantage of the high beam intensity delivered by the FNAL accelerator complex. These experiments share a common beam infrastructure and require detailed information about the operation of the beam to perform their measurements. We have designed and implemented a syst ... More
Presented by Mr. Igor MANDRICHENKO on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
Recent PC servers are equipped with multi-core CPUs, and it is desirable to utilize their full processing power for data analysis in large-scale HEP experiments. A software framework, basf2, is being developed for use in the Belle II experiment, an upgraded B-factory experiment at KEK, with parallel event processing in its design. The framework accepts a set of plug-in functiona ... More
Presented by Prof. Ryosuke ITOH on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The possible implementation of parallel algorithms will be described. The functionality will be demonstrated using Swarm, a new experimental interactive parallel framework. Access from several parallel-friendly scripting languages will be shown. Benchmarks of typical tasks used in High Energy Physics code will be provided. The talk will concentrate on using the "Fork and Joi ... More
Presented by Dr. Julius HRIVNAC on 24 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
During the first two years of data taking, the CMS experiment has collected over 20 petabytes of data and processed and analyzed it on the distributed, multi-tiered computing infrastructure of the Worldwide LHC Computing Grid. Given the increasing data volume that has to be stored and efficiently analyzed, it is a challenge for several LHC experiments to optimize and automate the data placement s ... More
Presented by Dr. Domenico GIORDANO, Fernando Harald BARREIRO MEGINO on 24 May 2012 at 17:25
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
In the past year, the development of ROOT I/O has focused on improving the existing code and increasing the collaboration with the experiments' experts. Regular I/O workshops have been held to share and build upon the varied experiences and points of view. The resulting improvements in ROOT I/O span many dimensions including reduction and more control over the memory usage, drastic reduction in CP ... More
Presented by Mr. Philippe CANAL on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more to come in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes more than 80 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which ar ... More
Presented by Federica LEGGER on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
We report on the progress of the multi-core versions of Geant4, including multi-process and multi-threaded Geant4. The performance of the multi-threaded version of Geant4 has been measured, identifying an overhead compared with the sequential version of 20-30%. We explain the reasons, and the improvements introduced to reduce this overhead. In addition we have improved the design of a few k ... More
Presented by Xin DONG, Dr. John APOSTOLAKIS on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Collaborative tools (track 6)
During its 20 years of R&D, construction and operation, the PHENIX experiment at RHIC has accumulated large amounts of proprietary collaboration data that is hosted on many servers around the world and is not open to commercial search engines for indexing and searching. The legacy search infrastructure did not scale well with the fast-growing PHENIX document base and produced results inadequa ... More
Presented by Irina SOURIKOVA on 24 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
The Data-Acquisition System designed by ALICE, the experiment dedicated to the study of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider), handles the data flow from the sub-detector electronics to the archiving on tape. The software framework of the ALICE data-acquisition system is called DATE (ALICE Data Acquisition and Test Environment) and ... More
Presented by Mrs. Jianlin ZHU on 21 May 2012 at 15:10
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
What is an EMI Release? What is its life-cycle? How is its quality assured through a continuous integration and large scale acceptance testing? These are the main questions that this article will answer, by presenting the EMI release management process with emphasis on the role played by the Testing Infrastructure in improving the quality of the middleware provided by the project. The European ... More
Presented by Doina Cristina AIFTIMIEI, Danilo DONGIOVANNI on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
This work shows the optimizations we have been investigating and implementing at the KVM virtualization layer in the INFN Tier-1 at CNAF, based on more than a year of experience in running thousands of virtual machines in a production environment used by several international collaborations. These optimizations increase the adaptability of virtualization solutions to demanding applications l ... More
Presented by Mr. Andrea CHIERICI on 22 May 2012 at 13:30
Type: Parallel Session: Collaborative tools
Track: Collaborative tools (track 6)
Since 2009, the development of Indico has focused on usability, performance and new features, especially the ones related to meeting collaboration. Usability studies have resulted in the biggest change Indico has experienced up to now, a new web layout that makes the user experience better. Performance improvements were also a key goal since 2010; the main features of Indico have been optimized re ... More
Presented by Pedro FERREIRA on 21 May 2012 at 14:20
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
We describe our experience of operating a large Tier-2 site since 2005 and how we have developed an integrated management system using third-party, open source components. This system tracks individual assets and records their attributes such as MAC and IP addresses; derives DNS and DHCP configurations from this database; creates each host's installation and re-configuration scripts; monitors the ... More
Presented by Andrew MCNAB on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
High Energy Physics (HEP) analyses are becoming more complex and demanding due to the large amount of data collected by the current experiments. The Parallel ROOT Facility (PROOF) provides researchers with an interactive tool to speed up the analysis of huge volumes of data by exploiting parallel processing on both multicore machines and computing clusters. The typical PROOF deployment scenario is ... More
Presented by Dr. Ana Y. RODRíGUEZ-MARRERO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The PanDA Workload Management System is the basis for distributed production and analysis for the ATLAS experiment at the LHC. In this role, it relies on sophisticated dynamic data movement facilities developed in ATLAS. In certain scenarios, such as small research teams in ATLAS Tier-3 sites and non-ATLAS Virtual Organizations supported by the Open Science Grid consortium (OSG), the overhead of ... More
Presented by Maxim POTEKHIN on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The gUSE (Grid User Support Environment) framework allows application workflows to be created, stored and distributed. This workflow architecture includes a wide variety of payload execution operations, such as loops, conditional execution of jobs and combination of output. These complex multi-job workflows can easily be created and modified by application developers through the WS-PGRADE portal. The po ... More
Presented by Albert PUIG NAVARRO on 22 May 2012 at 13:30
Type: Parallel Session: Computer Facilities, Production Grids and Networking
Track: Computer Facilities, Production Grids and Networking (track 4)
The Large Hadron Collider (LHC) is currently running at CERN in Geneva, Switzerland. Physicists are using the LHC to recreate the conditions just after the Big Bang, by colliding two beams of particles and heavy ions head-on at very high energy. The project is expected to generate 27 TB of raw data per day, plus 10 TB of "event summary data". This data is sent out from CERN to eleven Tier 1 academic i ... More
Presented by Dr. Domenico VICINANZA on 21 May 2012 at 13:30
Session: Plenary
Presented by David GROEP on 25 May 2012 at 12:00
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The density of rack-mount computers is continually increasing, allowing for higher performance processing in smaller and smaller spaces. With the introduction of its new Bulldozer micro-architecture, AMD has made it feasible to run up to 128 cores within a 2U rack-mount space. CPUs based on Bulldozer contain a series of modules, each module containing two processing cores which share some resourc ... More
Presented by Simon William FAYER, Stuart WAKEFIELD on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as ... More
Presented by Gabriele GARZOGLIO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The search for particle trajectories is the basis of on-line event reconstruction in the heavy-ion CBM experiment (FAIR/GSI, Darmstadt, Germany). The experimental requirements are very high, namely: up to 10^7 collisions per second, up to 1000 charged particles produced in a central collision, a non-homogeneous magnetic field, and about 85% of additional background combinatorial measurements in the ... More
Presented by Igor KULAKOV on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
This work is focused on the creation and validation tests of a replica and transfer system for computational grids, inspired by the needs of High Energy Physics (HEP). Due to the high volume of data created by the HEP experiments, an efficient file and dataset replica system may play an important role in the computing model. Data replica systems allow the creation of copies, distrib ... More
Presented by Stephen GOWDY on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
Jigsaw provides a collection of tools for high-energy physics analyses. In Jigsaw's paradigm input data, analyses and histograms are factorized so that they can be configured and put together at run-time to give more flexibility to the user. Analyses are focussed on physical objects such as particles and event shape quantities. These are distilled from the input data and brought to the analysi ... More
Presented by Riccardo DI SIPIO on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Ganga is an easy-to-use frontend for the definition and management of analysis jobs, providing a uniform interface across multiple distributed computing systems. It is the main end-user distributed analysis tool for the ATLAS and LHCb experiments and provides the foundation layer for the HammerCloud system, used by the LHC experiments for validation and stress testing of their numerous distributed ... More
Presented by Michael John KENYON on 22 May 2012 at 13:30
Session: Plenary
Presented by Mr. Glen CRAWFORD on 21 May 2012 at 08:45
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
Today, every OS in the world requires regular reboots in order to be up to date and secure. Since reboots cause downtime and disruption, sysadmins are forced to choose between security and convenience. Until Ksplice. Ksplice is new technology that can patch a kernel while the system is running, with no disruption whatsoever. We use this technology to provide Ksplice Uptrack, a service that deli ... More
Presented by Waseem DAHER on 22 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
The LCG Persistency Framework consists of three software packages (POOL, CORAL and COOL) that address the data access requirements of the LHC experiments in several different areas. The project is the result of the collaboration between the CERN IT Department and the three experiments (ATLAS, CMS and LHCb) that are using some or all of the Persistency Framework components to access their data. The ... More
Presented by Raffaello TRENTADUE on 22 May 2012 at 17:00
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
LCIO is a persistency framework and event data model which, as originally presented at CHEP 2003, was developed for the next linear collider physics and detector response simulation studies. Since then, the data model has been extended to also incorporate raw data formats as well as reconstructed object classes. LCIO defines a common abstract user interface (API) and is designed to be lightweight ... More
Presented by Norman Anthony GRAF on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
In the quest to develop a Space Radiation Dosimeter based on the Timepix chip from Medipix2 Collaboration, the fundamental issue is how Dose and Dose-equivalent can be extracted from the raw Timepix outputs. To calculate the Dose-equivalent, each type of potentially incident radiation is given a Quality Factor, also referred to as Relative Biological Effectiveness (RBE). As proposed in the Nationa ... More
Presented by Mr. SON HOANG on 24 May 2012 at 13:30
Session: Plenary
Presented by Prof. Joe INCANDELA on 21 May 2012 at 09:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The Conditions Database of the LHCb experiment (CondDB) provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger, reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technolog ... More
Presented by Illya SHAPOVAL on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
We present LHCbDIRAC, an extension of the DIRAC community Grid solution to handle the LHCb specificities. The DIRAC software has been developed for many years within LHCb only. Nowadays it is a generic software, used by many scientific communities worldwide. Each community wanting to take advantage of DIRAC has to develop an extension, containing all the necessary code for handling their specifi ... More
Presented by Federico STAGNI on 22 May 2012 at 13:30
Session: Plenary
Presented by Mr. Andreas Joachim PETERS on 23 May 2012 at 11:00
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
Shine is the new offline software framework of the NA61/SHINE experiment at the CERN SPS for data reconstruction, analysis and visualization as well as detector simulation. To allow for a smooth migration to the new framework, as well as to facilitate its validation, our transition strategy foresees to incorporate considerable parts of the old NA61/SHINE reconstruction chain which is based on the ... More
Presented by Oskar WYSZYNSKI on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
We recently completed a significant transition in the Open Science Grid in which we moved our software distribution mechanism from the useful but niche system called Pacman to a community-standard native packaged system (RPM). Despite the challenges, this migration was both useful and necessary. In this paper we explore some of the lessons learned during this transition, lessons which we believe a ... More
Presented by Alain ROY on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The recent buzzword in the IT world is NoSQL. Major players such as Facebook, Yahoo and Google have widely adopted different NoSQL solutions for their needs. Horizontal scalability, a flexible data model and management of big data volumes are only a few advantages of NoSQL. In the CMS experiment we use several of them in the production environment. Here we present CMS projects based on NoSQL solutions, the ... More
Presented by Valentin KUZNETSOV on 24 May 2012 at 13:30
Session: Plenary
on 24 May 2012 at 11:00
Session: Plenary
on 25 May 2012 at 11:30
Type: Poster Session: Poster Session
Track: Collaborative tools (track 6)
Communication and collaboration using stored digital media has recently garnered increasing interest in many facets of business, government and education. This is primarily due to improvements in the quality of cameras and the speed of computers. Digital media serves as an effective alternative in the absence of physical interaction between multiple individuals. Video recordings that allow for ... More
Presented by Dr. Daniel DETONE on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Long-term preservation of scientific data represents a challenge to all experiments. Even after an experiment has reached its end of life, it may be necessary to reprocess the data. There are two aspects of long-term data preservation: "data" and "software". While data can be preserved by migration, it is more complicated for the software. Preserving source code and binaries is not enough; the ful ... More
Presented by Dag LARSEN, Artem HARUTYUNYAN on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
In 2008 CERN launched a project aiming at virtualising the batch farm. It strictly distinguishes between infrastructure and guests, and is thus able to serve, along with its initial batch farm target, as an IaaS infrastructure which can be exposed to users. The system was put into production at small scale at Christmas 2010, and had grown to almost 500 virtual machine slots by spring 2011. ... More
Presented by Dr. Ulrich SCHWICKERATH on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
MARDI-Gross builds on previous work with the LIGO collaboration, using the ATLAS experiment as a use case to develop a toolkit on data management for people making proposals for large High Energy Physics experiments, as well as experiments such as LIGO and LOFAR, and also for those assessing such proposals. The toolkit will also be of interest to those active in data management for new and cur ... More
Presented by Prof. Roger JONES on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
Within the Muon Ionization Cooling Experiment (MICE), the MICE Analysis User Software (MAUS) framework performs both online analysis of live data and detailed offline data analysis, simulation, and accelerator design. The MAUS Map-Reduce API parallelizes computing in the control room, ensures that code can be run both offline and online, and displays plots for users in an easily extendable manner. ... More
Presented by Michael JACKSON on 24 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
The Muon Ionization Cooling Experiment (MICE) has developed the MICE Analysis User Software (MAUS) to simulate and analyse experimental data. It serves as the primary codebase for the experiment, providing for online data quality checks and offline batch simulation and reconstruction. The code is structured in a Map-Reduce framework to allow parallelization whether on a personal machine or in the ... More
Presented by Durga RAJARAM on 24 May 2012 at 15:10
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
In this paper we present a new tool for tuning and validation of Monte Carlo (MC) generators, essential in order to have predictive power in the area of high-energy physics (HEP) experiments. With the first year of LHC data being now analyzed, the need for reliable MC generators is very clear. The tool, called MCPLOTS, is composed of a browsable repository of plots comparing HEP event generators t ... More
Presented by Witold POKORSKI on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Parallel job execution in the grid environment using MPI technology presents a number of challenges for the sites providing this support. Multiple flavors of the MPI libraries, shared working directories required by certain applications, special settings for the batch systems make the MPI support difficult for the site managers. On the other hand the workload management systems with pilot jobs bec ... More
Presented by Ms. Vanessa HAMAR on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Collaborative tools (track 6)
Since 2009, the CMS experiment at LHC has provided an intensive training on the use of Physics Analysis Tools (PAT), a collection of common analysis tools designed to share expertise and maximise the productivity in the physics analysis. More than ten one-week courses preceded by prerequisite studies have been organized and the feedback from the participants has been carefully analysed. This note ... More
Presented by Prof. Sudhir MALIK on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
This paper presents the current architecture of the control and safety systems designed and implemented for the Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC). A complete evaluation of both systems performance during all CMS physics data taking periods is reported, with emphasis on how software and hardware solutions have been us ... More
Presented by Diogo Raphael DA SILVA DI CALAFIORI on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The increase of luminosity in the LHC during its second year of operation (2011) was achieved by delivering more protons per bunch and increasing the number of bunches. This change of running conditions required some changes in the LHCb Computing Model. The consequences of the higher pileup are a bigger event size and processing time but also the possibility for LHCb to propose and get approved a ... More
Presented by Dr. Stefan ROISER on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The LHCONE project aims to provide effective entry points into a network infrastructure that is intended to be private to the LHC Tiers. This infrastructure is not intended to replace the LHCOPN, which connects the highest tiers, but rather to complement it, addressing the connection needs of the LHC Tier-2 and Tier-3 sites which have become more important in the new less-hierarchical computing mo ... More
Presented by Dr. Daniele BONACORSI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
As elsewhere in today’s computing environment, virtualisation is becoming prevalent in the database management area where HEP laboratories, and industry more generally, seek to deliver improved services whilst simultaneously increasing efficiency. We present here our solutions for the effective management of virtualised databases, building on over five years of experience dating back to studies ... More
Presented by Anton TOPUROV on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The creation and maintenance of a Virtual Machine (VM) is a complex process. To build the VM image, thousands of software packages have to be collected, disk images suitable for different hypervisors have to be built, integrity tests must be performed, and eventually the resulting images have to become available for download. In the meanwhile, software updates for the older versions must be publis ... More
Presented by Ioannis CHARALAMPIDIS on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
Installation and post-installation mechanisms are critical points for the computing centres to streamline production services. Managing hundreds of nodes is a challenge for any computing centre and there are many tools able to cope with this problem. The desired features includes the ability to do incremental configuration (no need to bootstrap the service to make it manageable by the tool), simpl ... More
Presented by Dr. Xavier ESPINAL CURULL on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
ALICE (A Large Ion Collider Experiment) is one of the big LHC (Large Hadron Collider) experiments at CERN in Geneve, Switzerland. The experiment is composed of 18 sub-detectors controlled by an integrated Detector Control System (DCS) that is implemented using the commercial SCADA package PVSS. The DCS includes over 1200 network devices, over 1,000,000 input channels and numerous custom made so ... More
Presented by Mateusz LECHMAN on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The continued progression of Moore’s law has led to many-core platforms becoming easily accessible commodity equipment. New opportunities that arose from this change have also brought new challenges: harnessing the raw potential of computation of such a platform is not always a straightforward task. This paper describes practical experience coming out of the work with many-core systems at CERN o ... More
Presented by Andrzej NOWAK on 22 May 2012 at 13:30
Type: Poster Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
The Bayesian Analysis Toolkit (BAT) is a C++ library designed to analyze data through the application of Bayes' theorem. For parameter inference, it is necessary to draw samples from the posterior distribution within the given statistical model. At its core, BAT uses an adaptive Markov Chain Monte Carlo (MCMC) algorithm. As an example of a challenging task, we consider the analysis of rare B ... More
Presented by Frederik BEAUJEAN on 21 May 2012 at 14:20
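The adaptive MCMC sampling at the core of BAT can be illustrated with a minimal random-walk Metropolis sketch. This is a toy under stated assumptions, not BAT's C++ implementation: BAT runs multiple chains and adapts the full proposal covariance, while here only a scalar proposal width is nudged toward a target acceptance rate.

```python
import math
import random

def adaptive_metropolis(log_posterior, x0, n_steps, target_accept=0.35, seed=1):
    """1D random-walk Metropolis with a crude proposal-scale adaptation.

    Illustrative only: function and parameter names are invented for
    this sketch and do not mirror BAT's API.
    """
    rng = random.Random(seed)
    x, logp = x0, log_posterior(x0)
    scale = 1.0
    samples, accepted = [], 0
    for i in range(1, n_steps + 1):
        prop = x + rng.gauss(0.0, scale)
        logp_prop = log_posterior(prop)
        # Metropolis accept rule: accept with probability min(1, p'/p)
        if math.log(rng.random()) < logp_prop - logp:
            x, logp = prop, logp_prop
            accepted += 1
        samples.append(x)
        # every 100 steps, widen or narrow the proposal toward the target rate
        if i % 100 == 0:
            rate = accepted / i
            scale *= 1.1 if rate > target_accept else 0.9
    return samples

# usage: sample a standard normal posterior (log p = -x^2/2 up to a constant)
samples = adaptive_metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=5000)
mean = sum(samples) / len(samples)
```

In a real analysis the adaptation phase would be discarded as burn-in before computing posterior summaries.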
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
Three dimensional image reconstruction in medical imaging applies sophisticated filter algorithms to linear trajectories of coincident photon pairs in PET. The goal is to reconstruct an image of a source density distribution. In a similar manner, tracks in particle physics originate from vertices that need to be distinguished from background track combinations. We investigate if methods from med ... More
Presented by Stephan G. HAGEBOECK on 24 May 2012 at 14:45
Type: Parallel Session: Online Computing
Track: Online Computing (track 1)
A complex running system, such as the NOvA online data acquisition, consists of a large number of distributed but closely interacting components. This paper describes a generic realtime correlation analysis and event identification engine, named Message Analyzer. Its purpose is to capture run time abnormalities and recognize system failures based on log messages from participating components. The ... More
Presented by Qiming LU on 22 May 2012 at 17:00
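The idea of correlating log messages within a time window to recognize system failures can be sketched as a tiny rule engine. The class name, rule format and example messages below are invented for illustration; the actual Message Analyzer's correlation logic is more sophisticated.

```python
import re
from collections import deque

class MessageAnalyzer:
    """Toy correlation engine: a rule fires when all of its regex
    patterns have been seen within the last `window` seconds."""

    def __init__(self, rules, window=10.0):
        self.rules = rules            # {rule_name: [regex, ...]}
        self.window = window
        self.buffer = deque()         # (timestamp, message) pairs

    def feed(self, timestamp, message):
        self.buffer.append((timestamp, message))
        # drop messages that have fallen out of the correlation window
        while self.buffer and timestamp - self.buffer[0][0] > self.window:
            self.buffer.popleft()
        fired = []
        for name, patterns in self.rules.items():
            if all(any(re.search(p, m) for _, m in self.buffer)
                   for p in patterns):
                fired.append(name)
        return fired

# hypothetical rule: a lost heartbeat followed by a buffer overflow
rules = {"dcm_timeout": [r"DCM-\d+ lost heartbeat", r"buffer overflow"]}
mon = MessageAnalyzer(rules, window=5.0)
mon.feed(0.0, "DCM-12 lost heartbeat")
alarms = mon.feed(2.0, "buffer overflow on node 7")
```

A production engine would also deduplicate alarms and escalate them to the run-control operators rather than just returning rule names.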
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
We are now in a regime where we observe a substantial number of multiple proton-proton collisions within each filled LHC bunch-crossing, and also multiple filled bunch-crossings within the sensitive time window of the ATLAS detector. This will increase further with the increased luminosity expected in the near future. Including these effects in Monte Carlo simulation poses significant computing challenges. We present a descriptio ... More
Presented by Andrew HAAS on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
Presented in this contribution are methods currently developed and used by the ATLAS collaboration to measure the performance of the primary vertex reconstruction algorithms. These methods quantify the amount of additional pile up interactions and help to identify the hard scattering process (the so called primary vertex) in the proton-proton collisions with high accuracy. The correct identificati ... More
Presented by Kirill PROKOFIEV, Dr. Andreas WILDAUER, Simone PAGAN GRISO, Federico MELONI on 24 May 2012 at 13:30
Session: Plenary
From Grid to Cloud: A Perspective
Presented by Sebastien GOASGUEN on 22 May 2012 at 10:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The ATLAS computing and data models have moved, and are still moving, away from the strict MONARC model (hierarchy) to a mesh model. This evolution of the computing model also requires the network infrastructure to evolve, enabling any Tier2 and Tier3 to connect easily to any Tier1 or Tier2. This requires some changes to the data model: a) Any site can replicate data from any other site. b) Dynamic data ... More
Presented by Dr. Santiago GONZALEZ DE LA HOZ on 22 May 2012 at 13:30
Type: Parallel Session: Online Computing
Track: Online Computing (track 1)
A novel architecture is being proposed for the data acquisition and trigger system of the PANDA experiment at the HESR facility at FAIR/GSI. The experiment will run without a hardware trigger signal and will use timestamps to correlate detector data from a given time window. The broad physics program, in combination with the high rate of 2×10^7 interactions, requires very selective filtering algorith ... More
Presented by Dr. Krzysztof KORCYL on 24 May 2012 at 17:25
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
With the LHC producing collisions at ever larger luminosity, CMS must be able to take high-quality data and process them reliably: these tasks require not only correct conditions data, but also that the datasets be promptly available. The CMS conditions infrastructure relies on many different pieces, such as hardware, networks, and services, which must be constantly monitored, and any faulty situa ... More
Presented by Salvatore DI GUIDA on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Monitoring of Grid services is essential to provide a smooth experience for users and provide fast and easy to understand diagnostics for administrators running the services. GangliARC makes use of the widely-used Ganglia monitoring tool to present web-based graphical metrics of the ARC computing element. These include statistics of running and finished jobs, data transfer metrics, as well as show ... More
Presented by David CAMERON on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The CORAL software is widely used by the LHC experiments for storing and accessing data using relational database technologies. CORAL provides a C++ abstraction layer that supports data persistency for several backends and deployment models, including local access to SQLite files, direct client access to Oracle and MySQL servers, and read-only access to Oracle through the FroNTier/Squid and CoralS ... More
Presented by Dr. Andrea VALASSI on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly ... More
Presented by Ilija VUKOTIC on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is a Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrume ... More
Presented by Marian BABIK on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The CMS offline computing system is composed of more than 50 sites and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS; the former collects metrics from sensors installed ... More
Presented by Jorge Amando MOLINA-PEREZ on 22 May 2012 at 13:30
Type: Parallel Session: Computer Facilities, Production Grids and Networking
Track: Computer Facilities, Production Grids and Networking (track 4)
Global scientific collaborations, such as ATLAS, continue to push the network requirements envelope. Data movement in this collaboration is projected to include the regular exchange of petabytes of datasets between the collection and analysis facilities in the coming years. These requirements place a high emphasis on networks functioning at peak efficiency and availability; the lack thereof could ... More
Presented by Shawn MC KEE on 21 May 2012 at 16:35
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
ALICE (A Large Ion Collider Experiment) is a dedicated heavy ion experiment at the Large Hadron Collider (LHC). The High Level Trigger (HLT) for ALICE is a powerful, sophisticated tool aimed at compressing the data volume and filtering events with desirable physics content. Several of the major detectors in ALICE are incorporated into HLT to compute real-time event reconstruction, for instance th ... More
Presented by Hege Austrheim ERDAL on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
By aggregating the storage capacity of hundreds of sites around the world, distributed data-processing platforms such as the LHC computing grid offer solutions for transporting, storing and processing massive amounts of experimental data, addressing the requirements of virtual organizations as a whole. However, from our perspective, individual workflows require a higher level of flexibility, ease ... More
Presented by Mr. Fabio HERNANDEZ on 22 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
AthenaMP is the multi-core implementation of the ATLAS software framework and allows the efficient sharing of memory pages between multiple threads of execution. This has now been validated for production and delivers a significant reduction in overall memory footprint with negligible CPU overhead. Before AthenaMP can be routinely run on the LHC Computing Grid, it must be determined how the compu ... More
Presented by Andrew John WASHBROOK on 21 May 2012 at 16:35
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware not sharing resources might significantly affect processing performance. It will be essential to effectively utili ... More
Presented by Dr. Jose HERNANDEZ CALAMA on 21 May 2012 at 17:00
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
One of the major goals of the EMI (European Middleware Initiative) project is the integration of several components of the pre-existing middleware (ARC, gLite, UNICORE and dCache) into a single consistent set of packages with uniform distributions and repositories. Those individual middleware projects have been developed in the last decade by tens of development teams and before EMI were all built ... More
Presented by Mr. Andres ABAD RODRIGUEZ on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Collaborative tools (track 6)
New types of hardware, like smartphones and tablets, are becoming more available, affordable and popular in the market. Furthermore with the advent of Web2.0 frameworks, Web3D and Cloud computing, the way we interact, produce and exchange content is being dramatically transformed. How can we take advantage of these technologies to produce engaging applications which can be conveniently used both ... More
Presented by Joao ANTUNES PEQUENAO on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The JANA framework has been deployed and in use since 2007 for development of the GlueX experiment at Jefferson Lab. The multi-threaded reconstruction framework is routinely used on machines with up to 32 cores with excellent scaling. User feedback has also helped to develop JANA into a user-friendly environment for development of reconstruction code and event playback. The basic design of JANA wi ... More
Presented by Dr. David LAWRENCE on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
Fireworks, the event-display program of CMS, was extended with an advanced geometry visualization package. ROOT's TGeo geometry is used as internal representation, shared among several geometry views. Each view is represented by a GUI list-tree widget, implemented as a flat vector to allow for fast searching, selection, and filtering by material type, node name, and shape type. Display of logical ... More
Presented by Alja MRAK TADEL, Matevz TADEL on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The NOvA experiment at Fermi National Accelerator Lab features a free-running, continuous readout system without dead time, which collects and buffers time-continuous data from over 350,000 readout channels. The raw data must be searched to correlate them with beam spill events from the NuMI beam facility; they are also analyzed in real time to identify event topologies of interest. The analysis re ... More
Presented by Andrew NORMAN on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
Newer generations of processors come with no increase in their clock frequency, and the same is true for memory chips. In order to achieve more performance, the core count is getting higher, and to feed all the cores on a chip with instructions and data, the number of memory channels must follow the same trend. Non Uniform Memory Access (NUMA) architecture allowed the CPU manufacturers to reduce ... More
Presented by Julien LEDUC on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
ROOT's graphics works mainly via the TVirtualX class (this includes both GUI and non-GUI graphics). Currently, TVirtualX has two native implementations based on the X11 and Win32 low-level APIs. To make the X11 version work on OS X we have to install the X11 server (an additional application), but unfortunately, there is no X11 for iOS and so no graphics for mobile devices from Apple - iPhone, iP ... More
Presented by Timur POCHEPTSOV on 24 May 2012 at 13:30
Session: Plenary
Presented by Artur Jerzy BARCZYK on 24 May 2012 at 10:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The read-out signals from individual pixels on planar semiconductor sensors are grouped into clusters to reconstruct the location where a charged particle passed through the sensor. The resolution given by individual pixel sizes is significantly improved by using the information from the charge sharing between pixels. Such analog cluster creation techniques have been used by the ATLAS experiment fo ... More
Presented by Andreas SALZBURGER, Giacinto PIACQUADIO on 24 May 2012 at 13:30
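The gain from charge sharing can be seen in a simple charge-weighted centroid, sketched below for a 1D cluster. The pixel pitch and hit format are assumptions for the example; the ATLAS analog clustering uses more refined interpolation than a plain centroid.

```python
def cluster_position(hits, pitch=0.05):
    """Charge-weighted centroid of a 1D pixel cluster.

    `hits` is a list of (pixel_index, charge) pairs. Using the analog
    charge improves on the binary (geometric-centre) resolution of
    pitch/sqrt(12). Illustrative sketch, not the ATLAS clustering code.
    """
    total = sum(q for _, q in hits)
    centroid = sum(i * q for i, q in hits) / total   # in pixel units
    return centroid * pitch                          # convert to mm

# two pixels sharing charge 3:1 -> centroid at pixel index 0.25
pos = cluster_position([(0, 3.0), (1, 1.0)], pitch=0.05)
```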
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The rate of performance improvements of the LHC at CERN has had strong influence on the characteristics of the monitoring tools developed for the experiments. We present some of the latest additions to the suite of Web Based Monitoring services for the CMS experiment, and explore the aspects that address the roughly 20-fold increase in peak instantaneous luminosity over the course of 2011. ... More
Presented by Irakli CHAKABERIA on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
GENFIT is a framework for track fitting in nuclear and particle physics. Its defining feature is its conceptual independence of the specific detector and field geometry, achieved by the modular design of the software. A track in genfit is a collection of detector hits and a collection of track representations. It can contain hits from different detector types (planar hits, space points, isochron ... More
Presented by Mr. Felix Valentin BöHMER on 24 May 2012 at 13:30
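The track model described above (a collection of heterogeneous detector hits plus one or more track representations) can be mirrored in a few lines. All class and field names below are invented for the sketch and do not reproduce genfit's actual C++ API.

```python
from dataclasses import dataclass, field

@dataclass
class PlanarHit:          # measurement on a detector plane
    u: float
    v: float

@dataclass
class SpacePoint:         # 3D measurement, e.g. from a TPC
    x: float
    y: float
    z: float

@dataclass
class TrackRep:
    particle: str         # particle hypothesis, e.g. "pi+" or "mu-"

@dataclass
class Track:
    """A track owns hits of mixed types plus several representations,
    so the same hits can be fitted under different hypotheses."""
    hits: list = field(default_factory=list)
    reps: list = field(default_factory=list)

    def add_hit(self, hit):
        self.hits.append(hit)

track = Track(reps=[TrackRep("pi+"), TrackRep("mu-")])
track.add_hit(PlanarHit(1.2, -0.4))
track.add_hit(SpacePoint(10.0, 2.0, 3.5))
n_hits, n_reps = len(track.hits), len(track.reps)
```

Keeping hits and representations separate is what decouples the fitter from the detector geometry: the fit iterates over hits through a common interface regardless of their concrete type.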
Session: Plenary
Presented by Mr. Jeff HAMMERBACHER on 22 May 2012 at 11:00
Session: Plenary
Presented by Ian FISK on 22 May 2012 at 09:00
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
In the last few years, new requirements have been received for visualization of monitoring data: advanced graphics, flexibility in configuration and decoupling of the presentation layer from the monitoring repository. Lemonweb is the data visualization component of the LHC Era Monitoring (Lemon) system. Lemonweb consists of two sub-components: a data collector and a web visualization interface. ... More
Presented by Ivan FEDORKO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The EU-funded project EMI, now in its second year, aims at providing unified, standardized, easy-to-install software for distributed computing infrastructures. CREAM is one of the middleware products in the EMI distribution: it implements a Grid job management service which allows the submission, management and monitoring of computational jobs to local resource management system ... More
Presented by Mr. Massimo SGARAVATTO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
New developments on visualization drivers in Geant4 software toolkit
Presented by Mr. Laurent GARNIER on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
ROOT, a data analysis framework, provides advanced numerical and statistical methods via the ROOT Math work package. Now that the LHC experiments have started to analyze their data and produce physics results, we have gained experience in how these numerical methods are used, and the libraries have been consolidated, also taking into account the feedback received. At the same time, new f ... More
Presented by Lorenzo MONETA on 24 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
We present our effort for the creation of a new software library of geometrical primitives, which are used for solid modelling in Monte Carlo detector simulations. We plan to replace and unify current geometrical primitive classes in the CERN software projects Geant4 and ROOT with this library. Each solid is represented by a C++ class with methods suited for measuring distances of particles from t ... More
Presented by Marek GAYER on 24 May 2012 at 13:55
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Since several years the LHC experiments rely on the WLCG Service Availability Monitoring framework (SAM) to run functional tests on their distributed computing systems. The SAM tests have become an essential tool to measure the reliability of the Grid infrastructure and to ensure reliable computing operations, both for the sites and the experiments. Recently the old SAM framework was replaced wit ... More
Presented by Dr. Andrea SCIABA, Alessandro DI GIROLAMO on 22 May 2012 at 13:30
Type: Parallel Session: Collaborative tools
Track: Collaborative tools (track 6)
In recent times, we have witnessed an explosion of video initiatives in industry worldwide. Several advancements in video technology are currently improving the way we interact and collaborate. These advancements are shaping trends and user expectations: any device on any network can be used to collaborate, in most cases with high overall quality. To cope with this technology progress ... More
Presented by Marek DOMARACKY on 22 May 2012 at 13:55
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
LHC experiments at CERN and worldwide utilize WLCG resources and middleware components to perform distributed computing tasks. One of the most important tasks is reliable file replication. It is a complex problem, suffering from transfer failures, disconnections, transfer duplication, server and network overload, differences in storage systems, etc. To address these problems, EMI and gLite have pr ... More
Presented by Mr. Zsolt MOLNáR on 24 May 2012 at 17:00
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The CMS experiment has to move Petabytes of data among dozens of computing centres with low latency in order to make efficient use of its resources. Transfer operations are well established to achieve the desired level of throughput, but operators lack a system to identify early on transfers that will need manual intervention to reach completion. File transfer latencies are sensitive to the und ... More
Presented by Natalia RATNIKOVA on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
Data analyses based on the evaluation of likelihood functions are commonly used in the high-energy physics community for fitting statistical models to data samples. The likelihood functions require the evaluation of several probability density functions on the data, which is accomplished using loops. For these evaluation operations, the standard accuracy is double-precision floating point. The probabilit ... More
Presented by Felice PANTALEO on 24 May 2012 at 13:30
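The per-event evaluation loop described above looks schematically like the Gaussian negative log-likelihood below, computed in double precision. The model, data and grid scan are invented for illustration; real fits use a minimizer rather than a grid.

```python
import math

def nll(data, mu, sigma):
    """Negative log-likelihood of a Gaussian model, with the
    straightforward one-PDF-evaluation-per-event loop.

    Summing logs (instead of multiplying densities) avoids floating
    point underflow when the dataset has many events.
    """
    norm = math.log(sigma * math.sqrt(2.0 * math.pi))
    total = 0.0
    for x in data:                                   # loop over events
        total += 0.5 * ((x - mu) / sigma) ** 2 + norm
    return total

# toy fit: scan mu on a grid and keep the minimum
data = [0.1, -0.3, 0.2, 0.0]
best = min((nll(data, mu / 10.0, 1.0), mu / 10.0) for mu in range(-10, 11))
best_mu = best[1]
```

This inner loop is exactly the hot spot that vectorization or GPU offloading targets, since every iteration is independent of the others.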
Type: Poster Session: Poster Session
Track: Collaborative tools (track 6)
Large distributed computing collaborations, such as the WLCG, face many issues when it comes to providing a working grid environment for their users.  One of these is exchanging tickets between various ticketing systems in use by grid collaborations. Ticket systems such as Footprints, RT, Remedy, and ServiceNow all have different schema that must be addressed in order to provide a reliable exchan ... More
Presented by Mr. Kyle GROSS on 24 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
LHCb is one of the 4 experiments at the LHC accelerator at CERN. LHCb has approximately 1600 8-core PCs for processing the High Level Trigger (HLT) during physics data acquisition. During periods when data acquisition is not required, or when the resources needed for data acquisition are reduced, such as accelerator Machine Development (MD) periods or technical shutdowns, most of these PCs are idle or v ... More
Presented by Luis GRANADO CARDOSO on 21 May 2012 at 17:50
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
Neutrino flavor oscillation is characterized by three mixing angles. The Daya Bay reactor antineutrino experiment is designed to determine the last unknown mixing angle $\theta_{13}$. The experiment is located in southern China, near the Daya Bay nuclear power plant. Eight identical liquid scintillator detectors are being installed in three experimental halls, to detect antineutrinos released in n ... More
Presented by Miao HE on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The STAR Experiment further exploits scalable message-oriented model principles to achieve a high level of control over online data streams. In this report we present an AMQP-powered Message Interface and Reliable Architecture framework (MIRA), which allows STAR to orchestrate the activities of Metadata Collection, Monitoring, Online QA and several Run-Time / Data Acquisition system components in ... More
Presented by Dmitry ARKHIPKIN on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The ALICE High Level Trigger (HLT) is a dedicated real-time system for on-line event reconstruction and triggering. Its main goal is to reduce the large volume of raw data that is read out from the detector systems, up to 25 GB/s, by an order of magnitude to fit within the available data acquisition bandwidth. This is accomplished by a combination of data compression and triggering. When a reconst ... More
Presented by Artur SZOSTAK on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each of the central servers at CERN, called a Frontier Launchpad, uses Tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, loca ... More
Presented by Dave DYKSTRA on 24 May 2012 at 13:30
Type: Parallel Session: Online Computing
Track: Online Computing (track 1)
The data-acquisition (DAQ) system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first level hardware trigger. Assembled events are made available to the high-level trigger (HLT), which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s ... More
Presented by Hannes SAKULIN on 21 May 2012 at 16:35
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
Reading and writing data on a disk-based high-capacity storage system has long been a troublesome task. While disks handle sequential reads and writes well, when the two are interleaved, performance drops off rapidly due to the time required to move the disk's read-write head(s) to a different position. An obvious solution to this problem is to replace the disks with an alternative storage technolog ... More
Presented by Simon William FAYER, Stuart WAKEFIELD on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
While the model for a Tier2 is well understood and implemented within the HEP community, a refined design for analysis-specific sites has not been agreed upon as clearly. We describe the solutions adopted at INFN Pisa, the biggest Tier2 in the Italian HEP community. A standard Tier2 infrastructure is optimized for Grid CPU and storage access, while a more interactively oriented use of the ... More
Presented by Dr. Giuseppe BAGLIESI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
Today's computing elements for software-based high level trigger (HLT) processing are based on nodes with multiple cores. Using process-based parallelisation to filter particle collisions from the LHCb experiment on such nodes leads to wasteful duplication of read-only memory and hence a significant cost increase. In the following an approach is presented to fork multiple identical processes from a ... More
Presented by Markus FRANK on 24 May 2012 at 13:30
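The bootstrap-then-fork approach can be sketched with POSIX fork(): the parent initializes once, and the children share its read-only pages copy-on-write instead of each paying for a private copy. Function names and the toy payload are invented; this is not the LHCb code.

```python
import os

def bootstrap():
    # stand-in for an expensive initialisation (geometry, conditions, ...)
    # whose result is only ever read by the workers
    return {"geometry": list(range(1000))}

def run_worker(state, worker_id):
    # the child only reads `state`, so its memory pages stay shared
    # with the parent under copy-on-write
    return sum(state["geometry"]) + worker_id

state = bootstrap()
pids = []
for wid in range(2):
    pid = os.fork()
    if pid == 0:
        # child: do the work and exit immediately; the exit code is
        # only used here to make the result observable in the parent
        os._exit(run_worker(state, wid) % 256)
    pids.append(pid)

# parent: reap the children and recover their (truncated) results
statuses = [os.waitpid(p, 0)[1] >> 8 for p in pids]
```

In a real HLT farm the forked workers would of course return events over IPC rather than exit codes; the point of the sketch is that `bootstrap()` runs once, not once per worker.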
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
The Python programming language allows objects and classes to respond dynamically to the execution environment. Most of this, however, is made possible through language hooks which by definition cannot be optimized and thus tend to be slow. The PyPy implementation of Python includes a tracing just-in-time compiler (JIT), which allows similar dynamic responses but at the interpreter-, rather than ... More
Presented by Wim LAVRIJSEN on 24 May 2012 at 17:50
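One example of the language hooks meant above is `__getattr__`: it runs on every failed attribute lookup, so a conventional interpreter cannot optimize it away, while a tracing JIT can specialize the hot path once the trace stabilises. The class below is a toy for illustration, not PyPy or ROOT binding code.

```python
class Branch:
    """Toy proxy that resolves attributes dynamically, the way a data
    binding might map attribute names onto stored values at run time."""

    def __init__(self, values):
        self._values = values

    def __getattr__(self, name):
        # called only when normal lookup fails: the dynamic slow path
        # that static analysis cannot see through, but a trace can
        if name.startswith("slot_"):
            return self._values[int(name[5:])]
        raise AttributeError(name)

b = Branch([10, 20, 30])
# every access below goes through the __getattr__ hook
total = sum(getattr(b, "slot_%d" % i) for i in range(3))
```

On CPython each `getattr` call here pays the full dynamic-dispatch cost; under a tracing JIT the loop can compile down to plain indexed loads once the attribute-name pattern has been observed.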
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
DESY is one of the largest WLCG Tier-2 centres for ATLAS, CMS and LHCb world-wide and the home of a number of global VOs. At the DESY-HH Grid site more than 20 VOs are supported by one common Grid infrastructure to allow for the opportunistic usage of federated resources. The VOs share roughly 4800 job slots in 800 physical CPUs of 400 hosts operated by a TORQUE/MAUI batch system. On Tier-2 si ... More
Presented by Andreas GELLRICH on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The DAQ (Data Acquisition System) facilities handle the data flow from the detectors electronics up to the mass storage. The DAQ system is based on a large farm of commodity hardware consisting of more than 600 dev ... More
Presented by Sylvain CHAPELAND on 24 May 2012 at 13:30
Type: Parallel Session: Computer Facilities, Production Grids and Networking
Track: Computer Facilities, Production Grids and Networking (track 4)
Large-volume physics data storage at CERN is based on two services, CASTOR and EOS: * CASTOR - in production for many years - now handles the Tier0 activities (including WAN data distribution), as well as all tape-backed data; * EOS - in production since 2011 - supports the fast-growing need for high-performance, low-latency (i.e. disk-only) data access for user analysis. In 2011, a large part ... More
Presented by Jan IVEN, Massimo LAMANNA on 21 May 2012 at 14:45
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The PanDA Production and Distributed Analysis System is the ATLAS workload management system for processing user analysis, group analysis and production jobs. In 2011, more than 1400 users submitted jobs through PanDA to the ATLAS grid infrastructure. The system processes more than 2 million analysis jobs per week. Analysis jobs are routed to sites based on the availability of relevant data ... More
Presented by Tadashi MAENO on 24 May 2012 at 17:50
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
With the advent of the analysis phase of LHC data processing, interest in PROOF technology has increased considerably. While setting up a simple PROOF cluster for basic usage is reasonably straightforward, exploiting the several new functionalities added recently may be complicated. PEAC, standing for PROOF Enabled Analysis Cluster, is a set of tools aiming to facilitate the setup and ma ... More
Presented by Gerardo GANIS on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
On behalf of the PLUME Technical Committee <http://projet-plume.org>. PLUME - FEATHER is a non-profit project created to Promote economicaL, Useful and Maintained softwarE For the Higher Education And THE Research communities. The site references software, mainly Free/Libre Open Source Software (FLOSS), from French universities and national research organisations (CNRS, ... More
Presented by Dr. Dirk HOFFMANN on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The production of simulated samples for physics analysis at the LHC represents a considerable organizational challenge, because it requires the management of several thousand different workflows. The submission of a workflow to the grid-based computing infrastructure is just the end point of a long decision process: definition of the general characteristics of a given set of coherent samples, called ... More
Presented by Dr. Fabio COSSUTTI on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
Data analyses based on the evaluation of likelihood functions are commonly used in the high energy physics community for fitting statistical models to data samples. These procedures require many evaluations of these functions and can be very time-consuming, so fast evaluation becomes particularly important. This paper describes a parallel implementation that allows one to run c ... More
Presented by Julien LEDUC, Felice PANTALEO on 24 May 2012 at 13:30
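The data-parallel structure of such an evaluation can be sketched in a few lines: split the sample into chunks, evaluate each chunk's partial negative log-likelihood in a separate process, and add the results. This is a standalone illustration (the Gaussian model and the chunking scheme are invented, not the implementation described in the paper):

```python
import math
from multiprocessing import Pool

def partial_nll(args):
    """Negative log-likelihood contribution of one data chunk,
    for an illustrative Gaussian model."""
    chunk, mu, sigma = args
    norm = math.log(sigma * math.sqrt(2.0 * math.pi))
    return sum(0.5 * ((x - mu) / sigma) ** 2 + norm for x in chunk)

def parallel_nll(sample, mu, sigma, n_workers=4):
    """Split the sample, evaluate each chunk's partial sum in a
    separate process, and add the partial results."""
    size = max(1, len(sample) // n_workers)
    chunks = [sample[i:i + size] for i in range(0, len(sample), size)]
    with Pool(n_workers) as pool:
        return sum(pool.map(partial_nll, [(c, mu, sigma) for c in chunks]))
```

Because the per-event terms are independent, the sum decomposes exactly over chunks; only the floating-point summation order differs from a serial evaluation.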
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The CBM experiment is a future fixed-target experiment at FAIR/GSI (Darmstadt, Germany). It is being designed to study heavy-ion collisions at extremely high interaction rates. The main tracking detectors are the Micro-Vertex Detector (MVD) and the Silicon Tracking System (STS). Track reconstruction in these detectors is a very complicated task because of several factors. Up to 1000 tracks per centr ... More
Presented by Mr. Igor KULAKOV on 24 May 2012 at 13:30
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
Modern heavy-ion experiments operate with very high data rates and track multiplicities. Because of time constraints the speed of the reconstruction algorithms is crucial both for the online and offline data analysis. Parallel programming is considered nowadays as one of the most efficient ways to increase the speed of event reconstruction. Reconstruction of short-lived particles is one of the ... More
Presented by Mr. Igor KULAKOV on 24 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
Chip multiprocessors are going to support massive parallelism, providing further processing capacity by adding more and more physical and logical cores. Unfortunately, the growing number of cores comes along with slower advances in the speed and size of the main memory, the cache hierarchy, the front-side bus and processor interconnections. Parallelism can only result in performance gain if ... More
Presented by Stefan LOHN on 24 May 2012 at 17:25
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
An algorithm is presented which reconstructs helical tracks in a solenoidal magnetic field using a generalized Hough Transform. While the problem of reconstructing helical tracks from the primary vertex can be converted to the problem of reconstructing lines (with 3 parameters), reconstructing secondary tracks requires a full helix to be used (with 5 parameters). The Hough transform memory requi ... More
Presented by Dr. Alan DION on 24 May 2012 at 13:30
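The voting scheme behind the Hough transform can be sketched for the simplest case of straight lines in the (theta, rho) parametrisation (a standalone illustration; the binning and hit positions are invented). Helix reconstruction, as described above, extends the same accumulator idea to 3 or 5 parameters:

```python
import math

def hough_peak(hits, n_theta=180, rho_max=20.0, n_rho=200):
    """Each hit votes for every (theta, rho) cell consistent with it;
    a track shows up as a peak in the accumulator."""
    acc = {}
    for x, y in hits:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int((rho + rho_max) / (2.0 * rho_max) * n_rho)
            acc[(t, r)] = acc.get((t, r), 0) + 1
    best = max(acc, key=acc.get)
    return best, acc[best]                # ((theta bin, rho bin), votes)
```

The memory cost the abstract alludes to is visible here: the accumulator grows with the product of the bin counts per parameter, which is why going from 3 line parameters to 5 helix parameters is expensive.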
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
A pattern recognition software for a continuously operating high rate Time Projection Chamber with Gas Electron Multiplier amplification (GEM-TPC) has been designed and tested. A track-independent clustering algorithm delivers space points. A true 3-dimensional track follower combines them to helical tracks, without constraints on the vertex position. Fast helix fits, based on a conformal map ... More
Presented by Johannes RAUCH on 24 May 2012 at 13:55
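The conformal map mentioned above relies on the fact that (x, y) -> (u, v) = (x, y)/(x^2 + y^2) sends circles through the origin to straight lines, so a circle (helix projection) fit reduces to a fast linear fit. A standalone numerical check of this property (the circle radius and sample angles are invented):

```python
import math

def conformal(x, y):
    """Map (x, y) to (u, v) = (x, y) / (x^2 + y^2)."""
    r2 = x * x + y * y
    return x / r2, y / r2

# Points on a circle of radius R centred at (R, 0): it passes through
# the origin, so its conformal image is the straight line u = 1/(2R).
R = 3.0
points = [(R + R * math.cos(t), R * math.sin(t))
          for t in (0.3, 0.8, 1.4, 2.1, 2.7)]
mapped = [conformal(x, y) for x, y in points]
```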
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The CERN Virtual Machine (CernVM) Software Appliance is a project developed at CERN with the goal of allowing the execution of the experiment's software on different operating systems in a way that is easy for the users. To achieve this it makes use of virtual machine images consisting of a JeOS (Just Enough Operating System) Linux image, bundled with CVMFS, a distributed file system for software. This ... More
Presented by Stephen GOWDY on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
While, historically, Grid Storage Elements have relied on semi-proprietary protocols for data transfer (GridFTP for site-to-site transfers, and rfio/dcap/others for local transfers), the rest of the world has not stood still in providing its own solutions to data access. dCache, DPM and StoRM all now support access via the widely implemented HTTP/WebDAV standard, and dCache and DPM both support NFS4.1/p ... More
Presented by Sam SKIPSEY on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
In 2011 the LHC provided excellent data: the integrated luminosity of about 5 fb-1 was more than expected. The price for this huge data set is in-time and out-of-time pileup, additional soft events overlaid on top of the interesting event. The reconstruction software is very sensitive to these additional particles in the event, as the reconstruction time increases due to increased combinat ... More
Presented by Rolf SEUSTER on 24 May 2012 at 13:30
Type: Parallel Session: Online Computing
Track: Online Computing (track 1)
The ATLAS trigger has been used very successfully to collect collision data during 2009-2011 LHC running at centre of mass energies between 900 GeV and 7 TeV. The three-level trigger system reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of about 300 Hz. The first level uses custom electronics to reject most background collisions, in less than ... More
Presented by Diego CASADEI on 21 May 2012 at 14:20
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The CMS experiment has been designed with a 2-level trigger system: the Level 1 Trigger, implemented using FPGA and custom ASIC technology, and the High Level Trigger (HLT), implemented running a streamlined version of the CMS offline reconstruction software on a cluster of commercial rack-mounted computers, comprising thousands of CPUs. The design of a software trigger system requires a tradeoff ... More
Presented by Andrea BOCCI on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. The CMS experiment relies on the File Transfer Service (FTS) for data distribution, a low-level data movement service responsible for moving sets of files from one site to another while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 ce ... More
Presented by José FLIX on 22 May 2012 at 13:30
Session: Plenary
Presented by Mr. Forrest NORROD on 21 May 2012 at 11:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
Historically, HEP event information for final analysis is stored in Ntuples or ROOT Trees and processed using ROOT I/O, usually resulting in a set of histograms or tables. Here we present an alternative data processing framework, leveraging the Protocol Buffer open-source library, developed and used by Google Inc. for loosely coupled interprocess communication and serialization. We save ... More
Presented by Johannes EBKE on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
Given the abundance of geometry models available within the HENP community, long-running experiments face a daunting challenge: how to migrate legacy GEANT3-based detector geometries to new technologies, such as the ROOT/TGeo framework [1]. One approach, entertained by the community for some time, is to introduce a level of abstraction: implementing the geometry in a higher-order language in ... More
Presented by Dr. Jason WEBB on 24 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
PROOF on Demand (PoD) is a tool-set, which dynamically sets up a PROOF cluster at a user’s request on any resource management system (RMS). It provides a plug-in based system, in order to use different job submission front-ends. PoD is currently shipped with gLite, LSF, PBS (PBSPro/OpenPBS/Torque), Grid Engine (OGE/SGE), Condor, LoadLeveler, and SSH plug-ins. It makes it possible just within a ... More
Presented by Anar MANAFOV on 21 May 2012 at 17:25
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
This paper describes a user monitoring framework for very large data management systems that maintain high numbers of data movement transactions. The proposed framework prescribes a method for generating meaningful information from collected tracing data that allows the data management system to be queried on demand for specific user usage patterns with respect to source and destination locations, p ... More
Presented by Vincent GARONNE on 22 May 2012 at 13:30
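The kind of on-demand query over collected tracing data can be sketched as a simple aggregation (a standalone illustration; the record fields and site names are invented, not the framework's actual schema):

```python
from collections import Counter

def usage_patterns(traces):
    """Reduce raw transfer traces to counts keyed by
    (user, source, destination)."""
    return Counter((t["user"], t["src"], t["dst"]) for t in traces)

def top_route(traces, user):
    """Most frequent (source, destination) pair for one user."""
    counts = usage_patterns(traces)
    routes = Counter()
    for (u, src, dst), n in counts.items():
        if u == user:
            routes[(src, dst)] += n
    return routes.most_common(1)[0] if routes else None
```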
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
Physics models and algorithms operating in the condensed transport scheme - multiple scattering and energy loss of charged particles - play a critical role in the simulation of energy deposition in detectors. Geant4 algorithms pertinent to this domain involve a number of parameters and physics modeling approaches, which have evolved in the course of the years. Results in the literature document ... More
Presented by Gabriela HOFF on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
Dark energy is one of the most intriguing questions in the field of particle physics and cosmology. We expect first light of the Hyper Suprime-Cam (HSC) at the Subaru Telescope on Mauna Kea on the island of Hawaii in 2012. HSC will measure the shapes of billions of galaxies precisely to construct a 3D map of the dark matter in the universe, characterizing the properties of dark energy. We will d ... More
Presented by Prof. Nobu KATAYAMA on 24 May 2012 at 13:30
Type: Parallel Session: Collaborative tools
Track: Collaborative tools (track 6)
Preserving data from past experiments and preserving the ability to perform analysis with old data is of growing importance in many domains of science, including High Energy Physics (HEP). A study group on this issue, DPHEP, has been established in this field to provide guidelines and a structure for international collaboration on data preservation projects in HEP. This contribution aims ... More
Presented by Yves KEMP on 22 May 2012 at 14:20
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The data collected by the LHC experiments are unique and present an opportunity and a challenge for a long-term preservation and re-use. The CMS experiment is defining a policy for the data preservation and access to its data and is starting the implementation of the policy. This note describes the driving principles of the policy and summarises the actions and activities which are planned for its ... More
Presented by Kati LASSILA-PERINI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
C++11 is a new standard for the C++ language that includes several additions to the core language and that extends the C++ standard library. New features, such as move semantics, are expected to bring performance benefits and as soon as these benefits have been demonstrated, it will undoubtedly become widely adopted in the development of HEP code. However it will be shown that this may well be ac ... More
Presented by Axel NAUMANN on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
In November 2009, after 15 years of design and installation, the ALICE experiment started to detect and record the first collisions produced by the LHC. It has been collecting hundreds of millions of events ever since, with both proton-proton and heavy-ion collisions. The future scientific programme of ALICE has been refined following the first year of data taking. The physics targeted beyond 2016 w ... More
Presented by Mr. Pierre VANDE VYVRE on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Identity management infrastructure has been a key work area for the Open Science Grid (OSG) security team for the past year. The progress of web-based authentication protocols such as OpenID and SAML, and of scientific federations such as InCommon, prompted OSG to evaluate its current identity management infrastructure and propose ways to incorporate new protocols and methods. For the couple of y ... More
Presented by Mine ALTUNAY on 22 May 2012 at 13:30
Type: Parallel Session: Collaborative tools
Track: Collaborative tools (track 6)
Project management tools like Trac are commonly used within the open-source community to coordinate projects. The Muon Ionization Cooling Experiment (MICE) uses the project management web application Redmine to host mice.rl.ac.uk. Many groups within the experiment have a Redmine project: analysis, computing and software (including offline, online, controls and monitoring, and database subgroups) ... More
Presented by Linda CONEY on 22 May 2012 at 15:10
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The ATLAS experiment at the LHC collider recorded more than 3 fb-1 of pp collision data at the center-of-mass energy of 7 TeV by September 2011. The recorded data are promptly reconstructed in two steps at a large computing farm at CERN to provide fast access to high-quality data for physics analysis. In the first step a subset of the collision data corresponding to 10 Hz is processed ... More
Presented by Graeme Andrew STEWART on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
This work is motivated by the ongoing efforts to integrate the CMS Computing Model with LHC@home, a volunteer computing project under development at CERN, thus allowing the CMS analysis jobs and Monte Carlo production activities to be executed on this paradigm, which has a growing user base. The LHC@home project allows the use of CernVM (a virtual machine technology develope ... More
Presented by Marko PETEK on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
We present the prototype deployment of a private cloud at PIC and the tests performed in the context of providing a computing service for ATLAS. The prototype is based on the OpenNebula open source cloud computing solution. The possibility of using CernVM virtual machines as the standard for ATLAS cloud computing is evaluated by deploying a Panda pilot agent as part of the VM contextualization. D ... More
Presented by Alexey SEDOV on 22 May 2012 at 13:30
Type: Parallel Session: Online Computing
Track: Online Computing (track 1)
We present the prototyping of a 10Gigabit-Ethernet based UDP data acquisition (DAQ) system that has been conceived in the context of the Array and Control group of CTA (Cherenkov Telescope Array). The CTA consortium plans to build the next generation ground-based gamma-ray instrument, with approximately 100 telescopes of at least three different sizes installed on two sites. The genuine camera dat ... More
Presented by Dr. Dirk HOFFMANN on 24 May 2012 at 17:50
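The basic shape of a UDP-based DAQ receiver can be sketched with the standard socket API (a minimal standalone loop; the bind address, buffer size and payloads are invented, and a real 10 Gigabit-Ethernet reader would add batched receives and tuned kernel buffers):

```python
import socket

def make_receiver(host="127.0.0.1", port=0):
    """Bind a UDP socket; port 0 lets the OS pick a free port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    return sock

def drain(sock, n_datagrams, bufsize=65536):
    """Read a fixed number of datagrams into a list of payloads."""
    return [sock.recv(bufsize) for _ in range(n_datagrams)]
```

UDP carries no delivery or ordering guarantee, which is why such DAQ systems number their event fragments and reassemble them downstream.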
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The WLCG Transfer Dashboard is a monitoring system which aims to provide a global view of the WLCG data transfers and to reduce redundancy of monitoring tasks performed by the LHC experiments. The system is designed to work transparently across LHC experiments and across various technologies used for data transfer. Currently every LHC experiment monitors data transfers via experiment-specific syst ... More
Presented by Julia ANDREEVA on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Collaborative tools (track 6)
Searches for new physics by experimental collaborations represent a significant investment in time and resources. Often these searches are sensitive to a broader class of models than they were originally designed to test. It is possible to extend the impact of existing searches through a technique we call 'recasting'. We present RECAST, a framework designed to facilitate the usage of this techniq ... More
Presented by Dr. Itay YAVIN on 24 May 2012 at 13:30
Session: Plenary
Presented by Fons RADEMAKERS on 23 May 2012 at 08:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
A JavaScript version of the ROOT I/O subsystem is being developed, in order to be able to browse (inspect) ROOT files in a platform independent way. This allows the content of ROOT files to be displayed in most web browsers, without having to install ROOT or any other software on the server or on the client. This gives a direct access to ROOT files from new (e.g. portable) devices in a light way. ... More
Presented by Bertrand BELLENOT on 24 May 2012 at 14:20
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
ROOT.NET provides an interface between Microsoft’s Common Language Runtime (CLR) and .NET technology and the ubiquitous particle physics analysis tool, ROOT. ROOT.NET automatically generates a series of efficient wrappers around the ROOT API. Unlike pyROOT, these wrappers are statically typed and so are highly efficient as compared to the Python wrappers. The connection to .NET means that one ga ... More
Presented by Gordon WATTS on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
We will present new approaches to implementing quality control procedures in the development of the ROOT data processing framework. A multi-platform, cloud-based infrastructure is used for supporting the incremental build and test procedures employed in the ROOT software development process. Tests run continuously and a custom generic tool has been adopted for CPU and heap regression monitoring. ... More
Presented by Axel NAUMANN on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
Ksplice/Oracle Uptrack is a software tool and update subscription service which allows system administrators to apply security and bug fix patches to the Linux kernel running on servers/workstations without rebooting them. The RHIC/ATLAS Computing Facility at Brookhaven National Laboratory (BNL) has deployed Uptrack on nearly 2000 hosts running Scientific Linux and Red Hat Enterprise Linux. The ... More
Presented by Christopher HOLLOWELL on 22 May 2012 at 13:30
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
In the past year several improvements in Geant4 hadronic physics code have been made, both for HEP and nuclear physics applications. We discuss the implications of these changes for physics simulation performance and user code. In this context several of the most-used codes will be covered briefly. These include the Fritiof (FTF) parton string model which has been extended to include an ... More
Presented by Julia YARBA on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The final stages of a number of generators of inelastic hadron/ion interactions with nuclei in Geant4 are described by native pre-equilibrium and de-excitation models. The pre-compound model is responsible for pre-equilibrium emission of protons, neutrons and light ions. The de-excitation model provides sampling of evaporation of neutrons, protons and light fragments up to magnesium. Fermi break-u ... More
Presented by Jose Manuel QUESADA MOLINA on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The Production and Distributed Analysis system (PanDA) in the ATLAS experiment uses pilots to execute submitted jobs on the worker nodes. The pilots are designed to deal with different runtime conditions and failure scenarios, and support many storage systems. This talk will give a brief overview of the PanDA pilot system and will present major features and recent improvements including CERNVM ... More
Presented by Paul NILSSON on 22 May 2012 at 13:30
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
Quantitative results on Geant4 physics validation and computational performance are reported: they cover a wide spectrum of electromagnetic and hadronic processes, and are the product of a systematic, multi-disciplinary effort of collaborating physicists, nuclear engineers and statisticians. They involve comparisons with established experimental references in the literature and ad hoc measurements ... More
Presented by Dr. Maria Grazia PIA on 22 May 2012 at 13:55
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The CERN Computer Centre is reviewing strategies for optimizing the use of the existing infrastructure in the future. There have been significant developments in the area of computer centre and configuration management tools over the last few years. CERN is examining how these modern, widely-used tools can improve the way in which we manage the centre, with a view to reducing the overall operation ... More
Presented by Gavin MCCANCE on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The Detector Control System of the TOTEM experiment at the LHC is built with the industrial product WinCC OA (PVSS). The TOTEM system is generated automatically through scripts using as input the detector PBS structure and pinout connectivity, archiving and alarm meta-information, and some other heuristics based on the naming conventions. When those initial parameters and code are modified to incl ... More
Presented by Fernando LUCAS RODRIGUEZ on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
Estimating the compatibility of large numbers of histogram pairs is a recurrent problem in high energy physics. The issue is common to several different areas, from software quality monitoring to data certification, preservation and analysis. Given two sets of histograms, it is very important to be able to scrutinize the outcome of several goodness-of-fit tests and obtain a clear answer about ... More
Presented by Danilo PIPARO on 24 May 2012 at 13:30
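The core of such a compatibility check can be sketched as a chi-square statistic over two equally binned histograms (a standalone illustration of one standard goodness-of-fit test, not the tool's actual implementation):

```python
def chi2_compatibility(h1, h2):
    """Chi-square statistic between two histograms of raw event counts
    with identical binning, using the Poisson variance a + b of the
    per-bin difference; returns (chi2, ndf)."""
    chi2, ndf = 0.0, 0
    for a, b in zip(h1, h2):
        var = a + b
        if var > 0:
            chi2 += (a - b) ** 2 / var
            ndf += 1
    return chi2, ndf
```

A chi2 far above ndf flags an incompatible pair; running this over thousands of pairs and ranking by p-value is the kind of automated scrutiny the abstract describes.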
Session: BoF
RIDER is an NSF-funded study (Award #1223688) of the current and future 2020 international data requirements of the science and engineering community, specifically flow of data into the US. Results will assist NSF in predicting future capacity requirements and planning funding for the International Research Network Connections (IRNC) programs. This BoF is an opportunity to provide your inpu ... More
Presented by Jill GEMMILL on 24 May 2012 at 15:45
Session: BoF
Presented by Jill GEMMILL on 24 May 2012 at 17:00
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
Since starting in 2010, the Large Hadron Collider (LHC) has produced collisions at an ever increasing rate. The ATLAS experiment successfully records the collision data with high efficiency and excellent data quality. Events are selected using a three-level trigger system, where each level makes a more refined selection. The level-1 trigger (L1) consists of a custom-designed hardware trigger which ... More
Presented by Douglas Michael SCHAEFER on 24 May 2012 at 13:30
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
Detector simulation is one of the most CPU-intensive tasks in modern High Energy Physics. While its importance for the design of the detector and the estimation of the efficiency is ever increasing, the number of events that can be simulated is often constrained by the available computing resources. Various kinds of "fast simulations" have been developed to alleviate this problem, however, while su ... More
Presented by Mr. Federico CARMINATI on 24 May 2012 at 14:20
Type: Parallel Session: Computer Facilities, Production Grids and Networking
Track: Computer Facilities, Production Grids and Networking (track 4)
The CERN Computer Centre is reviewing strategies for optimizing the use of the existing infrastructure in the future, in the likely scenario that any extension will be remote from CERN, and in light of the way other large facilities are operated today. Over the past six months, CERN has been investigating modern, widely-used tools and procedures for virtualisation, clouds an ... More
Presented by Tim BELL on 24 May 2012 at 15:10
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
RooFit is a library of C++ classes that facilitate data modeling in the ROOT environment. Mathematical concepts such as variables, (probability density) functions and integrals are represented as C++ objects. The package provides a flexible framework for building complex fit models through classes that mimic math operators. For all constructed models RooFit provides a concise yet powerful int ... More
Presented by Wouter VERKERKE on 24 May 2012 at 13:30
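RooFit's idea of building composite models through classes that mimic math operators can be sketched in a few lines (a standalone Python sketch of the concept, not the RooFit C++ API; here `+` is invented to build an equal-fraction mixture of two normalised densities):

```python
import math

class Gaussian:
    """A normalised Gaussian density represented as an object."""
    def __init__(self, mu, sigma):
        self.mu, self.sigma = mu, sigma
    def __call__(self, x):
        z = (x - self.mu) / self.sigma
        return math.exp(-0.5 * z * z) / (self.sigma * math.sqrt(2.0 * math.pi))
    def __add__(self, other):
        return Mixture(self, other, 0.5)   # operator builds a composite model

class Mixture:
    """Normalised sum of two densities with a fixed fraction."""
    def __init__(self, a, b, frac):
        self.a, self.b, self.frac = a, b, frac
    def __call__(self, x):
        return self.frac * self.a(x) + (1.0 - self.frac) * self.b(x)

model = Gaussian(0.0, 1.0) + Gaussian(5.0, 2.0)
```

Because each component is normalised and the fractions sum to one, the composite is itself a valid density, which is what lets such expression trees be fitted and sampled directly.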
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
RooStats is a project providing advanced statistical tools required for the analysis of LHC data, with emphasis on discoveries, confidence intervals, and combined measurements in both the Bayesian and Frequentist approaches. The tools are built on top of the RooFit data modeling language and core ROOT mathematics libraries and persistence technology. These tools have been developed in col ... More
Presented by Sven KREISS on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
Nowadays storage systems are evolving not only in size but also in the technologies used. SSD disks are currently being introduced in storage facilities for HEP experiments, and their performance is tested in comparison with standard magnetic disks. The tests are performed by running a real CMS data analysis for a typical use case and exploiting the features provided by PROOF-Lite, which allows ... More
Presented by Dr. Giacinto DONVITO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
Modern experiments in hadron and particle physics are searching for ever rarer decays, which have to be extracted from a huge background of particles. Achieving this goal requires very high experimental precision, which must also be matched by the simulation software. Therefore a very detailed description of the experiment's hardware is needed, including tiny detai ... More
Presented by Tobias STOCKMANNS on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Born in the context of EMI (European Middleware Initiative), the SYNCAT project has as its main purpose the incremental reduction of the divergence between the contents of remote file catalogues, such as those represented by the LFC, the Grid Storage Elements and the experiments' private databases. Aiming to give these remote systems ways to interact transparently in order to keep their file me ... More
Presented by Fabrizio FURANO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
By 2009 the Fermilab Mass Storage System had encountered several challenges: 1. the required amount of data stored and accessed in both tiers of the system (dCache and Enstore) had significantly increased; 2. the number of clients accessing the Mass Storage System had also increased from tens to hundreds of nodes, and from hundreds to thousands of parallel requests. To address these challeng ... More
Presented by Alexander MOIBENKO on 22 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
This contribution describes a prototype grid proxy cache system developed at Nikhef, motivated by a desire to construct the first building block of a future https-based Content Delivery Network for multiple-VO grid infrastructures. Two goals drove the project: firstly to provide a "native view" of the grid for desktop-type users, and secondly to improve performance for physics-analysis type use ... More
Presented by Jeff TEMPLON on 24 May 2012 at 13:55
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
Serving more than 3 billion accesses per day, the CERN AFS cell is one of the most active installations in the world. Limited by overall cost, the ever increasing demand for more space and higher I/O rates drive an architectural change from small high-end disks organised in fibre-channel fabrics towards external SAS based storage units with large commodity drives. The presentation will ... More
Presented by Arne WIEBALCK on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
Deployment, maintenance and recovery of a scientific cluster, which has complex, specialized services, can be a time consuming task requiring the assistance of Linux system administrators, network engineers as well as domain experts. Universities and small institutions that have a part-time FTE with limited knowledge of the administration of such clusters can be strained by such maintenance tasks. ... More
Presented by Valerie HENDRIX on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
This paper reports the design and implementation of a secure, wide area network, distributed filesystem by the ExTENCI project, based on the Lustre filesystem. The system is used for remote access to analysis data from the CMS experiment at the Large Hadron Collider, and from the Lattice Quantum ChromoDynamics (LQCD) project. Security is provided by Kerberos authentication and authorization with a ... More
Presented by Dr. Dimitri BOURILKOV on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The Worldwide LHC Computing Grid (WLCG) infrastructure continuously operates thousands of grid services scattered around hundreds of sites. Participating sites are organized in regions and support several virtual organizations, thus creating a very complex and heterogeneous environment. The Service Availability Monitoring (SAM) framework is responsible for the monitoring of this infrastructure. ... More
Presented by Mr. Pedro Manuel RODRIGUES DE SOUSA ANDRADE on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
With the advent of the 12 GeV upgrade at CEBAF, it becomes necessary to create new detectors to accommodate the more powerful beam-line. It follows that new software is needed for tracking, simulation and event display. In the case of CLAS12, the new detector to be installed in Hall B, development has proceeded on new analysis frameworks and runtime environments, such as the Clara (CLAS12 Reconst ... More
Presented by Sebouh PAUL on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Collaborative tools (track 6)
The Information Technology (IT) and the General Services (GS) departments at CERN have decided to combine their extensive experience in support for IT and non-IT services towards a common goal – to bring the services closer to the end user based on ITIL best practice. The collaborative efforts have so far produced definitions for the incident and the request fulfillment processes which are based ... More
Presented by Zhechka TOTEVA on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The LHC experiments' computing infrastructure is hosted in a distributed way across different computing centers in the Worldwide LHC Computing Grid and needs to run with high reliability. It is therefore crucial to offer a unified view to shifters, who generally are not experts in the services, and give them the ability to follow the status of resources and the health of critical systems in order ... More
Presented by Fernando Harald BARREIRO MEGINO, Alessandro DI GIROLAMO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Collaborative tools (track 6)
At BNL, we are planning to establish a federation with different organizations using an SSO technology, Shibboleth. It provides the underlying mechanism for leveraging institutional authentication and exchanging user attributes for authorization. This framework will allow us to collaborate not only with organizations inside BNL but also with institutions and organizations outside BNL to be able to a ... More
Presented by Mizuki KARASAWA on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The ATLAS Distributed Data Management system stores more than 75PB of physics data across 100 sites globally. Over 8 million files are transferred daily with strongly varying usage patterns. For performance and scalability reasons it is imperative to adapt and improve the data management system continuously. Therefore future system modifications in hardware, software as well as policy, need to ... More
Presented by Martin BARISITS on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The CTA (Cherenkov Telescope Array) project is an initiative to build the next generation ground-based very high energy (VHE) gamma-ray instrument. Compared to current imaging atmospheric Cherenkov telescope experiments CTA will extend the energy range and improve the angular resolution while increasing the sensitivity by a factor of 10. With these capabilities it is expected that CTA will increas ... More
Presented by Peter WEGNER on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The Mu2e experiment at Fermilab is proceeding through its R&D and approval processes. Two critical elements of R&D towards a design that will achieve the physics goals are an end-to-end simulation package and reconstruction code that has reached the stage of an advanced prototype. These codes live within the environment of the experiment's infrastructure software. Mu2e uses art as the infras ... More
Presented by Robert KUTSCHKE on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The ATLAS collaboration operates an extensive set of protocols to validate the quality of the offline software in a timely manner. This is essential in order to process the large amounts of data being collected by the ATLAS detector in 2011 without complications on the offline software side. We will discuss a number of different strategies used to validate the ATLAS offline software; running the A ... More
Presented by Mark HODGKINSON, Rolf SEUSTER on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The ATLAS Cathode Strip Chamber system consists of two end-caps with 16 chambers each. The CSC Readout Drivers (RODs) are purpose-built boards encapsulating 13 DSPs and around 40 FPGAs. The principal responsibility of each ROD is for the extraction of data from two chambers at a maximum trigger rate of 75 kHz. In addition, each ROD is in charge of the setup, control and monitoring of the on-detect ... More
Presented by Raul MURILLO GARCIA on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The ATLAS Collaboration is managing one of the largest collections of software among the High Energy Physics Experiments. Traditionally this software has been distributed via rpm or pacman packages, and has been installed in every site and user's machine, using more space than needed since the releases could not always share common binaries. As soon as the software has grown in size and number of ... More
Presented by Alessandro DE SALVO on 22 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
The CernVM File System (CernVM-FS) is a read-only file system used to access HEP experiment software and conditions data. Files and directories are hosted on standard web servers and mounted in a universal namespace. File data and meta-data are downloaded on demand and locally cached. CernVM-FS has been originally developed to decouple the experiment software from virtual machine hard disk imag ... More
Presented by Jakob BLOMER on 22 May 2012 at 14:20
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
This is an update on CASTOR (CERN Advanced Storage) describing the recent evolution and related experience in production during the latest high-intensity LHC runs. In order to handle the increasing data rates (10GB/s average for 2011), several major improvements have been introduced. We describe in particular the new scheduling system that has replaced the original CASTOR one. It removed the lim ... More
Presented by Sebastien PONCE on 22 May 2012 at 13:30
Type: Parallel Session: Computer Facilities, Production Grids and Networking
Track: Computer Facilities, Production Grids and Networking (track 4)
The LHC is entering its fourth year of production operation. Many Tier1 facilities can count up to a decade of existence when development and ramp-up efforts are included. LHC computing has always been heavily dependent on high capacity, high performance network facilities for both the LAN and WAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to be on ... More
Presented by Mr. Andrey BOBYSHEV on 21 May 2012 at 14:20
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The DIRAC Project was initiated to provide a data processing system for the LHCb Experiment at CERN. It provides all the necessary functionality and performance to satisfy the current and projected future requirements of the LHCb Computing Model. A considerable restructuring of the DIRAC software was undertaken in order to turn it into a general purpose framework for building distributed computing ... More
Presented by Dr. Andrei TSAREGORODTSEV on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
Tier-2 computing sites in the Worldwide LHC Computing Grid (WLCG) host CPU resources (Compute Element, CE) and storage resources (Storage Element, SE). The vast amount of data that needs to be processed from the Large Hadron Collider (LHC) experiments requires good and efficient use of the available resources. Achieving good CPU efficiency for end-user analysis jobs requires tha ... More
Presented by Dr. Tomas LINDEN on 22 May 2012 at 13:30
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
Traditionally, HEP experiments exploit the multiple cores in a CPU by having each core process one event. However, future PC designs are expected to use CPUs which double the number of processing cores at the same rate as the cost of memory falls by a factor of two. This effectively means the amount of memory per processing core will remain constant. This is a major challenge for LHC processing fr ... More
Presented by Dr. Christopher JONES on 21 May 2012 at 16:35
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The SuperB asymmetric energy e+e- collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab-1 and a luminosity target of 10^36 cm-2 s-1. The increasing network performance also in t ... More
Presented by Dr. Armando FELLA on 24 May 2012 at 14:20
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The SuperB asymmetric energy e+e- collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab-1 and a luminosity target of 10^36 cm-2 s-1. Since 2009 the SuperB Computing group is wor ... More
Presented by Luca TOMASSETTI on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The Open Science Grid (OSG) supports a diverse community of new and existing users to adopt and make effective use of the Distributed High Throughput Computing (DHTC) model. The LHC user community has deep local support within the experiments. For other smaller communities and individual users the OSG provides a suite of consulting and technical services through the User Support organization. We ... More
Presented by Dr. Gabriele GARZOGLIO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
Failure is endemic in the Grid world: as with any large, distributed computer system, at some point things will go wrong. Whether it is down to a problem with hardware, network or software, the sheer size of a production Grid requires operation under the assumption that some of the jobs will fail. Some of those are unavoidable (e.g. network loss during data staging), some are preventable but onl ... More
Presented by Stuart PURDIE on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
TAGs are event-level metadata allowing a quick search for interesting events for further analysis, based on selection criteria defined by the user. They are stored in a file-based format as well as in relational databases. The overall TAG system architecture encompasses a range of interconnected services that provide functionality for the required use cases such as event level selection, display, ... More
Presented by Dr. Jack CRANSHAW on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Fermilab Intensity Frontier experiments like Minerva, NOvA, g-2 and Mu2e currently operate without an organized data handling system, relying instead on completely manual management of files on large central disk arrays at Fermilab. This model severely limits the computing resources that the experiments can leverage to those tied to the Fermilab site, prevents the use of coherent staging and cachi ... More
Presented by Dr. Adam LYON on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The caching, HTTP-mediated filesystem CVMFS, while first developed for use with the CERN Virtual Machines project, has quickly become a significant part of several VOs' software distribution policies, with ATLAS being particularly interested. The benefits of CVMFS do not extend only to large VOs, however; small virtual organisations can find software distribution to be problematic, as they don't ... More
Presented by Sam SKIPSEY on 22 May 2012 at 13:30
Type: Parallel Session: Collaborative tools
Track: Collaborative tools (track 6)
Og, commonly recognized as one of the earliest contributors to experimental particle physics, began his career by smashing two rocks together, then turning to his friend Zog and stating those famous words “oogh oogh”. It was not the rock-smashing that marked HEP’s origins, but rather the sharing of information, which then allowed Zog to confirm the important discovery, that rocks are indeed ... More
Presented by Steven GOLDFARB on 21 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
With currently around 55PB of data stored on over 49000 cartridges, and around 2PB of fresh data coming every month, CERN’s large tape infrastructure is continuing its growth. In this contribution, we will detail out the progress achieved and the ongoing steps towards our strategy of turning tape storage from a HSM environment into a sustainable long-term archiving solution. In particular, we re ... More
Presented by German CANCIO MELIA on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The CERN Advanced STORage manager (CASTOR) is used to archive to tape the physics data of past and present physics experiments. Data is migrated (repacked) from older, lower density tapes to newer, high-density tapes approximately every two years to follow the evolution of tape technologies and to keep the volume occupied by the tape cartridges relatively stable. Improving the performance of wri ... More
Presented by Steven MURRAY on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The SuperB asymmetric energy e+e- collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab-1 and a luminosity target of 10^36 cm-2 s-1. This luminosity translates into the requirement ... More
Presented by Dr. Silvio PARDI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The monitoring and alert system is fundamental for the management and the operation of the network in a large data center such as an LHC Tier-1. The network of the INFN Tier-1 at CNAF is a multi-vendor environment: for its management and monitoring several tools have been adopted and different sensors have been developed. In this paper, after an overview on the different aspects to be moni ... More
Presented by Donato DE GIROLAMO, Stefano ZANI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The ALICE detector yields a huge sample of data, via millions of channels from different sub-detectors. On-line data processing must be applied to select and reduce the data volume in order to increase the significant information in the stored data. ALICE applies a multi-level hardware trigger scheme where fast detectors are used to feed a three-level deep chain, L0-L2. The High-Level Trigger ( ... More
Presented by Federico RONCHETTI on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The 18 ALICE sub-detectors are regularly calibrated in order to achieve the most accurate physics measurements. Some of these procedures are done online in the DAQ (Data Acquisition System) so that calibration results ... More
Presented by Sylvain CHAPELAND on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The large amount of data produced by the ATLAS experiment needs new computing paradigms for data processing and analysis, involving many Computing Centres spread around the world. The computing workload is managed by regional federations, called Clouds. The Italian Cloud consists of a main (Tier-1) centre, located in Bologna, four secondary (Tier-2) centres, and a few smaller (Tier-3) sites. ... More
Presented by Lorenzo RINALDI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The DDM Tracer Service traces and monitors ATLAS file operations on the Worldwide LHC Computing Grid. The volume of traces has increased significantly since the service started in 2009: there are now about 5 million trace messages every day, with peaks greater than 250 Hz and peak rates continuing to climb, which poses a major challenge to the current service structure. Analysis of la ... More
Presented by Vincent GARONNE on 22 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The ATLAS collaboration has recorded almost 5PB of RAW data since the LHC started running at the end of 2009. Together with experimental data generated from RAW and complementary simulation data, and accounting for data replicas on the grid, a total of 74PB is currently stored in the Worldwide LHC Computing Grid by ATLAS. All of this data is managed by the ATLAS Distributed Data Management system, ... More
Presented by Vincent GARONNE on 21 May 2012 at 13:55
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
ATLAS decided to move from a globally distributed file catalogue to a central instance at CERN. This talk describes the ATLAS LFC merge exercise from the analysis phase over the prototyping and stress testing to the final execution phase. We demonstrate that with careful preparation even major architectural changes can be implemented while minimizing the impact on the experiment's production ... More
Presented by Fabrizio FURANO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The ATLAS Level-1 Trigger is the first stage of event selection for the ATLAS experiment at the LHC. In order to identify the interesting collision events to be passed on to the next selection stage within a latency of less than 2.5 us, it is based on custom-built electronics. Signals from the Calorimeter and Muon Trigger System are combined in the Central Trigger Processor which processes the ov ... More
Presented by Will BUTTINGER on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The ATLAS experiment at CERN's Large Hadron Collider (LHC) has taken data with colliding beams at instantaneous luminosities of 2*10^33 cm^-2 s^-1. The LHC aims to deliver an integrated luminosity of 5 fb^-1 in the 2011 run period at luminosities of up to 5*10^33 cm^-2 s^-1, which requires dedicated strategies to safeguard the highest physics output while effectively reducing the event rate. The muon ... More
Presented by Alexander OH on 24 May 2012 at 13:30
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
We detail recent changes to ROOT-based I/O within the ATLAS experiment. The ATLAS persistent event data model continues to make considerable use of a ROOT I/O backend through POOL persistency. Also ROOT is used directly in later stages of analysis that make use of a flat-ntuple based "D3PD" data-type. For POOL/ROOT persistent data, several improvements have been made including implementation of au ... More
Presented by Wahid BHIMJI on 21 May 2012 at 17:25
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The ATLAS experiment at LHC relies on databases for detector online data-taking, storage and retrieval of configurations, calibrations and alignments, post data-taking analysis, file management over the grid, job submission and management, data replications to other computing centers, etc. The Oracle Relational Database Management System has been addressing the ATLAS database requirements to ... More
Presented by Gancho DIMITROV on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The ATLAS experiment has collected vast amounts of data with the arrival of the inverse-femtobarn era at the LHC. ATLAS has developed an intricate analysis model with several types of derived datasets, including their grid storage strategies, in order to make data from O(10^9) recorded events readily available to physicists for analysis. Several use cases have been considered in the ATLAS analysis ... More
Presented by Amir FARBIN on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
BESIII/BEPCII is a major upgrade of the BESII experiment at the Beijing Electron-Positron Collider (BEPC) for studies of hadron spectroscopy and tau-charm physics. The BESIII detector adopts a small-cell helium-based drift chamber (MDC) as the central tracking detector. The momentum resolution deteriorated due to misalignment during data taking. In order to improve the momentum resolution, a so ... More
Presented by Linghui WU on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Providing computer infrastructure to end-users in an efficient and user-friendly way was always a big challenge in the IT market. “Cloud computing” is an approach that addresses these issues and recently it has been gaining more and more popularity. A well designed Cloud Computing system gives elasticity in resources allocation and allows for efficient usage of computing infrastructure. The un ... More
Presented by Mr. Milosz ZDYBAL on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ), and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of order a few hundred Hz. Th ... More
Presented by Andrei Cristian SPATARU on 24 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
CMS has started the process of rolling out a new workload management system. This system is currently used for reprocessing and Monte Carlo production, with tests under way using it for user analysis. It was decided to combine, as much as possible, the production/processing, analysis and T0 codebases so as to reduce duplicated functionality and make best use of limited developer and testing ... More
Presented by Dr. Stuart WAKEFIELD on 21 May 2012 at 14:20
Type: Parallel Session: Online Computing
Track: Online Computing (track 1)
The Compact Muon Solenoid (CMS) is a CERN multi-purpose experiment that exploits the physics of the Large Hadron Collider (LHC). The Detector Control System (DCS) ensures a safe, correct and efficient experiment operation, contributing to the recording of high quality physics data. The DCS is programmed to automatically react to the LHC changes. CMS sub-detectors' bias voltages are set depending ... More
Presented by Robert GOMEZ-REINO GARRIDO on 21 May 2012 at 17:25
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
DESY is one of the world's leading centers for research with particle accelerators, synchrotron light and astroparticles. DESY participates in the LHC as a Tier-2 center, supports ongoing analyses of HERA data, is a leading partner for the ILC, and runs the National Analysis Facility (NAF) for LHC and ILC in the framework of the Helmholtz Alliance "Physics at the Terascale". For the research with synch ... More
Presented by Andreas HAUPT on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Since mid-2010, the Scientific Computing department at DESY has been operating a storage and data access evaluation laboratory, the DESY Grid Lab, equipped with 256 CPU cores and about 80 TB of data distributed among 5 servers, interconnected via up to 10 GigE links. The system has been dimensioned to be equivalent to the size of a medium WLCG Tier-2 center, to provide commonly exploitable res ... More
Presented by Yves KEMP, Dmitry OZEROV on 22 May 2012 at 13:30
Type: Parallel Session: Computer Facilities, Production Grids and Networking
Track: Computer Facilities, Production Grids and Networking (track 4)
Scientific innovation continues to increase requirements for the computing and networking infrastructures of the world. Collaborative partners, instrumentation, storage, and processing facilities are often geographically and topologically separated, as is the case with LHC virtual organizations. These separations challenge the technology used to interconnect available resources, often d ... More
Presented by Jason ZURAWSKI on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
Keeping track of the layout of the computing resources in a big datacentre is a complex task. DOCET is a database-backed web tool designed and implemented at INFN. It aims at providing a uniform interface to manage and retrieve the needed information about one or more datacentres, such as available hardware, software and their status. Having a suitable application is however useless until most ... More
Presented by Dr. Stefano DAL PRA on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
At CERN, and probably elsewhere, centralised Oracle-database services deliver high levels of service performance and reliability but are sometimes perceived as overly rigid and inflexible for initial application development. As a consequence a number of key database applications are running on user-managed MySQL database services. This is all very well when things are going well, but the user-mana ... More
Presented by Ruben Domingo GASPAR APARICIO on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC), constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are supervised by the Detector Control System (DCS). The DCS enables equipment supervision of all ATLAS sub-detectors by usin ... More
Presented by Kerstin LANTZSCH on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The Double Chooz reactor anti-neutrino experiment has developed an automated system for streaming data from the detector site to the different data analysis nodes in Europe, Japan and the USA. The system both propagates the data and triggers its processing as it goes through low-level data analysis. All operations (propagation and processing) are tracked file-wise in real time using a DB (MySQL base ... More
Presented by Mr. Kazuhiro TERAO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The Double Chooz reactor antineutrino experiment employs a network-distributed DAQ divided among a number of computing nodes on a Local Area Network. The Double Chooz Online Monitor Framework has been developed to provide short-timescale, real-time monitoring of multiple distributed DAQ subsystems and serve diagnostic information to multiple clients. Monitor information can be accessed via ... More
Presented by Mr. Arthur FRANKE on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The Double Chooz experiment searches for reactor neutrino oscillations at the Chooz nuclear power plant. A client/server model is used to coordinate actions among several online systems over TCP/IP sockets. A central run control server synchronizes data-taking between two independent data acquisition (DAQ) systems via a common communication protocol and state machine definition. Calibration subsy ... More
Presented by Matthew TOUPS on 24 May 2012 at 13:30
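The synchronization pattern named in this abstract — a central server driving independent DAQ systems through a shared state-machine definition — can be sketched minimally as below. The transition table, class names and commands are assumptions for illustration; the real Double Chooz protocol runs over TCP/IP sockets, which this sketch omits.

```python
# Shared state-machine definition: (current state, command) -> next state.
# These states/commands are illustrative, not the experiment's actual ones.
TRANSITIONS = {
    ("idle", "configure"): "configured",
    ("configured", "start"): "running",
    ("running", "stop"): "configured",
}

class DAQClient:
    """One independent DAQ system following the common state machine."""
    def __init__(self, name):
        self.name = name
        self.state = "idle"

    def handle(self, command):
        self.state = TRANSITIONS[(self.state, command)]

class RunControl:
    """Central server: broadcasts each command to every client."""
    def __init__(self, clients):
        self.clients = clients

    def broadcast(self, command):
        # every DAQ system takes the same transition, keeping them in lock-step
        for c in self.clients:
            c.handle(command)

rc = RunControl([DAQClient("neutrino_daq"), DAQClient("outer_veto_daq")])
rc.broadcast("configure")
rc.broadcast("start")
print([c.state for c in rc.clients])  # ['running', 'running']
```

Because both clients interpret commands against the same transition table, an out-of-sequence command raises a `KeyError` rather than silently desynchronizing the systems.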
Type: Parallel Session: Online Computing
Track: Online Computing (track 1)
The Double Chooz (DC) reactor anti-neutrino experiment consists of a neutrino detector and a large area Outer Veto detector. A custom data-acquisition (DAQ) system written in the Ada language for all the sub-detectors in the neutrino detector system, and a generic object-oriented data acquisition system for the Outer Veto detector, were developed. Generic object-oriented programming was also used to sup ... More
Presented by Matt TOUPS on 22 May 2012 at 17:25
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
A large experiment like ATLAS at LHC (CERN), with over three thousand members and a shift crew of 15 people running the experiment 24/7, needs an easy and reliable tool to gather all the information concerning the experiment development, installation, deployment and exploitation over its lifetime. With the increasing number of users and the accumulation of stored information since the experiment s ... More
Presented by Luca MAGNONI on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The Open Science Grid Operations (OSG) Team operates a distributed set of services and tools that enable the utilization of the OSG by several HEP projects. Without these services users of the OSG would not be able to run jobs, locate resources, obtain information about the status of systems or generally use the OSG. For this reason these services must be highly available. This paper desc ... More
Presented by Dr. Scott TEIGE on 22 May 2012 at 13:30
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
The FairRoot framework is an object oriented simulation, reconstruction and data analysis framework based on ROOT. It includes core services for detector simulation and offline analysis. The project started as a software framework for the CBM experiment at GSI, and later became the standard software for simulation, reconstruction and analysis for CBM, PANDA, R3B and ASYEOS at GSI/FAIR, as well as ... More
Presented by Dr. Florian UHLIG on 22 May 2012 at 17:25
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The Data Handling Pipeline ("Pipeline") has been developed for the Fermi Gamma-Ray Space Telescope (Fermi) Large Area Telescope (LAT) which launched in June 2008. Since then it has been in use to completely automate the production of data quality monitoring quantities, reconstruction and routine analysis of all data received from the satellite and to deliver science products to the collaboration a ... More
Presented by Mr. Stephan ZIMMER on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity and the accelerator backgrounds and luminosity increase, we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for ATLAS, whose computing power is such that a couple of hundreds ... More
Presented by Andrea NEGRI on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The Virtual Monte Carlo (VMC) provides an abstract interface to the Monte Carlo transport codes GEANT3, Geant4 and FLUKA. A user's VMC-based application, independent of the specific Monte Carlo codes, can then be run with all three simulation programs. The VMC was developed by the ALICE Offline Project and has since drawn attention from further experimental frameworks. Since its first ... More
Presented by Dr. Ivana HRIVNACOVA on 24 May 2012 at 13:30
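The abstraction the VMC provides — one interface, interchangeable transport engines — follows the classic abstract-interface pattern, sketched here in Python. Class and method names are illustrative only; the actual VMC is a C++ interface and its API differs.

```python
from abc import ABC, abstractmethod

class VirtualMC(ABC):
    """Hypothetical stand-in for the abstract transport-code interface."""
    @abstractmethod
    def process_event(self, event_id):
        ...

class Geant4Like(VirtualMC):
    # one concrete engine behind the common interface
    def process_event(self, event_id):
        return f"geant4 transported event {event_id}"

class Geant3Like(VirtualMC):
    # a second engine, interchangeable with the first
    def process_event(self, event_id):
        return f"geant3 transported event {event_id}"

def run_simulation(engine: VirtualMC, n_events):
    # the user application never references a concrete engine,
    # so the same code runs unchanged with any of them
    return [engine.process_event(i) for i in range(n_events)]

print(run_simulation(Geant4Like(), 2))
```

Swapping `Geant4Like()` for `Geant3Like()` changes the transport engine without touching `run_simulation`, which is the independence property the abstract claims for VMC applications.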
Type: Parallel Session: Computer Facilities, Production Grids and Networking
Track: Computer Facilities, Production Grids and Networking (track 4)
The storage solution currently used in production at the INFN Tier-1 at CNAF is the result of several years of case studies, software development and tests. This solution, called the Grid Enabled Mass Storage System (GEMSS), is based on a custom integration of a fast and reliable parallel filesystem (IBM GPFS) with a fully integrated tape backend based on the TIVOLI TSM Hierarchical storage ... More
Presented by Mr. Pier Paolo RICCI on 24 May 2012 at 14:20
Type: Poster Session: Poster Session
Track: Collaborative tools (track 6)
The H1 data preservation project was started in 2009 as part of the global data preservation in high-energy physics (DPHEP) initiative. In order to retain the full potential for future improvements, the H1 collaboration aims for level 4 of the DPHEP recommendations, requiring the full simulation and reconstruction chain to be available for analysis. A major goal of the H1 project is theref ... More
Presented by Michael STEDER on 24 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The HEPiX Virtualisation Working Group has sponsored the development of policies and technologies that permit Grid sites to safely instantiate remotely generated virtual machine images confident in the knowledge that they will be able to meet their obligations, most notably in terms of guaranteeing the accountability and traceability of any Grid Job activity at their site. We will present the c ... More
Presented by Tony CASS on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Collaborative tools (track 6)
We discuss the steps and efforts required to secure the continued analysis and data access for the HERMES experiment after the end of the active collaboration period. The model for such an activity has been developed within the framework of the DPHEP initiative in a close collaboration of HERA experiments and the DESY IT. For HERMES the preservation scheme foresees a possibility of full data pro ... More
Presented by Eduard AVETISYAN on 24 May 2012 at 13:30
Type: Parallel Session: Computer Facilities, Production Grids and Networking
Track: Computer Facilities, Production Grids and Networking (track 4)
Besides the big LHC experiments, a number of mid-size experiments are coming online which need to define new computing models to meet their processing and storage requirements. We present the hybrid computing model of IceCube, which combines Grid models with a more flexible direct user model, as an example of a possible solution. In IceCube a central datacenter at UW-Mad ... More
Presented by Steve BARNET on 24 May 2012 at 13:55
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The increasing availability of cloud resources is leading the scientific community to consider a choice between Grid and Cloud. The DIRAC framework for distributed computing is an easy way to obtain resources from both systems. In this paper we explain the integration of DIRAC with two open-source cloud managers, OpenNebula and CloudStack. They are computing tools to manage the complexity a ... More
Presented by Victor Manuel FERNANDEZ ALBOR, Victor MENDEZ MUNOZ on 22 May 2012 at 13:55
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The LCG Applications Area relies on regular integration testing of the provided software stack. In the past, regular builds have been provided using a system which has been constantly changed and developed, adding new features like server-client communication, a long-term history of results and a summary web interface using present-day web technologies. However, the ad-hoc style of software devel ... More
Presented by Mr. Victor DIEZ GONZALEZ on 24 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The LHCb Data Management System is based on the DIRAC Grid Community Solution. LHCbDirac provides extensions to the basic DMS such as a Bookkeeping System. Datasets are defined as sets of files corresponding to a given query in the Bookkeeping system. Datasets can be manipulated by CLI tools as well as by automatic transformations (removal, replication, processing). A dynamic handling of dataset r ... More
Presented by Philippe CHARPENTIER on 21 May 2012 at 14:45
Type: Parallel Session: Online Computing
Track: Online Computing (track 1)
The Muon Ionization Cooling Experiment (MICE) is designed to test transverse cooling of a muon beam, demonstrating an important step along the path toward creating future high intensity muon beam facilities. Protons in the ISIS synchrotron impact a titanium target, producing pions which decay into muons that propagate through the beam line to the MICE cooling channel. Along the beam line, partic ... More
Presented by Linda CONEY on 22 May 2012 at 17:50
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The configuration database (CDB) is the memory of the Muon Ionisation Cooling Experiment (MICE). Its principal aim is to store temporal data associated with the running conditions of the experiment. These data can change on a per-run basis (e.g. magnet currents, high voltages), or on long time scales (e.g. cabling, calibration, and geometry). These data are used throughout the life cycle of experi ... More
Presented by Dr. Antony WILSON on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The Tile Calorimeter (TileCal), one of the ATLAS sub-detectors, has four partitions, where each one contains 64 modules and each module has up to 48 photomultipliers (PMTs), totalling more than 10,000 electronic channels. The Monitoring and Calibration Web System (MCWS) supports data quality analyses at the channel level. This application was developed to assess the detector status and verify its perfor ... More
Presented by Andressa SIVOLELLA GOMES on 24 May 2012 at 13:30
Type: Parallel Session: Online Computing
Track: Online Computing (track 1)
The NOvA experiment at Fermi National Accelerator Lab has been designed and optimized to perform a suite of measurements critical to our understanding of neutrino properties, oscillations and interactions. NOvA presents a unique set of data acquisition and computing challenges due to the immense size of the detectors, the data volumes that are generated through the continuous ... More
Presented by Andrew NORMAN on 22 May 2012 at 16:35
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The NOvA experiment at Fermi National Accelerator Lab uses a sophisticated timing distribution system to synchronize more than 12,000 front-end readout and data acquisition systems at both the near detector and accelerator complex located at Fermilab and at the far detector located 810 km away at Ash River, MN. This global synchronization is performed to an absolute clock time with ... More
Presented by Andrew NORMAN on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
NA61/SHINE (SHINE = SPS Heavy Ion and Neutrino Experiment) is an experiment at the CERN SPS using the upgraded NA49 hadron spectrometer. Among its physics goals are precise hadron production measurements for improving calculations of the neutrino beam flux in the T2K neutrino oscillation experiment as well as for more reliable simulations of cosmic-ray air showers. Moreover, p+p, p+Pb and nucleus+ ... More
Presented by Roland SIPOS on 24 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
As it enters adolescence the Open Science Grid (OSG) is bringing a maturing fabric of Distributed High Throughput Computing (DHTC) services that supports an expanding HEP community to an increasingly diverse spectrum of domain scientists. Working closely with researchers on campuses throughout the US and in collaboration with national cyberinfrastructure initiatives, we transform their computing ... More
Presented by Mrs. Ruth PORDES on 24 May 2012 at 15:10
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
Pandora is a robust and efficient framework for developing and running pattern-recognition algorithms. It was designed to perform particle flow calorimetry, which requires many complex pattern-recognition techniques to reconstruct the paths of individual particles through fine granularity detectors. The Pandora C++ software development kit (SDK) consists of a single library and a number of careful ... More
Presented by Dr. John MARSHALL on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
PhEDEx is the data-transfer management solution written by CMS. It consists of agents running at each site, a website for presentation of information, and a web-based data-service for scripted access to information. The website allows users to monitor the progress of data-transfers, the status of site agents and links between sites, and the overall status and behaviour of everything about PhEDE ... More
Presented by Dr. Tony WILDISH on 22 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
A Grid is a geographically distributed environment with autonomous sites that share resources collaboratively. In this context, the main issue within a Grid is encouraging site-to-site interactions, increasing the trust, confidence and reliability of the sites to share resources. To achieve this, the trust concept is a vital component of every service transaction, and needs to be applied in the all ... More
Presented by Mrs. Jianlin ZHU on 22 May 2012 at 17:00
Type: Parallel Session: Online Computing
Track: Online Computing (track 1)
The LHCb experiment is a spectrometer dedicated to the study of heavy flavor at the LHC. The rate of proton-proton collisions at the LHC is 15 MHz, but disk space limitations mean that only 3 kHz can be written to tape for offline processing. For this reason the LHCb data acquisition system -- trigger -- plays a key role in selecting signal events and rejecting background. In contrast to previous ... More
Presented by Mariusz WITEK on 21 May 2012 at 17:50
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
To configure a data-taking run, the ATLAS systems and detectors store more than 150 MBytes of data-acquisition-related configuration information in OKS[1] XML files. The total number of files exceeds 1300 and they are updated by many system experts. In the past, after such updates we occasionally experienced problems with the configuration of a run caused by XML syntax errors or inconsistent st ... More
Presented by Mr. Igor SOLOVIEV on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Messaging is seen as an attractive mechanism to simplify and extend several portions of the Grid middleware, from low level monitoring to experiments dashboards. The messaging service currently used by WLCG is operated by EGI and consists of four tightly coupled brokers running ActiveMQ and designed to host the Grid operational tools such as SAM. This service is successfully being used by seve ... More
Presented by Lionel CONS, Massimo PALADIN on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The WNoDeS software framework (http://web.infn.it/wnodes) uses virtualization technologies to provide access to a common pool of dynamically allocated computing resources. WNoDeS can process batch and interactive requests, in local, Grid and Cloud environments. A problem of resource allocation in Cloud environments is the time it takes to actually allocate the resource and make it available to ... More
Presented by Daniele ANDREOTTI, Gianni DALLA TORRE on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
We present the development and first experience of a new component (termed WorkQueue) in the CMS workload management system. This component provides a link between a global request system (Request Manager) and agents (WMAgents) which process requests at compute and storage resources (known as sites). These requests typically consist of creation or processing of a data sample (possibly terabytes in ... More
Presented by Dr. Stuart WAKEFIELD on 22 May 2012 at 13:30
Type: Parallel Session: Collaborative tools
Track: Collaborative tools (track 6)
In this talk, we will explain how CERN digital library services have evolved to deal with the publication of the first results of the LHC. We will describe the work-flow of the documents on CERN Document Server and the diverse constraints relative to this work-flow. We will also give an overview on how the underlying software, Invenio, has been enriched to cope with special needs. In a second p ... More
Presented by Ludmila MARIAN on 21 May 2012 at 14:45
Type: Poster Session: Poster Session
Track: Collaborative tools (track 6)
A project to allow long term access and physics analysis of ZEUS data (ZEUS data preservation) has been established in collaboration with the DESY-IT group. In the ZEUS approach the analysis model is based on the Common Ntuple project, under development since 2006. The real data and all presently available Monte Carlo samples are being preserved in a flat ROOT ntuple format. There is ongoi ... More
Presented by Katarzyna WICHMANN on 24 May 2012 at 13:30
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
Future "Intensity Frontier" experiments at Fermilab are likely to be conducted by smaller collaborations, with fewer scientists, than is the case for recent "Energy Frontier" experiments. *art* is an event-processing framework designed with the needs of such experiments in mind. The authors have been involved with the design and implementation of frameworks for several experiments, including D ... More
Presented by Dr. Marc PATERNO on 21 May 2012 at 17:00
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
For a few years, OSG has been operating a glideinWMS factory at UCSD for several scientific communities, including CMS analysis, HCC and GLOW. This setup worked fine, but it had become a single point of failure. OSG thus recently added another instance at Indiana University, serving the same user communities. Similarly, CMS has been operating a glidein factory dedicated to reprocessing activities a ... More
Presented by Mr. Igor SFILIGOI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Originally the ATLAS computing model assumed that the Tier2s of each of the 10 clouds keep on disk collectively at least one copy of all "active" AOD and DPD datasets. Evolution of ATLAS computing and data models requires changes in ATLAS Tier2s policy for the data replication, dynamic data caching and remote data access. Tier2 operations take place completely asynchronously with respect to d ... More
Presented by Dr. Santiago GONZALEZ DE LA HOZ on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Resources of large computer centers used in physics computing today are optimised for the WLCG framework and reflect the typical data-access footprint of reconstruction and analysis. A traditional Tier 1 centre like GridKa at KIT comprises thousands of hosts and many petabytes of disk and tape storage that are used mostly by a single community. The required size as well as the intrinsic difficulties ... More
Presented by Jos VAN WEZEL on 22 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
As the mainstream computing world has shifted from multi-core to many-core platforms, the situation for software developers has changed as well. With the numerous hardware and software options available, choices balancing programmability and performance are becoming a significant challenge. The expanding multiplicative dimensions of performance offer a growing number of possibilities that need to ... More
Presented by Andrzej NOWAK on 24 May 2012 at 16:35
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The Controls Middleware (CMW) project was launched over ten years ago. Its main goal was to unify middleware solutions used to operate CERN accelerator complex. A key part of the project, the equipment access library RDA, was based on CORBA, an unquestionable middleware standard at the time. RDA became an operational and critical part of the infrastructure, yet the demanding run-time environment r ... More
Presented by Andrzej DWORAK on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The distributed NDGF Tier-1 and associated Nordugrid clusters are well integrated into the ATLAS computing model but follow a slightly different paradigm than other ATLAS resources. The current strategy does not divide the sites as in the commonly used hierarchical model, but rather treats them as a single storage endpoint and a pool of distributed computing nodes. The next generation ARC middlewa ... More
Presented by Andrej FILIPCIC on 22 May 2012 at 13:30
Type: Parallel Session: Online Computing
Track: Online Computing (track 1)
The ATLAS experiment at the Large Hadron Collider at CERN relies on a complex and highly distributed Trigger and Data Acquisition (TDAQ) system to gather and select particle collision data at unprecedented energy and rates. The TDAQ is composed of three levels which reduce the event rate from the design bunch-crossing rate of 40 MHz to an average event recording rate of about 200 Hz. The first p ... More
Presented by Andrea NEGRI on 21 May 2012 at 15:10
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
After two years of LHC data taking, processing and analysis and with numerous changes in computing technology, a number of aspects of the experiments’ computing as well as WLCG deployment and operations need to evolve. As part of the activities of the Experiment Support group in CERN’s IT department, and reinforced by effort from the EGI-InSPIRE project, we present work aimed at common solutio ... More
Presented by Dr. Maria GIRONE on 21 May 2012 at 15:10
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
In this paper we will primarily describe the experience of going through an EU procurement. We will describe what a PQQ (Pre-Qualification Questionnaire) is and some of the requirements for vendors, such as ITIL and PRINCE2 project management qualifications. We will describe how the technical part was written, including requirements from the main users and the university logistic requirements to the i ... More
Presented by Alessandra FORTI on 22 May 2012 at 13:30
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
For more than a year, the ATLAS Western Tier 2 (WT2) at the SLAC National Accelerator Laboratory has been successfully operating a two-tiered storage system based on Xrootd's flexible cross-cluster data placement framework, the File Residency Manager. The architecture allows WT2 to provide both high-performance storage at the higher tier for ATLAS analysis jobs and large, low-cost disk capacity at the lo ... More
Presented by Wei YANG, Andrew HANUSHEVSKY on 22 May 2012 at 13:55
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The final step in a HEP data-processing chain is usually to reduce the data to a `tuple' form which can be efficiently read by interactive analysis tools such as ROOT. Often, this is implemented independently by each group analyzing the data, leading to duplicated effort and needless divergence in the format of the reduced data. ATLAS has implemented a common toolkit for performing this processin ... More
Presented by Scott SNYDER on 24 May 2012 at 13:30
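The reduction step this abstract describes — flattening full event records into a small "tuple" of analysis variables — can be illustrated as below. The record layout, field names and selected columns are invented for the sketch and bear no relation to the actual ATLAS toolkit.

```python
# Toy full event records (structure is an assumption for illustration)
events = [
    {"run": 1, "jets": [{"pt": 50.0}, {"pt": 30.0}], "met": 12.0},
    {"run": 1, "jets": [{"pt": 80.0}], "met": 40.0},
]

def to_tuple(event):
    # keep only the flat columns the downstream analysis needs,
    # so interactive tools can read them efficiently
    lead_jet_pt = max(j["pt"] for j in event["jets"])
    return (event["run"], len(event["jets"]), lead_jet_pt, event["met"])

ntuple = [to_tuple(e) for e in events]
print(ntuple[0])  # (1, 2, 50.0, 12.0)
```

Doing this once with a shared toolkit, rather than per analysis group, is what avoids the duplicated effort and divergent formats the abstract mentions.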
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
In the ATLAS experiment the collection, processing, selection and conveyance of event data from the detector front-end electronics to mass storage is performed by the ATLAS online farm consisting of more than 3000 PCs with various characteristics. To assure the correct and optimal working conditions the whole online system must be constantly monitored. The monitoring system should be able to check ... More
Presented by Georgiana Lavinia DARLEA on 22 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The operation of the CMS computing system requires a complex monitoring system to cover all its aspects: central services, databases, the distributed computing infrastructure, production and analysis workflows, the global overview of the CMS computing activities and the related historical information. Several tools are available to provide this information, developed both inside and outside of the ... More
Presented by Dr. Andrea SCIABA, Lothar A.T. BAUERDICK on 22 May 2012 at 17:50
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The CMS experiment has adopted a computing system where resources are distributed worldwide in more than 50 sites. The operation of the system requires a stable and reliable behavior of the underlying infrastructure. CMS has established procedures to extensively test all relevant aspects of a site and their capability to sustain the various CMS computing workflows at the required scale. The Site R ... More
Presented by José FLIX on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The Large Hadron Collider (LHC) at CERN is the world's largest particle accelerator, which collides proton beams at an unprecedented centre of mass energy of 7 TeV. ATLAS is a multipurpose experiment that records the products of the LHC collisions. In order to reconstruct the trajectories of charged particles produced in these collisions, ATLAS is equipped with a tracking system (Inner Detec ... More
Presented by Anthony MORLEY on 24 May 2012 at 13:30
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
The Silicon Vertex Detector (SVD) of the Belle II experiment is a newly developed device with four measurement layers. The detector is designed to enable track reconstruction down to the lowest momenta possible, in order to significantly increase the effective data sample and the physics potential of the experiment. Both track finding and track fitting have to deal with these requirements. We ... More
Presented by Mr. Moritz NADLER, Rudolf FRUHWIRTH, Jakob LETTENBICHLER on 24 May 2012 at 15:10
Session: Plenary
Presented by Tony JOHNSON on 24 May 2012 at 11:30
Session: Plenary
Presented by Johannes ELMSHEUSER on 25 May 2012 at 09:00
Session: Plenary
Presented by Dr. Adam LYON on 25 May 2012 at 09:30
Session: Plenary
Presented by Andreas HEISS on 25 May 2012 at 10:30
Session: Plenary
Presented by Remi MOMMSEN on 25 May 2012 at 08:30
Session: Plenary
Presented by David LANGE on 25 May 2012 at 11:00
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The track and vertex reconstruction algorithms of the ATLAS Inner Detector have demonstrated excellent performance in the early data from the LHC. However, the rapidly increasing number of interactions per bunch crossing introduces new challenges both in computational aspects and physics performance. We will discuss the strategy adopted by ATLAS in response to this increasing multiplicity by balan ... More
Presented by Heather GRAY, Simone PAGAN GRISO, Christoph WASICKI on 24 May 2012 at 13:30
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
The high data rates expected from the planned detectors at FAIR (CBM, PANDA) call for dedicated attention with respect to the computing power needed in online (e.g. high-level event selection) and offline analysis. Graphics processing units (GPUs) have evolved into high-performance co-processors that can be easily programmed with common high-level languages such as C, Fortran and C++. Today's GPU ... More
Presented by Dr. Mohammad AL-TURANY on 24 May 2012 at 17:25
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
The reconstruction and simulation of collision events is a major task in modern HEP experiments, involving several tens of thousands of standard CPUs. On the other hand, graphics processors (GPUs) have become much more powerful and by far outperform standard CPUs in terms of floating-point operations due to their massively parallel approach. The usage of these GPUs could therefore signific ... More
Presented by Johannes MATTMANN on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
Hadronic tau decays play a crucial role in taking Standard Model measurements as well as in the search for physics beyond the Standard Model. However, hadronic tau decays are difficult to identify and trigger on due to their resemblance to QCD jets. Given the large production cross section of QCD processes, designing and operating a trigger system with the capability to efficiently select hadronic ... More
Presented by Patrick CZODROWSKI on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
In the large LHC experiments the majority of computing resources are provided by the participating countries. These resource pledges account for more than three quarters of the total available computing. The experiments are asked to give indications of their requests three years in advance and to evolve these as the details and constraints become clearer. In this presentation we will discuss the r ... More
Presented by Dr. Peter KREUZER on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
In this paper we will present the efforts carried out in the UK to fix the WAN transfers problem highlighted by the ATLAS sonar tests. We will present the work done at site level, the monitoring tools at local level on the machines (ifstat, tcpdump, netstat...), between sites (iperf) and at FTS level monitoring. We will describe the effort to setup a mini-mesh to simplify the sonar tests setup sep ... More
Presented by Alessandra FORTI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The ATLAS Online farm is a non-homogeneous cluster of more than 3000 PCs which run the data acquisition, trigger and control of the ATLAS detector. The systems are configured and monitored by a combination of open-source tools, such as Quattor and Nagios, and tools developed in-house, such as ConfDB. We report on the ongoing introduction of new provisioning and configuration tools, Puppet and ... More
Presented by Georgiana Lavinia DARLEA on 22 May 2012 at 13:30
Type: Parallel Session: Online Computing
Track: Online Computing (track 1)
The Data Acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment at CERN assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s. By the time the LHC restarts after the 2013/14 shut-down, the current compute nodes and networking infrastructure will have reached the end of their lifetime. We are presenting design studies for an upgrade of ... More
Presented by Andrea PETRUCCI on 21 May 2012 at 17:00
Session: Plenary
Presented by Wesley SMITH on 21 May 2012 at 11:00
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Modern HEP-related calculations have traditionally been beyond the capabilities of donated desktop machines, in particular because of the complex deployment of the needed software. The popularization of efficient virtual machine technology, and in particular the CernVM appliance, which allows only the needed subset of the ATLAS software environment to be dynamically downloaded, has made such computa ... More
Presented by Anders WAANANEN on 22 May 2012 at 13:30
Type: Parallel Session: Event Processing
Track: Event Processing (track 2)
Modern HEP analysis requires multiple passes over large datasets. For example, one must first reweight the jet energy spectrum in Monte Carlo to match data before plotting any other jet-related variable. This requires one pass over the Monte Carlo and the data to derive the reweighting, and then another pass over the Monte Carlo to plot the variables of actual interest. With ... More
Presented by Gordon WATTS on 22 May 2012 at 14:45
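The two-pass pattern described in this abstract can be sketched as follows: a first pass builds the data/MC ratio per jet-energy bin, and a second pass looks up a per-event weight. The binning and sample values are illustrative only, not taken from the contribution.

```python
import bisect

def derive_weights(data_jets, mc_jets, edges):
    """First pass: ratio of data/MC jet-energy histograms per bin."""
    def histogram(values):
        counts = [0] * (len(edges) - 1)
        for v in values:
            i = bisect.bisect_right(edges, v) - 1
            if 0 <= i < len(counts):
                counts[i] += 1
        return counts
    d, m = histogram(data_jets), histogram(mc_jets)
    return [di / mi if mi else 1.0 for di, mi in zip(d, m)]

def weight_for(jet_e, edges, weights):
    """Second pass: look up the weight to apply when plotting other variables."""
    i = bisect.bisect_right(edges, jet_e) - 1
    return weights[i] if 0 <= i < len(weights) else 1.0

# Toy example: three energy bins, a handful of jets.
edges = [0, 50, 100, 200]
w = derive_weights([20, 60, 60, 150], [20, 60, 150, 150], edges)
```

Any framework that avoids the second explicit pass (as the abstract suggests) still has to compute these per-bin ratios somewhere; the sketch shows only the minimal arithmetic involved.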
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
Data storage and access are key to CPU-intensive and data-intensive high-performance Grid computing. Hadoop is an open-source data processing framework that includes a fault-tolerant and scalable distributed data processing model and execution environment, MapReduce, and a distributed file system, the Hadoop Distributed File System (HDFS). HDFS was deployed and tested within the ... More
Presented by Hassen RIAHI on 22 May 2012 at 13:30
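The MapReduce model named above splits a computation into a map phase over input records and a reduce phase over grouped intermediate keys. A minimal in-memory sketch of that model (this is the concept only, not the Hadoop API):

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    # Map phase: each record emits zero or more (key, value) pairs.
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    # The grouping above plays the role of the shuffle;
    # the reduce phase folds each key's values into one result.
    return {key: reducer(key, values) for key, values in groups.items()}

# Classic word count as the mapper/reducer pair.
counts = map_reduce(
    ["grid data grid", "data"],
    mapper=lambda line: [(w, 1) for w in line.split()],
    reducer=lambda key, values: sum(values),
)
```

In Hadoop the same mapper/reducer pair would run distributed over HDFS blocks, with the framework handling partitioning and fault tolerance.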
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
We describe the work on creating system images of Lustre virtual clients in the ExTENCI project, using several virtualization technologies (KVM, XEN, VMware). These virtual machines can be built at several levels: from a basic Linux installation (we use Scientific Linux 5 as an example), through a Lustre client with Kerberos authentication, up to complete clients including local or distributed (based ... More
Presented by Dr. Dimitri BOURILKOV on 22 May 2012 at 13:30
Type: Parallel Session: Computer Facilities, Production Grids and Networking
Track: Computer Facilities, Production Grids and Networking (track 4)
While the LHC data movement systems have demonstrated the ability to move data at the necessary throughput, we have identified two weaknesses: the latency for physicists to access data and the complexity of the tools involved. To address these, both ATLAS and CMS have begun to federate regional storage systems using Xrootd. Xrootd, referring to a protocol and implementation, allows us to provide ... More
Presented by Brian Paul BOCKELMAN on 22 May 2012 at 14:20
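The local-then-federation fallback that such an Xrootd federation enables can be sketched as below. The redirector hostname and the `open_remote` helper are hypothetical stand-ins; a real client would use the xrootd client library rather than this placeholder.

```python
# Sketch of the read-fallback pattern a storage federation enables:
# try the site-local replica first, then ask the global redirector.
# REDIRECTOR and open_remote() are hypothetical, for illustration only.

import os

REDIRECTOR = "xrootd-redirector.example.org"  # hypothetical global redirector

def open_remote(url):
    """Stand-in for an xrootd client open; here it just records the URL."""
    return {"url": url}

def open_with_fallback(lfn, local_prefix="/data"):
    local_path = os.path.join(local_prefix, lfn.lstrip("/"))
    if os.path.exists(local_path):
        return open(local_path, "rb")          # fast, site-local access
    # Not on local storage: the federation redirector locates a replica.
    return open_remote(f"root://{REDIRECTOR}/{lfn}")

handle = open_with_fallback("/store/user/example/file.root")
```

The point of the federation is exactly this transparency: the job asks for a logical file name, and either a local or a remote replica is served without the user tracking data location.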
Type: Poster Session: Poster Session
Track: Collaborative tools (track 6)
Particle physics conferences and experiments generate a huge number of plots and presentations; it is impossible to keep up. A typical conference (like CHEP) will have hundreds of plots, and a single analysis result from a major experiment can contain almost 50 plots. Scanning a conference or sorting out which plots are new is almost a full-time job. The advent of multi-core computing and advanced video car ... More
Presented by Gordon WATTS on 24 May 2012 at 13:30
Session: Plenary
Presented by Philippe GALVEZ on 23 May 2012 at 11:30
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The Visual Physics Analysis (VISPA) project addresses the typical development cycle of (re-)designing, executing, and verifying an analysis. It presents an integrated graphical development environment for physics analyses, using the Physics eXtension Library (PXL) as the underlying C++ analysis toolkit. Basic guidance for the project comes from the paradigms of object-oriented programming, data flo ... More
Presented by Prof. Martin ERDMANN on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The current ATLAS Tier3 infrastructure consists of a variety of sites of different sizes and with a mix of local resource management system (LRMS) and mass storage system (MSS) implementations. The Tier3 monitoring suite, developed to satisfy the needs of Tier3 site administrators and to aggregate Tier3 monitoring information at the global VO level, needs to be validated for ... More
Presented by Mikalai KUTOUSKI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
In this paper we present the Geant4 validation and testing suite. The application is used to test any new Geant4 release. The simulation of a particularly demanding use-case (High Energy Physics calorimeters) is tested with different physics parameters. The suite is integrated with a job submission system that allows for the generation of high statistics data-sets on distributed resources. The ... More
Presented by Andrea DOTTI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
Virtualization techniques have become a key topic in computing in recent years. In the Grid, the virtualization of worker nodes is the most prominent discussion; currently, concepts for the provenance and sharing of images are under debate. The virtualization of Grid servers, though, is already a common and successful practice. At DESY, one of the largest WLCG Tier-2 centres world-wide and hom ... More
Presented by Andreas GELLRICH on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
In this presentation we will address the development of a prototype virtualized worker node cluster, using Scientific Linux 6.x as a base OS, KVM for virtualization, and the Condor batch software to manage virtual machines. The discussion provides details on our experiences with building, configuring, and deploying the various components from bare metal, including the base OS, the virtualized OS ... More
Presented by William STRECKER-KELLOGG on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Computer Facilities, Production Grids and Networking (track 4)
The LCG (Worldwide LHC Computing Grid) is a grid-based hierarchical distributed computing facility, composed of more than 140 computing centres organized into 4 tiers by size and services offered. Every site, although independent in many technical choices, has to provide services with a well-defined set of interfaces. For this reason, different LCG sites frequently need to manage very similar si ... More
Presented by Ivano Giuseppe TALAMO on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
In production Grid infrastructures deploying the EMI (European Middleware Initiative) middleware release, the Workload Management System (WMS) is the service responsible for the distribution of user tasks to the remote computing resources. Monitoring the reliability of this service, the job lifecycle, and the workflow patterns generated by different user communities is an important and challenging activ ... More
Presented by Danilo DONGIOVANNI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The Disk Pool Manager (DPM) and LCG File Catalog (LFC) are two grid data management components currently used in production at more than 240 sites. Together with a set of grid client tools, they give users a unified view of their data, hiding most details of data location and access. Recently we have put substantial effort into developing a reliable and high-performance HTTP/WebDAV fronten ... More
Presented by Alejandro ALVAREZ AYLLON, Ricardo BRITO DA ROCHA on 22 May 2012 at 13:30
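An HTTP/WebDAV frontend means standard clients can list and fetch files with plain HTTP requests. The sketch below only *builds* a WebDAV PROPFIND request for a depth-1 directory listing to show the protocol shape; the host and path are hypothetical examples, and nothing is sent over the network.

```python
# Build (but do not send) a WebDAV PROPFIND request, the operation a
# client would use to list a directory through an HTTP/WebDAV frontend.
# Host and path below are hypothetical illustrations.

def build_propfind(host, path):
    """Return the raw request text for a depth-1 WebDAV directory listing."""
    body = (
        '<?xml version="1.0"?>'
        '<propfind xmlns="DAV:"><prop><getcontentlength/></prop></propfind>'
    )
    return (
        f"PROPFIND {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Depth: 1\r\n"                      # immediate children only
        "Content-Type: application/xml\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
        f"{body}"
    )

request = build_propfind("dpm.example.org", "/dpm/example.org/home/vo/")
```

Because PROPFIND, GET and PUT are standard HTTP methods, any WebDAV-capable tool or browser plugin can act as a client, which is precisely the appeal of such a frontend over grid-specific protocols.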
Session: Plenary
Presented by Michael ERNST on 21 May 2012 at 08:30
Session: Plenary
Presented by Dr. Paul HORN on 21 May 2012 at 08:35
Type: Poster Session: Poster Session
Track: Software Engineering, Data Stores and Databases (track 5)
The EMI project is based on the collaboration of four major middleware projects in Europe, all of which were already developing middleware products and had pre-existing strategies for developing, releasing and controlling their software artefacts. In total, the EMI project comprises about thirty individual development teams, called “Product Teams” in EMI. A Product Team is responsible for the e ... More
Presented by Maria ALANDES PRADILLO on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The EU-funded project EMI, now in its second year, aims at providing a unified, high-quality middleware distribution for e-Science communities. Several aspects of workload management over diverse distributed computing environments are being addressed by the EMI roadmap: enabling seamless access to both HTC and HPC computing services, implementing a commonly agreed framework for the execution o ... More
Presented by Marco CECCHI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The XRootD server framework is becoming increasingly popular in the HEP community and beyond due to its simplicity, scalability and ability to construct distributed storage federations. With growing adoption and new use cases emerging, it has become clear that the XRootD client code has reached a stage where a significant refactoring of the code base is necessary to remove, by now, unneed ... More
Presented by Lukasz JANYST on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
During spring and summer 2011, CMS deployed Xrootd front-end servers at all US T1 and T2 sites. This allows remote access to all experiment data and is used for user analysis, visualization, running jobs at T2s and T3s when data is not available locally, and as a fail-over mechanism for data access in CMSSW jobs. Monitoring of the Xrootd infrastructure is implemented on three levels. O ... More
Presented by Matevz TADEL on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Online Computing (track 1)
The online event selection is crucial for rejecting most events containing uninteresting background collisions while preserving as much as possible of the interesting physics signals. The b-jet selection is part of the trigger strategy of the ATLAS experiment; a set of dedicated triggers has been in place since the beginning of the 2011 data-taking period and contributes to keeping the total bandwid ... More
Presented by Alexander OH on 24 May 2012 at 13:30
Type: Parallel Session: Distributed Processing and Analysis on Grids and Clouds
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
For over a decade, dCache has been synonymous with large-capacity, fault-tolerant storage using commodity hardware that supports seamless data migration to and from tape. Over that time, it has satisfied the requirements of various demanding scientific user communities to store their data, transfer it between sites, and provide fast, site-local access. When the dCache project started, the focus was on ... More
Presented by Paul MILLAR on 24 May 2012 at 16:35
Type: Parallel Session: Software Engineering, Data Stores and Databases
Track: Software Engineering, Data Stores and Databases (track 5)
dCache is a high-performance, scalable storage system widely used by the HEP community. In addition to a set of home-grown protocols, we also provide industry-standard access mechanisms like WebDAV and NFSv4.1. This support places dCache in direct competition with commercial solutions. Nevertheless, conforming to a protocol is not enough; our implementations must perform comparably to or even better than com ... More
Presented by Mr. Tigran MKRTCHYAN on 22 May 2012 at 16:35
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
After about two years of data taking with the ATLAS detector, considerable experience with the custom-developed trigger monitoring and reprocessing infrastructure has been collected. The trigger monitoring can be roughly divided into online and offline monitoring. The online monitoring calculates and displays rates at every level of the trigger and evaluates up to 3000 data quality histograms. T ... More
Presented by Diego CASADEI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
Multi-user pilot infrastructures provide significant advantages for the communities using them, but also create new security challenges. With Grid authorization and mapping performed with the pilot credential only, the final user identity is not properly addressed in the classic Grid paradigm. To solve this problem, OSG and EGI have deployed glexec, a privileged executable on the worker nod ... More
Presented by Mr. Igor SFILIGOI on 22 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Distributed Processing and Analysis on Grids and Clouds (track 3)
The hBrowse framework is a generic monitoring tool designed to meet the needs of various communities connected to grid computing. It is highly configurable and easy to adapt to a specific community's needs. It is an HTML/JavaScript client-side application that uses the latest web technologies to provide a presentation layer for arbitrary hierarchical data structures. Each part of t ... More
Presented by Lukasz KOKOSZKIEWICZ on 22 May 2012 at 13:30
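The core idea named above, presenting arbitrary hierarchical data independently of its source, can be sketched minimally as follows. This is only an illustration of the concept, not hBrowse's actual JavaScript API; the job-monitoring data is invented for the example.

```python
# Minimal sketch of a hierarchical-data presentation layer: walk a nested
# dict and render it as an indented text tree. The structure of the input
# (sites -> counters) is a hypothetical monitoring example.

def render_tree(node, name="root", depth=0):
    """Return the lines of an indented text rendering of a nested dict."""
    lines = [f"{'  ' * depth}{name}"]
    if isinstance(node, dict):
        for child_name in sorted(node):
            lines.extend(render_tree(node[child_name], child_name, depth + 1))
    else:
        lines[-1] += f": {node}"            # leaf: append its value
    return lines

jobs = {"site_A": {"running": 12, "failed": 1}, "site_B": {"running": 7}}
tree = "\n".join(render_tree(jobs, "jobs"))
```

The same recursive walk works for any depth of nesting, which is what makes such a presentation layer generic across communities and data sources.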
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
iSpy is a general-purpose event-data and detector visualization program that was developed as an event display for the CMS experiment at the LHC and has seen use by the general public and by teachers and students in the context of education and outreach. Central to the iSpy design philosophy are ease of installation, use, and extensibility. The application itself uses the open-access packages Qt4 a ... More
Presented by Dr. Thomas MC CAULEY on 24 May 2012 at 13:30
Type: Poster Session: Poster Session
Track: Event Processing (track 2)
slic: a Geant4 simulation program. As the complexity and resolution of particle detectors increase, the need for detailed simulation of the experimental setup also increases. Designing experiments requires efficient tools to simulate detector response and optimize the cost-benefit ratio of design options. We have developed efficient and flexible tools for detailed physics and detector re ... More
Presented by Norman Anthony GRAF on 24 May 2012 at 13:30
Type: Poster Session: