Gero Müller
(RWTH Aachen University)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
Many programs in experimental particle physics still lack a graphical interface, or impose strict platform and software requirements. With the latest developments of the VISPA project, we provide graphical interfaces to existing software programs and access to multiple computing clusters through standard web browsers. The scalable client-server system allows analyses to be performed...
John Bland
(University of Liverpool)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
Liverpool is consistently amongst the top Tier-2 sites in Europe in terms of efficiency and cluster utilisation. This presentation will cover the work done at Liverpool over the last six years to maximise and maintain efficiency and productivity at their Tier-2 site, with an overview of the tools used (including established, emerging, and locally developed solutions) for monitoring, testing,...
Philipp Sitzmann
(Goethe University Frankfurt)
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
CMOS Monolithic Active Pixel Sensors (MAPS) have demonstrated excellent performance as tracking detectors for charged particles, combining outstanding spatial resolution (a few µm), an ultra-light material budget (50 µm) and advanced radiation tolerance (>1 Mrad, >1e13 neq/cm²). They were therefore chosen for the vertex detectors of STAR and CBM and are foreseen to equip the upgraded ALICE-ITS. They...
Vincenzo Spinoso
(Universita e INFN (IT))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
Running and monitoring simulations usually involves several different aspects of the entire workflow: the configuration of the job, the site issues, the software deployment at the site, the file catalogue, the transfers of the simulated data. In addition, the final product of the simulation is often the result of several sequential steps. This project tries a different approach to monitoring...
Daniel Hugo Campora Perez
(CERN)
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
The LHCb Software Infrastructure is built around a flexible, extensible, single-process, single-threaded framework named Gaudi. One way to optimise the overall usage of a multi-core server, which is used for example in the Online world, is running multiple instances of Gaudi-based applications concurrently. For LHCb, this solution has been shown to work well up to 32 cores and is expected...
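The strategy described here, filling a multi-core server with independent single-threaded application instances rather than threading one application, can be sketched generically in Python (the worker function and task list are hypothetical stand-ins, not part of Gaudi):

```python
from multiprocessing import Pool

def process_events(task):
    """Stand-in for one single-threaded application instance working
    through its share of events (hypothetical workload: sum of event
    numbers in the assigned range)."""
    first, count = task
    return sum(range(first, first + count))

def run_instances(n_workers, tasks):
    """Run independent instances concurrently, one task per instance,
    mirroring the 'multiple instances per server' strategy."""
    with Pool(processes=n_workers) as pool:
        return pool.map(process_events, tasks)

if __name__ == "__main__":
    # Four instances, each handling 100 events.
    print(run_instances(4, [(i * 100, 100) for i in range(4)]))
```

Because each instance is a separate process, scaling is limited mainly by memory per instance, which is one of the concerns motivating the work described in the abstract.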
Andrea Formica
(CEA/IRFU, Centre d'etude de Saclay Gif-sur-Yvette (FR))
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
The ATLAS muon alignment system is composed of about 6000 optical sensors for the Barrel muon spectrometer and the same number for the 2 Endcaps wheels.
The system acquires data from every sensor continuously, with a whole read-out cycle of about 10 minutes. The read-out chain stores the data in an Oracle DB. These data are used as input by the alignment algorithms (C++ based) in...
Mr
MA Binsong
(IPN Orsay France)
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
The PANDA (AntiProton ANnihilation at DArmstadt) experiment is one of the key projects at the future Facility for Antiproton and Ion Research (FAIR), which is currently under construction at Darmstadt. This experiment will perform precise studies of antiproton-proton and antiproton-nucleus annihilation reactions. The aim of the rich experimental program is to improve our knowledge of the...
Marco Clemencic
(CERN)
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
The nightly build system used so far by LHCb has been implemented as an extension of the system developed by the CERN PH/SFT group (as presented at CHEP2010). Although this version has been working for many years, it has several limitations in terms of extensibility, management and ease of use, so it was decided to develop a new version based on a continuous integration system.
In this...
Mr
Peter Waller
(University of Liverpool (GB))
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
The focus in many software architectures of the LHC experiments is to deliver a well-designed Event Data Model (EDM). Changes and additions to the stored data are often very expensive, requiring large amounts of CPU time, disk storage and manpower. At the ATLAS experiment, such a reprocessing has only been undertaken once for data taken in 2012.
However, analysts have to develop and apply...
Alessandro De Salvo
(Universita e INFN, Roma I (IT))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
In the ATLAS experiment, the calibration of the precision tracking chambers of the muon detector is very demanding, since the rate of muon tracks required to obtain a complete calibration under homogeneous conditions and to feed prompt reconstruction with fresh constants is very high (several hundred Hz for 8-10 hour runs). The calculation of calibration constants is highly CPU consuming. In...
Dr
Salman Toor
(Helsinki Institute of Physics (FI))
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments, such as the CERN HEP analyses, requires continuous exploration of new technologies and techniques. In this article we present a hybrid solution of an open source cloud with a network file system for CMS data analysis. Our aim has been to design a scalable and...
Andrea Formica
(CEA/IRFU, Centre d'etude de Saclay Gif-sur-Yvette (FR))
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
ATLAS Conditions data include about 2 TB in a relational database and 400 GB of files referenced from the database. Conditions data are entered and retrieved using COOL, the API for accessing data in the LCG Conditions Database infrastructure. They are managed using an ATLAS-customized Python-based tool set.
Conditions data are required for every reconstruction and simulation job, so access to...
Dmitry Ozerov
(D)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
In a future-proof data preservation scenario, the software and environment employed to produce and analyse high energy physics data needs to be preserved, rather than just the data themselves. A software preservation system will be presented which allows analysis software to be migrated to the latest software versions and technologies for as long as possible, substantially extending the...
Gareth Roy
(U),
Mark Mitchell
(University of Glasgow)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
With the current trend towards "On Demand Computing" in big data environments, it becomes crucial that the deployment of services and resources is increasingly automated. With open-source projects such as Canonical's MaaS and Red Hat's Spacewalk, automated deployment is available for large-scale data centre environments, but these solutions can be too complex and heavyweight for smaller,...
Derek John Weitzel
(University of Nebraska (US))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
Bosco is a software project developed by the Open Science Grid to help scientists better utilize their on-campus computing resources. Instead of submitting jobs through a dedicated gatekeeper, as most remote submission mechanisms do, it uses the built-in SSH protocol to gain access to the cluster. By using a common access method, SSH, we are able to simplify the interaction with the...
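The SSH-based submission idea can be illustrated with a small sketch; the function, the scheduler table and the command composition below are hypothetical simplifications, not Bosco's actual implementation:

```python
import shlex

def build_remote_submit(host, scheduler, script):
    """Compose the command a Bosco-like tool could run to hand a job
    to a campus cluster over plain SSH. The scheduler-to-command map
    is illustrative; the real Bosco/HTCondor glue layer is richer."""
    submit = {"pbs": "qsub", "slurm": "sbatch", "condor": "condor_submit"}[scheduler]
    # Everything after the host is executed remotely by the login shell,
    # so the script path is shell-quoted defensively.
    return ["ssh", host, submit, shlex.quote(script)]

print(" ".join(build_remote_submit("user@cluster.campus.edu", "slurm", "analysis.job")))
```

The point of the design is visible even in this sketch: the only access requirement on the cluster side is a working SSH login, not a grid gatekeeper service.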
Alexey Anisenkov
(Budker Institute of Nuclear Physics (RU))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
In this paper we describe the ATLAS Grid Information System (AGIS), the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing (ADC) applications and services.
The Information system centrally defines and exposes the topology of the ATLAS computing infrastructure including...
Ian Collier
(UK Tier1 Centre), Mr
Matthew James Viljoen
(STFC - Science & Technology Facilities Council (GB))
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
In this paper we shall introduce the service deployment framework based on Quattor and Microsoft Hyper-V at the RAL Tier 1. As an example, we will explain how the framework has been applied to CASTOR in our test infrastructure and outline our plans to roll it out into full production. CASTOR is a relatively complicated open source hierarchical storage management system in production use at...
Qiyan Li
(Goethe University Frankfurt)
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
CBM aims to measure open charm particles from 15-40 AGeV/c heavy ion collisions by means of secondary vertex reconstruction. The measurement concept includes the use of a free-running DAQ, real time tracking, primary and secondary vertex reconstruction and a tagging of open charm candidates based on secondary vertex information. The related detector challenge will be addressed with an...
Fabrizio Furano
(CERN)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
In this contribution we present a vision for the use of the HTTP protocol for data management in the context of HEP, and we present demonstrations of the use of HTTP-based protocols for storage access & management, cataloguing, federation and transfer.
The support of HTTP/WebDAV, provided by frameworks for scientific data access like DPM, dCache, STORM, FTS3 and foreseen for XROOTD, can be...
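One building block of such an HTTP federation is replica selection with failover; a minimal sketch, assuming a hypothetical `is_available` probe (in practice something like an HTTP HEAD request against each endpoint), might look like:

```python
def select_replica(replica_urls, is_available):
    """Return the first reachable replica of a file in an HTTP-based
    federation. `is_available` is a hypothetical probe callback, not
    part of any named framework."""
    for url in replica_urls:
        if is_available(url):
            return url
    raise IOError("no live replica for this file")

# Example with a stubbed probe: the first endpoint is down.
replicas = [
    "https://site-a.example.org/data/file.root",
    "https://site-b.example.org/data/file.root",
]
alive = {"https://site-b.example.org/data/file.root"}
print(select_replica(replicas, lambda u: u in alive))
```

In a real deployment this logic typically lives in the federation head node, which answers the client with an HTTP redirect to the chosen replica rather than serving the data itself.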
Francesco Giacomini
(INFN CNAF)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The success of a scientific endeavor depends, often significantly, on the ability to collect and later process large amounts of data in an efficient and effective way. Despite the enormous technological progress in areas such as electronics, networking and storage, the cost of the computing factor remains high. Moreover the limits reached by some historical directions of hardware...
Carlos Solans Sanchez
(CERN)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The Tile calorimeter is one of the sub-detectors of ATLAS. To ensure its proper operation and assess the quality of data, many tasks must be performed using a variety of tools that were developed independently to satisfy different needs. As a result, these systems are commonly implemented without a global perspective of the detector and lack basic software features. Moreover, in some cases...
Maaike Limper
(CERN)
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
As part of the CERN Openlab collaboration, an investigation has been made into the use of an SQL-based approach for physics analysis with various up-to-date software and hardware options.
Currently, physics analysis is done using data stored in customised ROOT ntuples that contain only the variables needed for a specific analysis. Production of these ntuples is mainly done by accessing the...
Dr
Giacinto Donvito
(INFN-Bari)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
The Italian community in CMS has built a geographically distributed network in which all the data stored in the Italian region are available to all users for their everyday work. This activity involves, at different levels, all the CMS centres: the Tier-1 at CNAF, all four Tier-2s (Bari, Rome, Legnaro and Pisa), and a few Tier-3s (Trieste, Perugia, etc.). The federation uses the new network...
Dr
Samuel Cadellin Skipsey
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
Of the three most widely used implementations of the WLCG Storage Element specification, Disk Pool Manager (DPM) has the simplest implementation of file placement balancing (StoRM doesn't attempt this, leaving it to the underlying filesystem, which can be very sophisticated in itself). DPM uses a round-robin algorithm (with optional filesystem weighting) for placing files across...
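A weighted round-robin placement of the kind described can be sketched as follows (an illustrative approximation, not the actual DPM code):

```python
import itertools

def weighted_round_robin(filesystems):
    """Cycle over filesystems, yielding each one `weight` times per
    round. This approximates round-robin placement with optional
    filesystem weighting; it is a sketch, not DPM's implementation."""
    expanded = [fs for fs, weight in filesystems for _ in range(weight)]
    return itertools.cycle(expanded)

# fs1 has twice the weight of fs2, so it receives two files per round.
placer = weighted_round_robin([("fs1", 2), ("fs2", 1)])
print(list(itertools.islice(placer, 6)))
```

The weakness the abstract hints at is visible here too: a static weight cannot react to filesystems filling up at different rates, which is why more adaptive balancing is worth investigating.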
Shaun De Witt
(STFC - Science & Technology Facilities Council (GB))
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
At the RAL Tier 1 we have successfully been running a CASTOR HSM instance for a number of years. While it performs well for disk-only storage for analysis and processing jobs, it is heavily optimised for tape usage. We have been investigating alternative technologies which could be used for online storage for analysis. We present the results of our preliminary selection and test results for...
Dr
Massimiliano Nastasi
(INFN Milano-Bicocca)
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
Measurements of radioactive sources, in order to reach an optimum level of accuracy, require an accurate determination of the detection efficiency of the experimental setup. In gamma ray spectroscopy, in particular, the high level of sensitivity reached nowadays implies a correct evaluation of the detection capability of source emitted photons. The standard approach, based on an analytical...
David Cameron
(University of Oslo (NO))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
Grid middleware suites provide tools to perform the basic tasks of job submission and retrieval and data access; however, these tools tend to be low-level, operating on individual jobs or files and lacking higher-level concepts. User communities therefore generally develop their own application-layer software catering to their specific communities' needs on top of the Grid middleware....
Dr
Roberto Ammendola
(INFN Roma Tor Vergata)
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
Modern Graphics Processing Units (GPUs) are now considered accelerators for general-purpose computation. A tight interaction between the GPU and the interconnection network is the strategy for exploiting the full capability-computing potential of a multi-GPU system on large HPC clusters; that is why an efficient and scalable interconnect is a key technology to finally deliver GPUs for...
Dr
Jörg Meyer
(KIT - Karlsruher Institute of Technology)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
After analysis and publication, there is no need to keep experimental data online on spinning disks. For reliability and cost reasons, inactive data are moved to tape and put into a data archive. The data archive must provide reliable access for at least ten years, following a recommendation of the German Science Foundation (DFG), but many scientific communities wish to keep data available much longer....
Mario Lassnig
(CERN)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
Rucio is the successor of the current Don Quijote 2 (DQ2) system for distributed data management (DDM) in the ATLAS experiment. The reasons for replacing DQ2 are manifold, but besides high maintenance costs and architectural limitations, scalability concerns top the list.
The data collected so far by the experiment add up to about 115 petabytes spread over 270 million...
Jaroslava Schovancova
(Brookhaven National Laboratory (US))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The ATLAS Distributed Computing (ADC) Monitoring targets three groups of customers: ADC Operations; ATLAS Management; and ATLAS sites and funding agencies. The main need of ADC Operations is to identify malfunctions early and escalate issues to an activity or service expert. ATLAS Management uses visualisation of long-term trends and accounting information about the ATLAS...
Alexey Sedov
(Universitat Autònoma de Barcelona)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
ATLAS Distributed Computing Operation Shifts have evolved to meet new requirements. New monitoring tools as well as operational changes have led to modifications in the organization of the shifts. In this paper we describe the roles of the shifts and their impact on the smooth operation of the complex computing grid employed in ATLAS, the influence of the discovery of a Higgs-like particle on shift operations, the...
Cedric Serfon
(CERN)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
The current ATLAS Distributed Data Management system (DQ2) is being replaced by a new one called Rucio. The new system has many improvements, but it requires a number of changes. One of the most significant is that Rucio will not use a local file catalogue like the LFC, which was a central component of DQ2. Instead of querying a file catalogue that stores the association of files with...
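The catalogue-free approach rests on deriving a file's storage location deterministically from its identifiers alone. A sketch of an MD5-based layout in the spirit of Rucio's convention (the exact details vary by site and configuration, so treat this as illustrative):

```python
import hashlib

def deterministic_path(scope, name):
    """Derive a storage path from (scope, name) alone, so no catalogue
    lookup is needed to locate a file. The MD5-based directory layout
    below is a sketch of the kind of convention Rucio applies."""
    digest = hashlib.md5(f"{scope}:{name}".encode()).hexdigest()
    # Two levels of short hash directories spread files evenly.
    return f"{scope}/{digest[0:2]}/{digest[2:4]}/{name}"

print(deterministic_path("mc12", "EVNT.pool.root"))
```

Since every client can recompute the same path, replica location becomes a pure function of the file's identifiers rather than a database query.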
Tom Uram
(ANL)
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than is common in HEP, but they also represent a computing capacity an order of magnitude...
Dr
Alexander Undrus
(Brookhaven National Laboratory (US))
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
The ATLAS Nightly Build System is a facility for automatic production of software releases. Being the major component of ATLAS software infrastructure, it supports more than 50 multi-platform branches of nightly releases and provides vast opportunities for testing new packages, for verifying patches to existing software, and for migrating to new platforms and compilers. The Nightly System...
Grigori Rybkin
(Universite de Paris-Sud 11 (FR))
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
The ATLAS software code base is over 7 million lines organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers and used by more than 2500 physicists from over 200 universities and laboratories on six continents. To meet the challenge of configuring and building this software, the Configuration Management Tool (CMT) is...
Jason Alexander Smith
(Brookhaven National Laboratory (US))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
Public clouds are quickly becoming a cheap and easy way to dynamically add computing resources to a local site to help handle peak computing demands. As cloud use continues to grow, the HEP community is looking to run more than just simple homogeneous VM images running basic data analysis batch jobs. The growing demand for heterogeneous server configurations requires better...
Jason Alexander Smith
(Brookhaven National Laboratory (US))
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
Running a stable production service environment is important in any field. To accomplish this, a proper configuration management system is necessary along with good change management policies. Proper testing and validation is required to protect yourself against software or configuration changes to production services that can cause major disruptions. In this paper, we discuss how we extended...
Dr
Jorge Luis Rodriguez
(UNIVERSITY OF FLORIDA)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
With the explosion of big data in many fields, the efficient management of knowledge about all aspects of data analysis gains in importance. A key feature of collaboration in large-scale projects is keeping a log of what is being done and how - for private use and reuse and for sharing selected parts with collaborators and peers, often distributed geographically on an increasingly...
John Hover
(Brookhaven National Laboratory (BNL))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
AutoPyFactory (APF) is a next-generation pilot submission framework that has been used as part of the ATLAS workload management system (PanDA) for two years. APF is reliable, scalable, and offers easy and flexible configuration. Using a plugin-based architecture, APF polls for information from configured information and batch systems (including grid sites), decides how many additional pilot...
Ludmila Marian
(CERN)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The volume of multimedia material produced by CERN is growing rapidly, fed by the increase of dissemination activities carried out by the various outreach teams, such as the central CERN Communication unit and the Experiments Outreach committees. In order for this multimedia content to be stored digitally for the long term, to be made available to end-users in the best possible conditions and...
Ian Peter Collier
(STFC - Science & Technology Facilities Council (GB))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
In the last three years the CernVM Filesystem (CernVM-FS) has transformed the distribution of experiment software to WLCG grid sites. CernVM-FS removes the need for local installation jobs and performant network fileservers at sites, and often improves performance at the same time. Furthermore, the use of CernVM-FS standardizes the computing environment across the grid and removes...
Stefano Dal Pra
(Unknown)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
At the Italian Tier-1 centre at CNAF we are evaluating the possibility of changing the current production batch system. This activity is motivated mainly by the desire for a more flexible licensing model and the wish to avoid vendor lock-in.
We performed a technology tracking exercise and among many possible solutions we chose to evaluate Grid Engine as an alternative because its...
Victor Manuel Fernandez Albor
(Universidade de Santiago de Compostela (ES))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
Communities in different locations run their computing jobs on dedicated infrastructures without the need to worry about software, hardware or even the site where their programs are going to be executed. Nevertheless, this usually implies that they are restricted to certain types or versions of an operating system, because either their software needs a specific version of a system...
Kenneth Bloom
(University of Nebraska (US))
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born three years ago at the LPC (LHC Physics Center) at Fermilab and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data taking, the nature of the earlier training also evolved to include more of...
Mr
Igor Sfiligoi
(University of California San Diego)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The CMS experiment at the Large Hadron Collider is relying on the HTCondor-based glideinWMS batch system to handle most of its distributed computing needs. In order to minimize the risk of disruptions due to software and hardware problems, and also to simplify the maintenance procedures, CMS has set up its glideinWMS instance to use most of the attainable High Availability (HA) features. The...
Mrs
Ianna Osborne
(Fermi National Accelerator Lab. (US))
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
CMS faces real challenges with the upgrade of its detector through 2020. One of these challenges, from the software point of view, is managing upgrade simulations with the same software release as the 2013 scenario. We present the CMS geometry description software model, its integration with the CMS event setup and core software. The CMS geometry configuration and selection is implemented in...
Dr
Tony Wildish
(Princeton University (US))
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
During the first LHC run, CMS saturated one hundred petabytes of storage resources with data. Storage accounting and monitoring help meet the challenges of storage management, such as efficient space utilization, fair share between users and groups, and further resource planning. We present a newly developed CMS space monitoring system based on storage dumps produced at the sites. Storage...
CMS users data management service integration and first experiences with its NoSQL data storage
Marco Mascheroni
(Universita & INFN, Milano-Bicocca (IT))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The distributed data analysis workflow in CMS assumes that jobs run in a different location from where their results are finally stored. Typically the user outputs must be transferred from one site to another by a dedicated CMS service, AsyncStageOut. This new service was originally developed to address the inefficiency in using CMS computing resources when transferring analysis job...
Dr
Edward Karavakis
(CERN)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The ATLAS Experiment at the Large Hadron Collider has been collecting data for three years. The ATLAS data are distributed, processed and analysed at more than 130 grid and cloud sites across the world. The total throughput of transfers is more than 5 GB/s and data occupies more than 120 PB on disk and tape storage. At any given time, there are more than 100,000 concurrent jobs running and...
Moritz Kretz
(Ruprecht-Karls-Universitaet Heidelberg (DE))
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
In 2014 the Insertable B-Layer (IBL) will extend the existing Pixel Detector of the ATLAS experiment at CERN by 12 million additional pixels. As with the already existing pixel layers, scanning and tuning procedures need to be employed for the IBL to account for aging effects and guarantee a unified response across the detector. Scanning the threshold or time-over-threshold of a front-end...
Carlos Solans Sanchez
(CERN)
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
After two years of operation of the LHC, the ATLAS Tile Calorimeter is undergoing the consolidation process of its front-end electronics. The first layer of certification of the repairs is performed in the experimental area with a portable test-bench which is capable of controlling and reading out all the inputs and outputs of one front-end module through dedicated cables. This test-bench has...
Line Everaerts
(CERN)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
Using the framework of ITIL best practices, the service managers within CERN-IT have engaged in a continuous improvement process, mainly focused on service operation. This implies an explicit effort to understand and improve all service management aspects in order to increase efficiency and effectiveness. We will present the requirements, how they were addressed, and share our experiences....
Mr
Hiroyuki Maeda
(Hiroshima Institute of Technology)
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
DAQ-Middleware is a software framework for a network-distributed data acquisition (DAQ) system that is based on the Robot Technology Middleware (RTM). The framework consists of a DAQ-Component and a DAQ-Operator. The basic functionalities such as transferring data, starting and stopping the system, and so on, are already prepared in the DAQ-Components and DAQ-Operator. The DAQ-Component is...
Dr
Andrea Valassi
(CERN)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
CORAL and COOL are two software packages that are widely used by the LHC experiments for the management of conditions data and other types of data using relational database technologies. They have been developed and maintained within the LCG Persistency Framework, a common project of the CERN IT department with ATLAS, CMS and LHCb. The project used to include the POOL software package,...
Niko Neufeld
(CERN)
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
LHCb will have an upgrade of its detector in 2018. After the upgrade, the LHCb experiment will run at a high luminosity of 2×10^33 cm^-2 s^-1. The upgraded detector will be read out at 40 MHz with a highly flexible software-based triggering strategy. The Data Acquisition (DAQ) system of LHCb reads out the data fragments from the Front-End Electronics and transports them to the High-Level...
Ruslan Asfandiyarov
(Universite de Geneve (CH)),
Yordan Ivanov Karadzhov
(Universite de Geneve (CH))
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
The Electron-Muon Ranger (EMR) is a totally active scintillator detector which will be installed in the muon beam of the Muon Ionization Cooling Experiment (MICE), the main R&D project for a future neutrino factory. It is designed to measure the properties of a low energy beam composed of muons, electrons and pions, and to perform identification on a particle-by-particle basis. The EMR is...
Evan Niner
(Indiana University), Mr
Zukai Wang
(University of Virginia)
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
The NOvA experiment at Fermi National Accelerator Lab, due to its unique readout and buffering design, is capable of accessing physics beyond the core neutrino oscillation program for which it was built. In particular, the experiment is able to search for evidence of relic cosmic magnetic monopoles and for signs of the neutrino flash from a nearby supernova through the use of a specialized...
Katarzyna Wichmann
(DESY)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
The data preservation project at DESY was established in 2008, shortly after data taking ended at the HERA ep collider, soon after coming under the umbrella of the DPHEP global initiative. All experiments are implementing data preservation schemes to allow long term analysis of their data, in cooperation with the DESY-IT division. These novel schemes include software validation and...
Dr
Bodhitha Jayatilaka
(Fermilab)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF experiment has nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the CDF data is present at Fermilab.
The Fermilab Run II...
Dr
Michael Kirby
(Fermi National Accelerator Laboratory)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
The Tevatron experiments have entered their post-data-taking phases but are still producing physics output at a high rate.
The D0 experiment has initiated efforts to preserve both data access and full analysis capability for the collaboration members through at least 2020. These efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout...
Mr
Tao Lin
(Institute of High Energy Physics)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
Data transfer is an essential part of grid computing. In the BESIII experiment, the results of Monte Carlo simulation must be transferred back from other sites to IHEP, and the DST files for physics analysis must be transferred from IHEP to other sites. A robust transfer system should
make sure all data are transferred correctly.
DIRAC consists of cooperating distributed services and lightweight...
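The requirement that all data arrive intact, as stressed above, is typically met by comparing checksums at source and destination. Below is a minimal sketch of such a checksum-verified copy; the function names are illustrative, not the BESIII/DIRAC implementation:

```python
import hashlib
import shutil

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in fixed-size chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def transfer_with_check(src, dst):
    """Copy src to dst and confirm the destination checksum matches
    the source before declaring the transfer successful."""
    expected = sha256_of(src)
    shutil.copyfile(src, dst)
    if sha256_of(dst) != expected:
        raise IOError("checksum mismatch: %s -> %s" % (src, dst))
    return expected
```

A production system would additionally retry failed transfers and record the verified checksum in a file catalogue.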
Kai Leffhalm
(Deutsches Elektronen-Synchrotron (DE))
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
The dCache storage system writes billing data into flat files or a relational database.
For a midsize dCache installation there are about one million entries, representing 300 MByte, per day.
Gathering accounting information over a longer time interval, such as transfer rates per group, per file type or per user, results in increasing load on the servers holding the billing information.
Speeding up...
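Aggregating billing records per group and day is exactly the kind of query that grows expensive on such tables. A sketch of the aggregation using an in-memory SQLite table; the schema and values are assumptions for illustration, not the actual dCache billing layout:

```python
import sqlite3

# Toy schema loosely modelled on per-transfer billing records.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE billing (
    ts TEXT, owner_group TEXT, file_type TEXT, bytes INTEGER)""")
rows = [
    ("2013-10-14", "atlas", "raw", 500),
    ("2013-10-14", "atlas", "dst", 250),
    ("2013-10-14", "cms",   "raw", 100),
    ("2013-10-15", "atlas", "raw", 300),
]
conn.executemany("INSERT INTO billing VALUES (?,?,?,?)", rows)

# Pre-aggregating per day and group keeps such queries cheap even as
# the raw table grows by a million rows per day.
summary = conn.execute("""
    SELECT ts, owner_group, SUM(bytes)
    FROM billing GROUP BY ts, owner_group ORDER BY ts, owner_group
""").fetchall()
```

In practice the daily aggregates would be written to a summary table once, instead of re-scanning the raw billing data for every report.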
Gancho Dimitrov
(CERN)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The ATLAS experiment at CERN is one of the four Large Hadron Collider
experiments. The DCS Data Viewer (DDV) is an application that provides
access to historical data of the ATLAS Detector Control System (DCS)
parameters and their corresponding alarm information. It features a
server-client architecture: the pythonic server serves as interface to
the Oracle-based conditions database and...
Andreas Petzold
(KIT)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
GridKa, the German WLCG Tier-1 site hosted by the Steinbuch Centre for Computing at Karlsruhe Institute of Technology, is a collaboration partner in the HEPiX IPv6 testbed. A special IPv6-enabled GridFTP server was installed previously. In 2013, the IPv6 efforts will be increased. The installation of a new Mini-Grid site has already been started. This Mini-Grid installation is planned as a...
Franco Brasolin
(Universita e INFN (IT))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
With the LHC collider at CERN currently going through the period of Long Shutdown 1 (LS1) there is a remarkable opportunity to use the computing resources of the large trigger farms of the experiments for other data processing activities.
In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is particularly suitable for running Monte Carlo production jobs...
Tai Sakuma
(Texas A & M University (US))
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
We describe the creation of 3D models of the CMS detector and events using SketchUp, a 3D modelling program. SketchUp provides a Ruby API with which we interface with the CMS Detector Description, the master source of the CMS detector geometry, to create detailed 3D models of the CMS detector. With the Ruby API we also interface with the JSON-based event format used for the iSpy event display...
Sergey Belogurov
(ITEP Institute for Theoretical and Experimental Physics (RU))
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
Detector geometry exchange between CAD systems and the physics Monte Carlo (MC) packages ROOT and Geant4 is a labor-consuming process necessary for fine design optimization. CAD and MC geometries have completely different structures and hierarchies. For this reason automatic conversion is possible only for very simple shapes.
CATIA-GDML Geometry Builder is a tool which facilitates...
Xavier Espinal Curull
(CERN)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
After the strategic decision in 2011 to separate Tier-0 activity from analysis, CERN-IT developed EOS as a new petascale disk-only solution to address the fast-growing need for high-performance, low-latency data access. EOS currently holds around 22 PB of usable space for the four big experiments (ALICE, ATLAS, CMS, LHCb), and we expect it to grow to >30 PB this year. EOS is one of the first production...
Luisa Arrabito
(LUPM Université Montpellier 2, IN2P3/CNRS)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
DIRAC (Distributed Infrastructure with Remote Agent Control) is a general framework for the management of tasks over distributed heterogeneous computing environments. It has been originally developed to support the production activities of the LHCb (Large Hadron Collider Beauty) experiment and today is extensively used by several particle physics and biology communities. Current (Fermi-LAT,...
Dr
Armando Fella
(INFN Pisa), Mr
Bruno Santeramo
(INFN Bari),
Cristian De Santis
(Universita degli Studi di Roma Tor Vergata (IT)), Dr
Giacinto Donvito
(INFN-Bari),
Marcin Jakub Chrzaszcz
(Polish Academy of Sciences (PL)), Mr
Milosz Zdybal
(Institute of Nuclear Physics, Polish Academy of Science),
Rafal Zbigniew Grzymkowski
(P)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
In the HEP computing context, R&D studies aimed at the definition of data and workload models were carried forward by the SuperB community beyond the lifetime of the experiment itself.
This work is of great interest for generic mid- and small-sized VOs needing to fulfil Grid-exploitation requirements involving CPU-intensive tasks.
We present the R&D line achievements in the design, developments...
Dr
Tony Wong
(Brookhaven National Laboratory)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The RHIC and ATLAS Computing Facility (RACF) at Brookhaven Lab is a dedicated data center serving the needs of the RHIC and US ATLAS community. Since it began
operations in the mid-1990s, it has operated continuously with few unplanned downtimes. In the last 24 months, Brookhaven Lab has been affected by two hurricanes and a record-breaking snowstorm. In
this presentation, we discuss...
Justin Lewis Salmon
(University of the West of England (GB))
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The Extended ROOT Daemon (XRootD) is a distributed, scalable system for low-latency clustered data access. XRootD is mature and widely used in HEP, both standalone and as core functionality for the EOS system at CERN, and hence requires extensive testing to ensure general stability. However, there are many difficulties posed by distributed testing, such as cluster initialization,...
Stefano Piano
(INFN (IT))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
Since 2003 the computing farm hosted at the INFN T3 facility in Trieste has supported the activities of many scientific communities. Hundreds of jobs from 45 different VOs, including those of the LHC experiments, are processed simultaneously. The currently available shared disk space amounts to about 300 TB, while the computing power is provided by 712 cores for a total of 7400 HEP-SPEC06. Given...
Dr
Jorge Luis Rodriguez
(UNIVERSITY OF FLORIDA)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
We have developed remote data access for large volumes of data over the Wide Area Network based on the Lustre filesystem and Kerberos authentication for security. In this paper we explore a prototype for two-step data access from worker nodes at Florida T3 centers, located behind a firewall and using a private network, to data hosted on the Lustre filesystem at the University of Florida CMS T2...
Ian Gable
(University of Victoria (CA))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
It has been shown possible to run HEP workloads on remote IaaS cloud resources. Typically each running Virtual Machine (VM) makes use of the CERN VM Filesystem (CVMFS), a caching HTTP file system, to minimize the size of the VM images and to simplify software installation. Each VM must be configured with an HTTP web cache, usually a Squid cache, in proximity in order to function efficiently....
Dr
Raul Lopes
(School of Design and Engineering - Brunel University, UK)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The performance of hash function computations can impose a significant workload on SSL/TLS authentication servers. In the WLCG this workload also shows up in the computation of checksums for data transfers. It has been shown in the EGI grid infrastructure that checksum computation can double the I/O load for large file transfers, leading to an increase in re-transfers and timeout errors. Storage...
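One way to avoid the extra read pass described above is to fold the checksum into the transfer stream itself, so the data is read from storage only once. A sketch using ADLER32 (a checksum commonly used for WLCG transfers); the helper name is illustrative, not a specific storage system's API:

```python
import zlib

def copy_with_inline_checksum(src_fh, dst_fh, chunk_size=1 << 20):
    """Copy a stream while updating an ADLER32 checksum in the same
    pass, so the file is read only once instead of read-then-rehash."""
    cksum = 1  # adler32 starts from 1 by convention
    while True:
        chunk = src_fh.read(chunk_size)
        if not chunk:
            break
        cksum = zlib.adler32(chunk, cksum)
        dst_fh.write(chunk)
    return cksum & 0xFFFFFFFF
```

The single-pass design halves the read load relative to copying the file and then re-reading it for a separate checksum step.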
Tomas Kouba
(Acad. of Sciences of the Czech Rep. (CZ))
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The production usage of the new IPv6 protocol is becoming reality in the HEP community and the Computing Centre of the Institute of Physics in Prague participates in many IPv6 related activities. Our contribution will present experience with monitoring in HEPiX
distributed IPv6 testbed which includes 11 remote sites. We use Nagios
to check availability of services and Smokeping for...
Mr
Igor Sfiligoi
(University of California San Diego)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The basic premise of pilot systems is to create an overlay scheduling system on top of leased resources. By definition, leases have a limited lifetime, so any job that is scheduled on such resources must finish before the lease is over, or it will be killed and all the computation wasted. In order to effectively schedule jobs to resources, the pilot system thus requires the expected...
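The matching constraint described above, that a job must fit within the remaining lease lifetime, can be sketched as a simple admission test. This is a simplified illustration, not an actual pilot-system policy:

```python
def matchable(job_expected_runtime, lease_remaining, safety_margin=0.1):
    """Accept a job on a pilot only if its expected runtime, padded by
    a safety margin, fits in the lease time still available; otherwise
    the job would be killed at lease end and the work wasted."""
    return job_expected_runtime * (1 + safety_margin) <= lease_remaining

def schedule(jobs, lease_remaining):
    """Filter a list of expected runtimes (seconds) to those that fit."""
    return [j for j in jobs if matchable(j, lease_remaining)]
```

The quality of the expected-runtime estimate directly determines how much lease time is wasted versus how many jobs are killed.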
Ian Fisk
(Fermi National Accelerator Lab. (US))
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The Fermilab CMS Tier-1 facility provides processing, networking, and storage as one of seven Tier-1 facilities for the CMS experiment. The storage consists of approximately 15 PB of online/nearline disk managed by the dCache file system, and 22 PB of tape managed by the Enstore mass storage system. Data is transferred to and from computing centers worldwide using the CMS-developed PhEDEx...
Guenter Duckeck
(Experimentalphysik-Fakultaet fuer Physik-Ludwig-Maximilians-Uni), Dr
Johannes Ebke
(Ludwig-Maximilians-Univ. Muenchen (DE)),
Sebastian Lehrack
(LMU Munich)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The Apache Hadoop software is a Java-based framework for distributed
processing of large data sets across clusters of computers, using the
Hadoop file system (HDFS) for data storage and backup and MapReduce as the processing platform.
Hadoop is primarily designed for processing large textual data sets
which can be processed in arbitrary chunks, and must be adapted to the use case of...
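The MapReduce model mentioned above can be illustrated with a minimal map and reduce pair, here counting events per trigger path rather than words in text; the event layout is a toy assumption, not an actual HEP data format:

```python
from collections import defaultdict

def map_phase(events):
    """Emit (key, value) pairs; in Hadoop each mapper would see
    one chunk of the input independently of the others."""
    for event in events:
        yield event["trigger"], 1

def reduce_phase(pairs):
    """Sum the values for each key, as a MapReduce reducer would
    after the framework groups pairs by key."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)
```

The adaptation challenge the abstract alludes to is that HEP event files, unlike text, cannot be split at arbitrary byte boundaries, so the chunking must respect event record structure.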
Ian Fisk
(Fermi National Accelerator Lab. (US))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The physics event reconstruction in LHC/CMS is one of the biggest challenges for computing.
Among the different tasks that computing systems perform, reconstruction takes most of the available CPU resources. The reconstruction time of a single event varies according to the event complexity. Measurements were made in order to determine this correlation precisely, creating means to...
Ian Fisk
(Fermi National Accelerator Lab. (US))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
CMS production and analysis job submission is based largely on glideinWMS and pilot submissions. The transition from multiple different submission solutions, such as gLite WMS and HTCondor-based implementations, was carried out over several years and is now coming to a conclusion. The historically motivated separate glideinWMS pools for different types of production jobs and analysis jobs are being unified...
Prof.
Jesus Marco
(IFCA (CSIC-UC) Santander Spain)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
At the end of the LEP era, the strategy for the long-term
preservation of physics results and of the data processing framework
was not obvious.
One of the possibilities analyzed at the time, prior to the
generalization of virtualization techniques, was the setup of
a dedicated farm, to be conserved in its original state for
the medium-long term, at least until the new data from LHC could...
Wim Lavrijsen
(Lawrence Berkeley National Lab. (US))
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
Intel recently released the first commercial boards of its Many Integrated Core (MIC) Architecture. MIC is Intel's solution for the domain of throughput computing, currently dominated by general purpose programming on graphics processors (GPGPU). MIC allows the use of the more familiar x86 programming model and supports standard technologies such as OpenMP, MPI, and Intel's Threading Building...
Laura Sargsyan
(ANSL (Yerevan Physics Institute) (AM))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The organization of the distributed user analysis on the Worldwide LHC Computing Grid (WLCG) infrastructure is one of the most challenging tasks among the computing activities at the Large Hadron Collider. The Experiment Dashboard offers a solution that not only monitors but also manages (kill, resubmit) user tasks and jobs via a web interface. The ATLAS Dashboard Task Monitor provides...
Boris Wagner
(University of Bergen (NO) for the ALICE Collaboration)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The Nordic Tier-1 for the LHC is distributed over several, sometimes smaller, computing centers. In order to minimize administration effort, we are interested in running different grid jobs over one common grid middleware. ARC is selected as the internal middleware in the Nordic Tier-1. The AliEn grid middleware, used by ALICE has a different design philosophy than ARC. In order to use most of...
Jakub Cerkala
(Technical University of Košice),
Slávka Jadlovská
(Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Košice)
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
ALICE controls data produced by the commercial SCADA system WinCC OA is
stored in an Oracle database on the private experiment network. The SCADA
system allows for basic access and processing of the historical data.
More advanced analysis requires tools like ROOT and therefore needs a
separate access method to the archives.
The present scenario expects that detector experts create simple...
Max Fischer
(KIT - Karlsruhe Institute of Technology (DE))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The CMS collaboration is successfully using glideInWMS for managing grid resources within the WLCG project. The GlideIn mechanism with HTCondor underneath provides a clear separation of responsibilities between administrators operating the service and users utilizing computational resources.
German CMS collaborators (dCMS) have explored modern capabilities of glideinWMS and are aiming at...
Dennis Box
(F)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The Fermilab Intensity Frontier Experiments use an integrated submission system known as FIFE-jobsub, part of the FIFE (Fabric for Frontier Experiments) initiative, to submit batch jobs to the Open Science Grid. FIFE-jobsub eases the burden on experimenters by integrating data transfer and site selection details in an easy to use and well documented format. FIFE-jobsub automates tedious...
Johannes Philipp Grohs
(Technische Universitaet Dresden (DE))
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
The readout of the trigger signals of the ATLAS Liquid Argon (LAr) calorimeters is foreseen to be upgraded in order to prepare for operation during the first high-luminosity phase of the Large Hadron Collider (LHC). Signals with improved spatial granularity are planned to be received from the detector by a Digital Processing System (DPS) in ATCA technology and will be sent in real-time to the...
Dr
Piotr Golonka
(CERN)
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
Rapid growth of popularity of web applications gives rise to a plethora of reusable graphical components, such as Google Chart Tools or
jQuery Sparklines, implemented in JavaScript and running inside a web browser. In the paper we describe the tool that allows for
seamless integration of web-based widgets into WinCC Open Architecture, the SCADA system used commonly at CERN to build complex...
Laurent Garnier
(LAL-IN2P3-CNRS)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
Geant4 application in a web browser
Geant4 is a toolkit for the simulation of the passage of particles through matter. The Geant4 visualization system supports many drivers including OpenGL, OpenInventor, HepRep, DAWN, VRML, RayTracer, gMocren and ASCIITree, with diverse and complementary functionalities.
Web applications have an increasing role in our work, and thanks to emerging...
Dr
Thomas Kittelmann
(European Spallation Source ESS AB)
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
The construction of the European Spallation Source ESS AB, which will become the world's most powerful source of cold and thermal neutrons (meV scale), is about to begin in Lund, Sweden, breaking ground in 2014 and coming online towards the end of the decade. Currently 22 neutron-scattering instruments are planned as the baseline suite at the facility, and a crucial part of each such beam-line...
Prof.
Vladimir Ivantchenko
(CERN)
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
The electromagnetic physics sub-package of the Geant4 Monte Carlo toolkit is an important component of LHC experiment simulation and other Geant4 applications. In this work we present recent progress in Geant4 electromagnetic physics modelling, with an emphasis on the new refinements for the processes of multiple and single scattering, ionisation, high-energy muon interactions, and gamma-induced...
Aurelie Pascal
(CERN)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
CERN has recently renewed its obsolete VHF firemen's radio network and replaced it with a digital one based on TETRA technology. TETRA already integrates an outdoor GPS localization system, but it appeared essential to look for a solution to also locate TETRA users in CERN's underground facilities.
The system which addresses this problem, and which has demonstrated good resistance to...
Oliver Keeble
(CERN)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The GLUE 2 information schema is now fully supported in the production EGI/WLCG information system. However, to make the schema usable and allow clients to rely on the information it is important that the meaning of the published information is clearly defined, and that information providers and site configurations are validated to ensure as far as possible that what they publish is correct....
Dr
Yaodong CHENG
(Institute of High Energy Physics,Chinese Academy of Sciences)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
The Gluster file system adopts a no-metadata architecture, which theoretically eliminates both a central point of failure and the performance bottleneck of a metadata server. This talk will first introduce Gluster in comparison to Lustre and Hadoop. However, some of its mechanisms are not so good in the current version. For example, it has to read the extended attributes of all bricks to locate one file. And it is...
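The no-metadata idea described above locates files by hashing rather than by consulting a metadata server. A simplified sketch of the principle (not GlusterFS's actual elastic-hash algorithm, which also handles brick additions and rebalancing):

```python
import hashlib

def brick_for(filename, bricks):
    """Pick the brick holding a file purely from a hash of its name,
    so no central metadata lookup is needed (simplified sketch)."""
    digest = hashlib.md5(filename.encode()).hexdigest()
    return bricks[int(digest, 16) % len(bricks)]
```

Because every client computes the same hash, any client can locate a file directly; the cost, as the abstract notes, appears when the hash-to-brick layout changes and attributes on all bricks must be consulted.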
Dr
Sebastien Binet
(IN2P3/LAL)
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
Current HENP libraries and frameworks were written before multicore
systems became widely deployed and used.
From this environment, a 'single-thread' processing model naturally
emerged but the implicit assumptions it encouraged are greatly
impairing our abilities to scale in a multicore/manycore world.
Thanks to C++11, C++ is finally slowly catching up with regard to
concurrency...
Valerie Halyo
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
Significant new challenges are continuously confronting the High Energy Physics (HEP) experiments, in particular the Large Hadron Collider (LHC) at CERN, which not only drives forward theoretical, experimental and detector physics but also pushes computing to its limits. The LHC delivers proton-proton collisions to the detectors at a rate of 40 MHz. This rate must be significantly reduced to comply...
Roberto Ammendola
(INFN)
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
We describe a pilot project for the use of GPUs (Graphics processing units) in online triggering applications for high energy physics experiments. Two major trends can be identified in the development of trigger and DAQ systems for particle physics experiments: the massive use of general-purpose commodity systems such as commercial multicore PC farms for data acquisition, and the reduction of...
Michelle Perry
(Florida State University)
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
The search for new physics has typically been guided by theoretical models with relatively few parameters. However, recently, more general models, such as the 19-parameter phenomenological minimal supersymmetric standard model (pMSSM), have been used to interpret data at the Large Hadron Collider. Unfortunately, due to the complexity of the calculations, the predictions of these models are...
Derek John Weitzel
(University of Nebraska (US))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
During the last decade, large-scale federated distributed infrastructures have continually developed and expanded. One of the crucial components of a cyber-infrastructure is an accounting service that collects data related to resource utilization and identity of users using resources. The accounting service is important for verifying pledged resource allocation per particular groups and users,...
Johannes Elmsheuser
(Ludwig-Maximilians-Univ. Muenchen (DE))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
With the exponential growth of LHC (Large Hadron Collider) data in 2012, distributed computing has become the established way to analyze collider data. The ATLAS grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centers to smaller university clusters. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to...
Dr
Gabriele Garzoglio
(FERMI NATIONAL ACCELERATOR LABORATORY)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service...
Maria Dimou
(CERN)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
In the Worldwide LHC Computing Grid (WLCG) project the Tier centres are of paramount importance for storing and accessing experiment data and for running the batch jobs necessary for experiment production activities.
Although Tier-2 sites provide a significant fraction of the resources, non-availability of resources at the Tier-0 or the Tier-1s can seriously harm not only WLCG operations but...
Steven Goldfarb
(University of Michigan (US))
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
On July 4, 2012, particle physics became a celebrity. Around 1,000,000,000 people (yes, 1 billion) saw rebroadcasts of two technical presentations announcing discovery of a new boson. The occasion was a joint seminar of the CMS and ATLAS collaborations, and the target audience were members of those collaborations plus interested experts in the field of particle physics. Yet, the world ate it...
Ramon Medrano Llamas
(CERN)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The recent paradigm shift toward cloud computing in IT, and general interest in "Big Data" in particular, have demonstrated that the computing requirements of HEP are no longer globally unique. Indeed, the CERN IT department and LHC experiments have already made significant R&D investments in delivering and exploiting cloud computing resources. While a number of technical evaluations of...
Wahid Bhimji
(University of Edinburgh (GB))
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
“Big Data” is no longer merely a buzzword, but is business-as-usual in the private sector. High Energy Particle Physics is often cited as the archetypal Big Data use case, however it currently shares very little of the toolkit used in the private sector or other scientific communities.
We present the initial phase of a programme of work designed to bridge this technology divide by both...
Alex Mann
(Ludwig-Maximilians-Univ. Muenchen (DE)),
Alexander Mann
(Ludwig-Maximilians-Universität)
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
The ATLAS detector operated during the three years of Run 1 of the Large
Hadron Collider, collecting information on a large number of proton-proton events.
One of the most important results obtained so far is the discovery of a Higgs
boson. More precise measurements of this particle must be performed, and
there are other very important physics topics still to be explored. One of...
Stefan Kluth
(Max-Planck-Institut fuer Physik (Werner-Heisenberg-Institut) (D)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
We benchmarked an ARM Cortex A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the hepspec 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the...
Andre Sailer
(CERN),
Christian Grefe
(CERN),
Stephane Guillaume Poss
(Centre National de la Recherche Scientifique (FR))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
ILCDIRAC was initially developed in the context of the CLIC Conceptual
Design Report (CDR), published in 2012-2013. It provides a convenient interface for the mass production of the simulated events needed for the physics performance studies of the two detector concepts considered, ILD and SiD. It has since been used in the ILC Detailed Baseline Detector (DBD) studies of the SiD detector...
Dr
Alexei Strelchenko
(FNAL)
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
Lattice Quantum Chromodynamics (LQCD) simulations are critical for understanding the validity of the Standard Model and the results of the High-Energy and Nuclear Physics experiments. Major improvements in the calculation and prediction of physical observables, such as nucleon form factors or flavor singlet meson mass, require large amounts of computer resources, of the order of hundreds of...
Kati Lassila-Perini
(Helsinki Institute of Physics (FI))
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
Implementation of the CMS policy on long-term data preservation, re-use and open access has started. Current practices in providing data additional to published papers and distributing simplified data-samples for outreach are promoted and consolidated. The first measures have been taken for the analysis and data preservation for the internal use of the collaboration and for the open access to...
Enrico Mazzoni
(INFN-Pisa)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The INFN-Pisa Tier2 infrastructure is described, optimized not only for GRID CPU and Storage access, but also for a more interactive use of the resources in order to provide good solutions for the final data analysis step. The Data Center, equipped with about 5000 production cores, permits the use of
modern analysis techniques realized via advanced statistical tools (like RooFit and RooStat)...
Donato De Girolamo
(INFN CNAF), Mr
Lorenzo Chiarelli
(INFN CNAF), Mr
Stefano Zani
(INFN CNAF)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The computing models of HEP experiments, starting from the LHC ones, are facing an evolution with the relaxation of the data locality paradigm: the possibility of a job accessing data files over the WAN is becoming more and more common.
One of the key factors for the success of this change is the ability
to use the network in the most efficient way: in the best scenario,
the network...
Andrew Malone Melo
(Vanderbilt University (US))
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The LHC experiments have always depended upon a ubiquitous, highly-performing network infrastructure to enable their global scientific efforts. While the experiments were developing their software and physical infrastructures, parallel development work was occurring in the networking communities responsible for interconnecting LHC sites. During the LHC's Long Shutdown #1 (LS1) we have an...
Dr
Tony Wildish
(Princeton University (US))
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The ever-increasing amount of data handled by the CMS dataflow and workflow management tools poses new challenges for cross-validation among different systems within the CMS experiment at the LHC. To approach this problem we developed an integration test suite based on the LifeCycle agent, a tool originally conceived for stress-testing new releases of PhEDEx, the CMS data-placement tool. The LifeCycle...
Ivana Hrivnacova
(Universite de Paris-Sud 11 (FR))
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
g4tools, originally part of the inlib and exlib packages [1], provides a very light and easy-to-install set of C++ classes that can be used to perform analysis in a Geant4 batch program. It allows one to create and manipulate histograms and ntuples, and to write them in the supported file formats (ROOT, AIDA XML, CSV and HBOOK).
It is integrated in Geant4 through analysis manager classes,...
Dmitry Nilsen
(Karlsruhe Institute of Technology), Dr
Pavel Weber
(KIT - Karlsruhe Institute of Technology (DE))
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The complexity of the heterogeneous computing resources, services and recurring infrastructure changes at the GridKa WLCG Tier-1 computing center require a structured approach to configuration management and optimization of interplay between functional components of the whole system. A set of tools deployed at GridKa, including Puppet, Redmine, Foreman, SVN and Icinga, provides the...
Dr
Andreas Gellrich
(DESY)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The vast majority of jobs in the Grid are embarrassingly parallel. In
particular, HEP tasks are divided into atomic jobs without the need for
communication between them. Jobs are still neither multi-threaded nor
multi-core capable. On the other hand, resource requirements reach
from CPU-dominated Monte Carlo jobs to network intense analysis jobs.
The main objective of any Grid site is to...
Vidmantas Zemleris
(Vilnius University (LT))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
Background: The goal of the virtual data service integration is to provide a coherent interface for querying a number of heterogeneous data sources (e.g., web services, web forms, proprietary systems, etc.) in cases where accurate results are necessary. This work explores various aspects of its usability.
Problem: Querying is usually carried out through a structured query language, such as...
Victoria Sanchez Martinez
(Instituto de Fisica Corpuscular (IFIC) UV-CSIC (ES))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
In this contribution we present the performance of the Iberian (Spain and Portugal) ATLAS cloud during the first LHC running period (March 2010-January 2013) in the framework of the GRID Computing and Data Model. The evolution of the resources for CPU, disk and tape in the Iberian Tier1 and Tier2s is summarized. The data distribution over all ATLAS destinations is shown, focusing on the number...
Andrew John Washbrook
(University of Edinburgh (GB))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
High Performance Computing (HPC) provides unprecedented computing power for a diverse range of scientific applications. As of November 2012, over 20 supercomputers deliver petaflop peak performance with the expectation of "exascale" technologies available in the next 5 years. Despite the sizeable computing resources on offer there are a number of technical barriers that limit the use of HPC...
Eygene Ryabinkin
(National Research Centre Kurchatov Institute (RU))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The review of the distributed grid computing infrastructure for LHC experiments in Russia is given. The emphasis is placed on the Tier-1 site construction at the National Research Centre "Kurchatov Institute" (Moscow) and the Joint Institute for Nuclear Research (Dubna).
In accordance with the protocol between CERN, Russia and the Joint Institute for Nuclear Research (JINR) on participation...
Luca dell'Agnello
(INFN-CNAF)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
Long-term preservation of experimental data (intended as both raw and derived formats) is one of the emerging requirements coming from scientific collaborations. Within the High Energy Physics community the Data Preservation in High Energy Physics (DPHEP) group coordinates this effort.
CNAF is not only one of the Tier-1s for the LHC experiments, it is also a computing center providing...
Shaun De Witt
(STFC - Science & Technology Facilities Council (GB))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
WLCG is moving towards greater use of xrootd. While this will in general optimise resource usage on the grid, it can create load problems at sites when storage elements are unavailable. We present some possible methods of mitigating these problems and the results from experiments at STFC.
Andrew John Washbrook
(University of Edinburgh (GB))
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
A number of High Energy Physics experiments have successfully run feasibility studies to demonstrate that many-core devices such as GPGPUs can be used to accelerate algorithms for trigger systems and data analysis. After this exploration phase experiments on the Large Hadron Collider are now investigating how these devices can be incorporated into key areas of their software framework in...
Mr
Stephen Lloyd
(University of Edinburgh)
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
The Matrix Element Method has been used with great success in the past several years, notably for the high precision top quark mass determination, and subsequently the single top quark discovery, at the Tevatron. Unfortunately, the Matrix Element method is notoriously CPU intensive due to the complex integration performed over the full phase space of the final state particles arising from...
DIMITRIOS ZILASKOS
(STFC)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The WLCG uses HEP-SPEC as its benchmark for measuring CPU performance. This provides a consistent and repeatable CPU benchmark to describe experiment requirements, lab commitments and existing resources. However, while HEP-SPEC has been customized to represent WLCG applications, it is not a perfect measure.
The Rutherford Appleton Laboratory (RAL) is the UK Tier 1 site and provides CPU and...
Dr
Jean-Roch Vlimant
(CERN)
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
The analysis of the LHC data at the CMS experiment requires the production of a large number of simulated events. In 2012, CMS produced over 4 billion simulated events in about 100 thousand datasets. Over the past years a tool (PREP) has been developed for managing such a production of thousands of samples.
A lot of experience working with this tool has been gained, and conclusions...
Dr
Janusz Martyniak
(Imperial College London)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The international Muon Ionisation Cooling Experiment (MICE) is designed to demonstrate the principle of muon ionisation cooling for the first time, for application to a future Neutrino Factory or Muon Collider. The experiment is currently under construction at the ISIS synchrotron at the Rutherford-Appleton Laboratory, UK.
The configuration/condition of the experiment during each run is...
Yordan Ivanov Karadzhov
(Universite de Geneve (CH))
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
The Muon Ionization Cooling Experiment (MICE) is under development at the Rutherford Appleton Laboratory (UK). The goal of the experiment is to build a section of a cooling channel that can demonstrate the principle of ionization cooling and to verify its performance in a muon beam. The final setup of the experiment will be able to measure a 10% reduction in emittance (transverse phase space...
Dr
Patricia Mendez Lorenzo
(CERN)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The large potential and flexibility of the ServiceNow infrastructure based on "best practices" methods is allowing the migration of some of the ticketing systems traditionally used for the tracing of the servers and services available at the CERN IT Computer Center. This migration enables a standardization and globalization of the ticketing and control systems implementing a generic system...
Mark Mitchell
(University of Glasgow)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The monitoring of a grid cluster (or of any piece of reasonably scaled IT infrastructure) is a key element in the robust and consistent running of that site. There are several factors which are important to the selection of a useful monitoring framework, which include ease of use, reliability, data input and output. It is critical that data can be drawn from different instrumentation packages...
Alexandre Beche
(CERN)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The computing models of the LHC experiments are gradually moving from hierarchical data models with centrally managed data pre-placement towards federated storage which provides seamless access to data files independently of their location and dramatically improved recovery due to fail-over mechanisms. Enabling loosely coupled data clusters to act as a single storage resource should increase...
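The fail-over behaviour that federated storage enables can be sketched as follows; this is an illustrative Python fragment, with hypothetical replica URLs and an injected fetch function, not any experiment's actual framework API:

```python
def read_with_failover(replicas, fetch):
    """Try each replica in turn; return data from the first that responds.

    `replicas` is an ordered list of replica URLs; `fetch` is any callable
    that returns the file content or raises IOError on failure.
    """
    errors = {}
    for url in replicas:
        try:
            return fetch(url)
        except IOError as exc:
            errors[url] = exc  # remember why this replica failed
    raise IOError("all replicas failed: %s" % errors)

# Illustrative use with a fake fetch: the first site is down, so the read
# transparently falls back to the second replica.
def fake_fetch(url):
    if "site-a" in url:
        raise IOError("site-a unavailable")
    return b"event data"

data = read_with_failover(
    ["root://site-a//store/f.root", "root://site-b//store/f.root"],
    fake_fetch)
```

The key design point is that the client, not the user, iterates over replica locations, which is what makes the loosely coupled clusters appear as a single storage resource.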
Bogdan Lobodzinski
(DESY, Hamburg, Germany)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
Small Virtual Organizations (VO) employ all components of the EMI or gLite Middleware. In this framework, a monitoring system is designed for the H1 Experiment to identify and recognize within the GRID the best suitable resources for execution of CPU-time consuming Monte Carlo (MC) simulation tasks (jobs). Monitored resources are Computer Elements (CEs), Storage Elements (SEs), WMS-servers...
Georg Weidenspointner
(MPE Garching)
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
An extensively documented, quantitative study of software evolution resulting in a deterioration of physical accuracy over the years is presented. The analysis concerns the energy deposited by electrons in various materials, as computed by Geant4 versions released between 2007 and 2013.
The evolution of the functional quality of the software is objectively quantified by means of a rigorous...
Dr
Maria Grazia Pia
(Universita e INFN (IT))
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
A large-scale project is in progress, which validates the basic constituents of the electromagnetic physics models implemented in major Monte Carlo codes (EGS, FLUKA, Geant4, ITS, MCNP, Penelope) against extensive collections of experimental data documented in the literature. These models are responsible for the physics observables and the signal generated in particle detectors, including...
Ian Gable
(University of Victoria (CA))
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
We review the demonstration of next generation high performance 100 Gbps networks for HEP that took place at the Supercomputing 2012 (SC12) conference in Salt Lake City. Three 100 Gbps circuits were established from the California Institute of Technology, the University of Victoria and the University of Michigan to the conference show floor. We were able to efficiently utilize these...
Paul Nilsson
(University of Texas at Arlington (US))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The Production and Distributed Analysis system (PanDA) has been in use in the ATLAS Experiment since 2005. It uses a sophisticated pilot system to execute submitted jobs on the worker nodes. While originally designed for ATLAS, the PanDA Pilot has recently been refactored to facilitate use outside of ATLAS. Experiments are now handled as plug-ins, and a new PanDA Pilot user only has to...
Dr
Peter Van Gemmeren
(Argonne National Laboratory (US))
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
The ATLAS event store employs a persistence framework with extensive navigational capabilities. These include real-time back navigation to upstream processing stages, externalizable data object references, navigation from any data object to any other both within a single file and across files, and more. The 2013-2014 shutdown of the Large Hadron Collider provides an opportunity to enhance...
Anastasia Karavdina
(University Mainz)
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
Precise luminosity determination is crucial for absolute cross-section measurements and scanning experiments with the fixed-target PANDA experiment at the planned antiproton accelerator HESR (FAIR, Germany). For the determination of the luminosity we will exploit elastic antiproton-proton scattering. Unfortunately there are no data, or only a few points with large uncertainties, available in the...
Christopher John Walker
(University of London (GB))
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The WLCG, and high energy physics in general, relies on remote Tier-2
sites to analyse the large quantities of data produced. Transferring
this data in a timely manner requires significant tuning to make
optimum usage of expensive WAN links.
In this paper we describe the techniques we have used at QMUL to
optimise network transfers. Use of the FTS with settings and
appropriate TCP...
Zoltan Mathe
(CERN)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
The LHCb experiment produces a huge amount of data which has associated metadata such as run number, data taking condition (detector status when the data was taken), simulation condition, etc. The data are stored in files, replicated on the Computing Grid around the world. The LHCb Bookkeeping System provides methods for retrieving datasets based on their metadata. The metadata is stored in a...
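The kind of metadata-driven dataset selection described above can be sketched in a few lines of Python; the catalogue records and keys (`run`, `condition`) are illustrative stand-ins, not the real LHCb Bookkeeping schema:

```python
def select_datasets(catalogue, **criteria):
    """Return datasets whose metadata matches all given key/value criteria.

    `catalogue` is a list of metadata dicts, one per stored file.
    """
    return [d for d in catalogue
            if all(d.get(k) == v for k, v in criteria.items())]

# Toy catalogue: each entry carries a logical file name plus metadata.
catalogue = [
    {"lfn": "/lhcb/a.dst", "run": 97114, "condition": "MagDown"},
    {"lfn": "/lhcb/b.dst", "run": 97114, "condition": "MagUp"},
    {"lfn": "/lhcb/c.dst", "run": 97200, "condition": "MagDown"},
]
hits = select_datasets(catalogue, run=97114, condition="MagDown")
```

In the real system such queries are resolved against a database rather than an in-memory list, but the contract is the same: metadata in, matching dataset names out.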
Dr
Giacinto Donvito
(INFN-Bari),
Tommaso Boccali
(Sezione di Pisa (IT))
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The Italian Ministry of Research (MIUR) has in past years funded research projects aimed at optimizing the analysis activities in the Italian CMS computing Centers. A new grant started in 2013, and activities are already ongoing in 9 INFN sites, all hosting local CMS groups. The main focus will be on the creation of an Italian storage federation (via Xrootd initially, and later HTTP) which...
Egor Ovcharenko
(ITEP Institute for Theoretical and Experimental Physics (RU))
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
One of the current problems in HEP computing is the development of particle propagation algorithms capable of working efficiently on parallel architectures. An interesting approach in this direction has recently been introduced by the GEANT5 group at CERN [1]. Our report will be devoted to the realization of similar functionality using the Intel Threading Building Blocks (TBB) library.
In the prototype...
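The basket-of-tracks decomposition underlying such prototypes can be illustrated in Python with the standard thread pool in place of TBB tasks; the `propagate` function and basket size are toy assumptions, not the actual prototype's code:

```python
from concurrent.futures import ThreadPoolExecutor

def propagate(track, step=0.1):
    """Toy propagation: advance a (position, velocity) track by one step."""
    pos, vel = track
    return (pos + vel * step, vel)

def process_basket(basket):
    """Propagate every track in a basket; one basket is one parallel task."""
    return [propagate(t) for t in basket]

# Group tracks into baskets and hand the baskets to a worker pool, much as
# a TBB-style scheduler hands baskets of tracks to tasks.
tracks = [(float(i), 1.0) for i in range(8)]
baskets = [tracks[i:i + 4] for i in range(0, len(tracks), 4)]
with ThreadPoolExecutor(max_workers=2) as pool:
    results = [t for done in pool.map(process_basket, baskets)
               for t in done]
```

The point of basketizing is that each task works on a whole group of similar tracks, amortizing scheduling overhead and enabling vectorization within a basket.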
Stewart Martin-Haugh
(University of Sussex (GB))
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
We present a description of the algorithms and the performance of the ATLAS Inner Detector trigger for LHC run I, as well as prospects for a redesign of the tracking algorithms in run 2. The Inner Detector trigger algorithms are vital for many trigger signatures at ATLAS. The performance of the algorithms for muons, electrons, taus and b-jets is presented.
The ATLAS trigger software after...
Enrico Bonaccorsi
(CERN),
Francesco Sborzacchi
(Istituto Nazionale Fisica Nucleare (IT)),
Niko Neufeld
(CERN)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
Virtualization is often adopted to satisfy different needs: to reduce
costs, reduce resources, simplify maintenance and, last but not least,
to add flexibility.
The use of virtualization in a complex system, such as a farm of PCs that
control the hardware of an experiment (PLCs, power supplies, gas,
magnets...), puts us in a condition where not only high-performance
requirements...
Eduardo Bach
(UNESP - Universidade Estadual Paulista (BR))
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
Distributed storage systems have evolved from providing a simple means to store data remotely to offering advanced services like system federation and replica management. This evolution has been made possible by the advancement of the underlying communication technology, which plays a vital role in determining the communication efficiency of distributed systems. The dCache system, which has...
Dr
Dmytro Kovalskyi
(Univ. of California Santa Barbara (US))
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
Databases are used in many software components of HEP computing, from monitoring and task scheduling to data storage and processing. While database design choices have a major impact on system performance, some solutions give better results out of the box than others. This paper presents detailed comparison benchmarks of the most popular Open Source systems for a typical class...
Christophe Haen
(Univ. Blaise Pascal Clermont-Fe. II (FR))
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The backbone of the LHCb experiment is the Online system, which is a very large and heterogeneous computing center. Making sure of the proper behavior of the many different tasks running on the more than 2000 servers represents a huge workload for the small expert-operator team and is a 24/7 task. At the occasion of CHEP 2012, we presented a prototype of a framework that we designed in order...
Dr
Dirk Hoffmann
(Centre de Physique des Particules de Marseille, CNRS/IN2P3)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
PLUME - FEATHER is a non-profit project created to Promote economicaL, Useful and Maintained softwarE For the Higher Education And THE Research communities. The site references software, mainly Free/Libre Open Source Software (FLOSS), from French universities and national research organisations (CNRS, INRA...), laboratories or departments. Plume means feather in French. The main goals of PLUME...
Graeme Andrew Stewart
(CERN)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
This paper describes a popularity prediction tool for data-intensive data management systems, such as the ATLAS distributed data management (DDM) system. The tool is fed by the DDM popularity system, which produces historical reports about ATLAS data usage and provides information about the files, datasets, users and sites where data was accessed. The tool described in this contribution uses...
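As a deliberately simple stand-in for the predictor described above (the real tool's model and dataset names are not reproduced here), next-period popularity can be forecast from a moving average over the historical access counts:

```python
def predict_accesses(history, window=3):
    """Predict next-week accesses for each dataset as the mean of the last
    `window` weekly access counts; a toy model, not the DDM tool's actual one.
    """
    return {name: sum(counts[-window:]) / float(min(window, len(counts)))
            for name, counts in history.items()}

# Hypothetical weekly access counts per dataset.
history = {
    "data12_8TeV.AOD": [40, 50, 60],  # rising popularity
    "mc11_7TeV.NTUP": [9, 3, 0],      # cooling off
}
forecast = predict_accesses(history)
```

A data management system can then feed such a forecast into replication decisions, adding replicas of datasets predicted to be hot and reclaiming space from those predicted to go cold.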
Nathalie Rauschmayr
(CERN)
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
Due to the continuously increasing number of cores on modern CPUs, it is important to adapt HEP applications. This must be done at different levels: the software must support parallelization, and the scheduling has to differ between multicore and single-core jobs. The LHCb software framework (GAUDI) provides a parallel prototype (GaudiMP), based on the multiprocessing approach. It allows a...
Simone Coscetti
(Sezione di Pisa (IT))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The ALEPH Collaboration took data at the LEP (CERN) electron-positron collider in the period 1989-2000, producing more than 300 scientific papers. While most of the Collaboration's activities stopped in recent years, the data collected still have physics potential, with new theoretical models emerging that need a check against data at the Z and WW production energies. An attempt to revive and...
Dr
Dirk Hoffmann
(Centre de Physique des Particules de Marseille, CNRS/IN2P3)
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
We are developing the prototype of a high speed data acquisition (DAQ) system for the Cherenkov Telescope Array. This experiment will be the next generation ground-based gamma-ray instrument. It will be made up of approximately 100 telescopes of at least three different sizes, from 6 to 24 meters in diameter.
Each camera equipping the telescopes is composed of hundreds of light detecting...
Semen Lebedev
(Justus-Liebig-Universitaet Giessen (DE))
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
The software framework of the CBM experiment at FAIR - CBMROOT - has been continuously growing over the years. The increasing complexity of the framework and number of users require improvements in maintenance, reliability and in overall software development process. In this report we address the problem of the software quality assurance (QA) and testing. Two main problems are considered in...
Dr
Armando Fella
(INFN Pisa), Mr
Domenico Diacono
(INFN Bari), Dr
Giacinto Donvito
(INFN-Bari), Mr
Giovanni Marzulli
(GARR),
Paolo Franchini
(Universita e INFN (IT)), Dr
Silvio Pardi
(INFN)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
In the HEP computing context, R&D studies aiming at the definition of the data and workload models were carried forward by the SuperB community beyond the life of the experiment itself. This work is considered of great interest for a generic mid- or small-sized VO during its Computing Model definition phase.
The data-model R&D work we are presenting starts with the general design
description of the...
Dr
Tony Wildish
(Princeton University (US))
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
PhEDEx, the data-placement tool used by the CMS experiment at the LHC, was conceived in a more trusting time. The security model was designed to provide a safe working environment for site agents and operators, but provided little more protection than that. CMS data was not sufficiently protected against accidental loss caused by operator error or software bugs or from loss of data caused by...
Adrian Buzatu
(University of Glasgow (GB))
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
In high-energy physics experiments, online selection is crucial to reject most uninteresting collisions and to focus on interesting physics signals.
The b-jet selection is part of the trigger strategy of the ATLAS experiment and is meant to select hadronic final states with heavy-flavor content. This is important for the selection of physics channels with more than one b-jet in the...
Witold Pokorski
(CERN)
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
In this paper we present the recent developments in the Geant4 hadronic
framework, as well as in some of the existing physics models.
Geant4 is the main simulation toolkit used by the LHC experiments and
therefore a lot of effort is put into improving the physics models in
order for them to have more predictive power. As a consequence, the code complexity increases, which requires...
Christian Veelken
(Ecole Polytechnique (FR))
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
An algorithm for reconstruction of the Higgs mass in $H \rightarrow \tau\tau$ decays is presented. The algorithm computes for each event a likelihood function $P(M_{\tau\tau})$ which quantifies the level of compatibility of a Higgs mass hypothesis $M_{\tau\tau}$, given the measured momenta of visible tau decay products plus missing transverse energy reconstructed in the event. The algorithm is...
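Schematically, a per-event likelihood of this kind can be written as follows; the notation here is illustrative (the paper's own integration variables and transfer functions may differ):

```latex
P(M_{\tau\tau}) = \int
  \delta\!\left( M_{\tau\tau}
    - m_{\tau\tau}(\vec{p}_{1}^{\,\mathrm{vis}}, \vec{p}_{2}^{\,\mathrm{vis}},
                   \vec{a}_{1}, \vec{a}_{2}) \right)
  f(\vec{a}_{1})\, f(\vec{a}_{2})\,
  W\!\left(\vec{E}_{T}^{\mathrm{miss}} \mid \vec{a}_{1}, \vec{a}_{2}\right)
  \mathrm{d}\vec{a}_{1}\, \mathrm{d}\vec{a}_{2}
```

where $\vec{a}_{i}$ denote the unmeasured momenta carried by the neutrinos in each tau decay, $f$ encodes the tau decay phase space, and $W$ models the resolution of the measured missing transverse energy. Scanning $M_{\tau\tau}$ and evaluating the integral per hypothesis yields the likelihood function quoted in the abstract.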
Mr
Igor Mandrichenko
(Fermilab)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
RESTful web services are a popular solution for distributed data access and information management. The performance, scalability and reliability of such services are critical for the success of data production and analysis in High Energy Physics, as well as in other areas of science.
At FNAL, we have been successfully using REST HTTP-based data access architecture to provide access to various types...
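A recurring robustness pattern in such REST clients is retry with backoff; the sketch below is a generic illustration (the `fetch` hook and URL are hypothetical, not FNAL's actual service API):

```python
import time

def get_with_retry(fetch, url, retries=3, backoff=0.0):
    """Call `fetch(url)` up to `retries` times, sleeping `backoff` seconds
    (doubled after each failure) between attempts; `fetch` stands in for a
    real HTTP GET and is expected to raise IOError on failure."""
    delay = backoff
    for attempt in range(retries):
        try:
            return fetch(url)
        except IOError:
            if attempt == retries - 1:
                raise  # out of attempts: propagate the last error
            time.sleep(delay)
            delay *= 2

# Simulated flaky service: fails twice, then answers.
calls = []
def flaky_fetch(url):
    calls.append(url)
    if len(calls) < 3:
        raise IOError("503 Service Unavailable")
    return '{"run": 5700, "status": "ok"}'

body = get_with_retry(flaky_fetch, "https://example.org/api/runs/5700")
```

Injecting the transport as a callable keeps the retry logic testable without a network, a useful property when the service itself is the thing under load.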
Dr
Tony Wildish
(Princeton University (US))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
PhEDEx has been serving the CMS community since 2004 as the data broker. Every PhEDEx operation is initiated by a request, such as a request to move data, a request to delete data, and so on. A request has its own life cycle, including creation, approval, notification, and bookkeeping, and the details depend on its type. Currently, only two kinds of requests, transfer and deletion, are fully integrated...
Bertrand Bellenot
(CERN)
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
In order to be able to browse (inspect) ROOT files in a platform-independent way, a JavaScript version of the ROOT I/O subsystem has been developed. This allows the content of ROOT files to be displayed in most available web browsers, without having to install ROOT or any other software on the server or on the client. This gives direct access to ROOT files from any new device in a lightweight way....
Bertrand Bellenot
(CERN)
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
In my poster I'll present a new graphical back-end for ROOT that has been developed for the Mac OS X operating system as an alternative to the more than 15 year-old X11-based version. It represents a complete implementation of ROOT's GUI, 2D and 3D graphics based on Apple's native APIs/frameworks, written in Objective-C++.
Daniela Remenska
(NIKHEF (NL))
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
A big challenge in concurrent software development is early discovery of design errors which can lead to deadlocks or race-conditions. Traditional testing does not always expose such problems in complex distributed applications. Performing more rigorous formal analysis, like model-checking, typically requires a model which is an abstraction of the system. For object-oriented software, UML is...
Mr
Igor Mandrichenko
(Fermilab)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
Over several years, we have developed a number of collaborative tools used by groups and collaborations at FNAL, which is becoming a Suite of Scientific Collaborative Tools. Currently, the suite includes:
- Electronic Logbook (ECL),
- Shift Scheduler,
- Speakers Bureau and
- Members Database.
These products organize and help run the collaboration at every stage of its life...
Jakob Blomer
(CERN)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the integrity of the data transferred to clients over the internet....
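The integrity guarantee mentioned above rests on content addressing: data fetched through an untrusted proxy cache is checked against a hash published in a signed catalogue. A minimal sketch of that check, assuming SHA-1 content hashes (CVMFS's actual catalogue format is more involved):

```python
import hashlib

def verify_object(data, expected_sha1):
    """Check that `data` (bytes fetched through an untrusted cache) matches
    the content hash pinned in a signed catalogue entry."""
    return hashlib.sha1(data).hexdigest() == expected_sha1

# The catalogue pins the hash of the genuine payload; any payload altered
# in transit fails the check and is rejected.
payload = b"#!/bin/sh\necho experiment software\n"
catalogue_hash = hashlib.sha1(payload).hexdigest()
ok = verify_object(payload, catalogue_hash)
bad = verify_object(payload + b"tampered", catalogue_hash)
```

Because only the catalogue needs to be signed, arbitrary http caches can serve the bulk data without being trusted for either integrity or freshness of individual objects.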
Federico Stagni
(CERN),
Mario Ubeda Garcia
(CERN)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
Within this paper we present an autonomic computing-resources management system used by LHCb for assessing the status of their Grid resources. The Grids of Virtual Organizations include heterogeneous resources. For example, LHC experiments very often use resources not provided by WLCG, and Cloud Computing resources will soon provide a non-negligible fraction of their computing power.
The lack of...
Giovanni Zurzolo
(Universita e INFN (IT))
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
Artificial Neural Networks (ANN) are widely used in High Energy Physics, in particular as software for data analysis. In the ATLAS experiment that collects proton-proton and heavy ion collision data at the Large Hadron Collider, ANN are mostly applied to make a quantitative judgment on the class membership of an event, using a number of variables that are supposed to discriminate between...
Mr
Ajay Kumar
(Indian Institute of Technology Indore)
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
Ajay Kumar and Ankhi Roy
For the PANDA collaboration
Indian Institute of Technology Indore, Indore-4520017, India
Email: ajayk@iiti.ac.in
The PANDA experiment is one of the main experiments at the future accelerator facility FAIR which is currently under construction in Darmstadt, Germany. Experiments will be performed with intense, phase space cooled antiproton beams incident on a...
Dr
Guy Barrand
(Universite de Paris-Sud 11 (FR))
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
Softinex is the name of a software environment targeted at data analysis and visualization. It covers the C++ inlib and exlib "header only" libraries that permit, through GL-ES and a maximum of common code, building applications deliverable on the AppleStore (iOS), GooglePlay (Android), traditional laptops/desktops under MacOSX, Linux and Windows, but also deliverable as a web service able to display...
Dr
Alexander Moibenko
(Fermi NAtiona Accelerator Laboratoy)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
Enstore is a tape-based Mass Storage System originally designed for the Run II Tevatron experiments at FNAL (CDF, D0). Over the years it has proven to be a reliable and scalable data archival and delivery solution that meets the diverse requirements of a variety of applications, including US CMS Tier 1, High Performance Computing, Intensity Frontier experiments, as well as data backups. Data intensive...
Dr
Simon Patton
(LAWRENCE BERKELEY NATIONAL LABORATORY)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The SPADE application was first used by the IceCube experiment to move its data files from the South Pole to Wisconsin. Since then it has been adapted by the DayaBay experiment to move its data files from the experiment, just outside Hong Kong, to both Beijing and LBNL. The aim of this software is to automate much of the data movement and warehousing that is often done by hand or with home-grown...
Alastair Dewhurst
(STFC - Science & Technology Facilities Council (GB))
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
During the early running of the LHC, multiple collaborations began to include Squid caches in their distributed computing models. The two main use cases are: remotely accessing conditions data via Frontier, which is used by ATLAS and CMS; and serving collaboration software via CVMFS, which is used by ATLAS, CMS, and LHCb, and is gaining traction with some non-LHC collaborations. As a...
Witold Pokorski
(CERN)
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
The LCG Generator Services project provides validated, LCG-compliant Monte Carlo generator code for both the theoretical and experimental communities at the LHC. It collaborates with the generator authors, as well as with the experiments' software developers and the experimental physicists.
In this paper we present the recent developments and the future plans of the project. We start with...
Benedikt Hegner
(CERN)
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
For more than ten years, the LCG Savannah portal has successfully served the LHC community to track issues in their software development cycles. In total, more than 8000 users and 400 projects use this portal. Despite its success, the underlying infrastructure that is based on the open-source project "Savane" did not keep up with the general evolution of web technologies and the increasing...
Dr
Xavier Espinal Curull
(CERN)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
This contribution describes the evolution of the main CERN storage system, CASTOR, as it manages the bulk data stream of the LHC and other CERN experiments, achieving nearly 100 PB of stored data by the end of LHC Run 1.
Over the course of 2012 the CASTOR service has addressed the Tier-0 data management requirements, focusing on a tape-backed archive solution, ensuring smooth operations of...
Christopher Tunnell
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
In the coming years, XENON1T, a ten-fold expansion of XENON100, will further explore the dark matter WIMP parameter space and must be able to cope with correspondingly higher data rates. With a focus on sustainable software architecture, and an experimental scale unlike that of collider experiments, a high-level trigger system is being designed for the coming years of XENON1T...
Oliver Keeble
(CERN)
10/14/13, 3:00 PM
Software Engineering, Parallelism & Multi-Core
Poster presentation
In recent years, with the end of EU Grid projects such as EGEE and EMI in sight, the management of software development, packaging and distribution has moved from a centrally organised approach to a collaborative one spread across several development teams. While selecting their tools and technologies, the different teams and services have gone through several trends and fashions of product...
Dr
Catherine Biscarat
(LPSC/IN2P3/CNRS France)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
We describe the synergy between CIMENT (a regional multidisciplinary HPC centre) and the infrastructures used for the analysis of data recorded by the ATLAS experiment at the LHC collider and the D0 experiment at the Tevatron.
CIMENT is the High Performance Computing (HPC) centre developed by Grenoble University. It is a federation of several scientific departments and it is based on the...
Daniele Francesco Kruse
(CERN)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
Disk access and tape migrations compete for network bandwidth in CASTOR's disk servers, over various protocols: RFIO, Xroot, root and GridFTP. As there are a limited number of tape drives, it is important to keep them busy all the time, at their nominal speed. With potentially hundreds of user read streams per server, the bandwidth for the tape migrations has to be guaranteed to a controlled...
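The bandwidth guarantee described in this abstract can be illustrated with a toy calculation (the reservation split and the numbers below are illustrative, not CASTOR's actual policy):

```python
# Toy sketch of the scheduling idea: reserve a fixed share of a disk
# server's network bandwidth for tape migrations, then split the
# remainder among the user read streams.  All figures are illustrative.
def stream_bandwidth(total_mb_s, tape_reserved_mb_s, n_user_streams):
    """Per-stream user bandwidth once the tape share is guaranteed."""
    if n_user_streams == 0:
        return 0.0
    # User streams never eat into the reserved tape share.
    user_share = max(total_mb_s - tape_reserved_mb_s, 0.0)
    return user_share / n_user_streams

# With a 10 Gb/s NIC (~1250 MB/s), reserving 250 MB/s for tape and
# serving 100 user streams leaves 10 MB/s per stream.
```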
Thomas Lindner
(T)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
ND280 is the off-axis near detector for the T2K neutrino experiment. ND280 is a sophisticated detector with multiple sub-systems, designed to characterize the T2K neutrino beam and measure neutrino cross-sections. We have developed a comprehensive system for processing and simulating the ND280 data, using computing resources from North America, Europe and Japan. The first key challenge has been...
michele pezzi
(Infn-cnaf)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
In large computing centers, such as the INFN-CNAF Tier-1, it is essential to be able to configure all the machines, depending on their use, in an automated way. For several years the Tier-1 has relied on Quattor, a server provisioning tool, which is currently used in production.
Nevertheless, we have recently started a comparison study involving other tools able to provide specific server installation...
Robert Fay
(University of Liverpool)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
A key aspect of ensuring optimum cluster reliability and productivity lies in keeping worker nodes in a healthy state. Testnodes is a lightweight node testing solution developed at Liverpool. While Nagios has been used locally for general monitoring of hosts and services, Testnodes is optimised to answer one question: is there any reason this node should not be accepting jobs? This tight focus...
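The single-question focus described here can be illustrated with a minimal sketch (the check names and thresholds are hypothetical, not the actual Testnodes code):

```python
# Minimal sketch of a Testnodes-style health check: run a short list of
# focused checks and report the first reason why this node should not
# be accepting jobs.  Check names and thresholds are illustrative only.
import os
import shutil

def check_disk(path="/tmp", min_free_gb=5):
    """Fail if the scratch area is nearly full."""
    free_gb = shutil.disk_usage(path).free / 1e9
    return None if free_gb >= min_free_gb else f"low disk on {path}: {free_gb:.1f} GB"

def check_load(max_load=64.0):
    """Fail if the 1-minute load average is excessive."""
    load1, _, _ = os.getloadavg()
    return None if load1 <= max_load else f"load too high: {load1:.1f}"

CHECKS = [check_disk, check_load]

def node_ok():
    """Return (True, None) if the node may accept jobs, else (False, reason)."""
    for check in CHECKS:
        reason = check()
        if reason is not None:
            return False, reason
    return True, None
```

A batch system hook can then drain the node whenever `node_ok()` returns a reason.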
Jason Webb
(Brookhaven National Lab)
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
The STAR experiment has adopted an Abstract Geometry Modeling Language (AgML) as the primary description of our geometry model. AgML establishes a level of abstraction, decoupling the definition of the detector from the software libraries used to create the concrete geometry model. Thus, AgML allows us to support both our legacy GEANT3 simulation application and our ROOT/TGeo based...
Adriana Telesca
(CERN)
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
ALICE (A Large Ion Collider Experiment) is a heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The ALICE DAQ (Data Acquisition System) is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches). The DAQ reads the data transferred from the...
Mr
Barthelemy Von Haller
(CERN)
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
ALICE (A Large Ion Collider Experiment) is a detector designed to study the physics of strongly interacting matter and the quark-gluon plasma produced in heavy-ion collisions at the CERN Large Hadron Collider (LHC). Due to the complexity of ALICE in terms of number of detectors and performance requirements, Data Quality Monitoring (DQM) plays an essential role in providing an online feedback...
Dr
Dario Barberis
(Università e INFN Genova (IT))
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
Modern scientific experiments collect vast amounts of data that must be cataloged to meet multiple use cases and search criteria. In particular, high-energy physics experiments currently in operation produce several billion events per year. A database with references to the files containing each event at every stage of processing is necessary in order to retrieve the selected events from...
Martin Woudstra
(University of Manchester (GB))
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
CERN’s Large Hadron Collider (LHC) is the highest-energy proton-proton collider, also providing the highest instantaneous luminosity of any hadron collider. Bunch crossings occurred every 50 ns in the 2012 runs, and the online event selection system must reduce the event recording rate down to a few hundred Hz, while events are produced in a harsh environment with many overlapping proton-proton...
Rafal Zbigniew Grzymkowski
(P)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
In multidisciplinary institutes the traditional way of organising computations is highly inefficient. A computer cluster dedicated to a single research group is typically exploited at a rather low level. The private cloud model enables various groups to share computing resources. It can boost the efficiency of infrastructure usage by a large factor and at the same time reduce maintenance costs....
Dr
Federico De Guio
(CERN)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The Data Quality Monitoring (DQM) software has proved to be a central tool in the CMS experiment. Its flexibility has allowed its integration in several environments: Online, for real-time detector monitoring; Offline, for the final, fine-grained data certification; Release-Validation, to constantly validate our reconstruction software; and Monte Carlo productions. The central tool to deliver Data...
Shima Shimizu
(Kobe University (JP))
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
The ATLAS jet trigger is an important element of the event selection process,
providing data samples for studies of Standard Model physics and searches for new
physics at the LHC. The ATLAS jet trigger system has undergone substantial
modifications over the past few years of LHC operations, as experience developed
with triggering in a high luminosity and high event pileup environment. In...
Mei YE
(IHEP)
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
The Daya Bay reactor neutrino experiment is designed to precisely determine the neutrino mixing angle θ13, with a sensitivity better than 0.01 in the parameter sin²2θ13 at the 90% confidence level. To achieve this goal, the collaboration has built eight functionally identical antineutrino detectors. The detectors are immersed in water pools that provide active and passive shielding against...
Mario Lassnig
(CERN)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
Rucio is the next-generation data management system supporting ATLAS physics workflows in the coming decade. Historically, clients interacted with the data management system via specialised tools, but in Rucio additional methods are provided. To support filesystem-like interaction with all ATLAS data a plugin to the DMLite software stack has been developed. It is possible to mount Rucio as a...
Dr
WooJin Park
(KIT)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The GridKa computing center, hosted by the Steinbuch Centre for Computing at the Karlsruhe Institute of Technology (KIT) in Germany, is the largest Tier-1 center used by the ALICE collaboration at the LHC. In 2013, GridKa provides 30k HEP-SPEC06, 2.7 PB of disk space, and 5.25 PB of tape storage to ALICE. The 10 Gbit/s network connections from GridKa to CERN, several Tier-1 centers and...
Norman Anthony Graf
(SLAC National Accelerator Laboratory (US))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The International Linear Collider (ILC) physics and detector
community recently completed an exercise to demonstrate the
physics capabilities of detector concepts. The Detailed
Baseline Design (DBD) involved the generation, simulation, reconstruction and analysis of large samples of Monte Carlo datasets. The detector simulations utilized extremely detailed Geant4 implementations of...
Thomas Baron
(CERN)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
For a long time HEP has been ahead of the curve in its use of remote collaboration tools such as videoconferencing and webcast, while the local CERN collaboration facilities lagged somewhat behind the expected quality standards for various reasons. That period ended with the creation by the CERN IT department in 2012 of an integrated conference room service, which provides guidance and...
Mr
Massimo Sgaravatto
(INFN Padova)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The Legnaro-Padova Tier-2 is a computing facility serving the
ALICE and CMS LHC experiments. It also supports other High Energy Physics experiments and other virtual organizations of different disciplines, which can opportunistically harness idle resources if
available.
The unique characteristic of this Tier-2 is its topology: the computational resources are spread in two different...
Sandra Saornil Gamarra
(Universitaet Zuerich (CH))
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
The experiment control system of the LHCb experiment is continuously evolving and improving. The guidelines and structure initially defined are kept, and more common tools are made available to all sub-detectors. Although the main system control is mostly integrated and actions are executed in common for the whole LHCb experiment, there is some degree of freedom for each sub-system to...
Sebastian Neubert
(CERN)
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
The LHCb experiment is a spectrometer dedicated to the study of heavy flavor
at the LHC. The rate of proton-proton collisions at the LHC is 15 MHz, but
resource limitations mean that only 5 kHz can be written to storage for offline analysis.
For this reason the LHCb data acquisition system -- the trigger -- plays a key role in
selecting signal events and rejecting background. In contrast to...
Pierrick Hanlet
(Illinois Institute of Technology)
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
The Muon Ionization Cooling Experiment (MICE) is a demonstration
experiment to prove the feasibility of cooling a beam of muons for
use in a Neutrino Factory and/or Muon Collider. The MICE cooling
channel is a section of a modified Study II cooling channel which
will provide a 10% reduction in beam emittance. In order to ensure a
reliable measurement, MICE will measure the beam emittance...
Joern Mahlstedt
(NIKHEF (NL))
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
The LHC is the world's highest energy and luminosity proton-proton (p-p) collider. During 2012, luminosities neared 10^34 cm^-2 s^-1, with bunch crossings occurring every 50 ns. The online event selection system of the ATLAS detector must reduce the event recording rate to only a few hundred Hz while, at the same time, selecting events considered interesting. This presentation will specifically...
Pierrick Hanlet
(Illinois Institute of Technology)
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
The international Muon Ionisation Cooling Experiment (MICE) is designed to demonstrate the principle of muon ionisation cooling for the first time, for application to a future Neutrino Factory or Muon Collider. In order to measure the change in beam emittance, MICE is equipped with a pair of high precision scintillating fibre trackers. The trackers are required to measure a 10% change in...
Daniele Francesco Kruse
(CERN)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
Physics data stored on CERN tapes is quickly reaching the 100 PB milestone. Tape is an ever-changing technology that is still following Moore's law in terms of capacity, which means we can store more and more data every year in the same number of tapes. However, this doesn't come for free: the first obvious cost is the new higher-capacity media. The second, less known, cost is related to moving...
Mr
Andrey SHEVEL
(Petersburg Nuclear Physics Institute)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
A small physics group (3-15 persons) might use a number of computing facilities for analysis/simulation, development/testing and teaching. Different types of computing facilities are discussed: collaboration computing facilities, a group-local computing cluster (including colocation), and cloud computing. The author discusses the growing variety of computing options for small groups and...
Bob Cowles
(BrightLite Information Security)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
As HEP collaborations grow in size (10 years ago, BaBar was 600 scientists; now, both CMS and ATLAS are on the order of 3000 scientists), the collaboratory has become a key factor in allowing identity management (IdM), once confined to individual sites, to scale with the number of members, number of organizations, and the complexity of the science collaborations. Over the past two decades (at...
Jason Webb
(Brookhaven National Lab)
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
The STAR experiment pursues a broad range of physics topics in pp, pA and AA collisions produced by the Relativistic Heavy Ion Collider (RHIC). Such a diverse experimental program demands a simulation framework capable of supporting an equally diverse set of event generators, and a flexible event record capable of storing the (common) particle-wise and (varied) event-wise information provided...
Oliver Holme
(ETH Zurich, Switzerland)
10/14/13, 3:00 PM
Data acquisition, trigger and controls
Poster presentation
The Electromagnetic Calorimeter (ECAL) is one of the sub-detectors of the Compact Muon Solenoid (CMS) experiment of the Large Hadron Collider (LHC) at CERN. The Detector Control System (DCS) that has been developed and implemented for the CMS ECAL was deployed in accordance with the LHC schedule and has been supporting the detector data-taking since LHC physics runs started in 2009. During...
Andrew David Lahiff
(STFC - Science & Technology Facilities Council (GB))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
While migration from the grid to the cloud has been gaining increasing momentum in recent times, WLCG sites are currently still expected to accept grid job submission, and this is likely to continue for the foreseeable future. Furthermore, sites which support multiple experiments may need to provide both cloud and grid-based access to resources for some time, as not all experiments may be...
Shaun De Witt
(STFC - Science & Technology Facilities Council (GB))
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
LHC experiments are moving away from a traditional HSM solution for Tier 1s in order to separate long-term tape archival from disk-only access, using the tape as a true archive (write once, read rarely). In this poster we present two methods by which this is being achieved at two distinct sites, ASGC and RAL, which have approached this change in very different ways.
Robert Fay
(University of Liverpool)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
As the number of cores on chip continues to trend upwards and new CPU architectures emerge, increasing CPU density and diversity presents multiple challenges to site administrators.
These include scheduling for massively multi-core systems (potentially including GPU (integrated and dedicated) and many integrated core (MIC)) to ensure a balanced throughput of jobs while preserving overall...
Daniel Hugo Campora Perez
(CERN)
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The LHCb Online Network is a real time high performance network, in which 350 data sources send data over a Gigabit Ethernet LAN to more than 1500 receiving nodes. The aggregated throughput of the application, called Event Building, is more than 60 GB/s. The protocol employed by LHCb makes the sending nodes transmit simultaneously portions of events to one receiving node at a time, which is...
Dr
Daniel van der Ster
(CERN), Dr
Jakub Moscicki
(CERN)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
AFS is a mature and reliable storage service at CERN, having worked for more than 20 years as the provider of Linux home directories and application areas. Recently, our AFS service has been growing at unprecedented rates (300% in the past year), thanks to innovations in both the hardware and software components of our file servers.
This work will present how AFS is used at CERN and how...
Daniele Gregori
(Istituto Nazionale di Fisica Nucleare (INFN)),
Luca dell'Agnello
(INFN-CNAF),
Pier Paolo Ricci
(INFN CNAF),
Tommaso Boccali
(Sezione di Pisa (IT)), Dr
Vincenzo Vagnoni
(INFN Bologna), Dr
Vladimir Sapunenko
(INFN)
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
The Mass Storage System installed at the INFN CNAF Tier-1 is one of the biggest hierarchical storage facilities in Europe. It currently provides storage resources for about 12% of all LHC data, as well as for other High Energy Physics experiments.
The Grid Enabled Mass Storage System (GEMSS) is the present solution implemented at the INFN CNAF Tier-1 and it is based on a custom integration...
Ivan Antoniev Dzhunov
(University of Sofia)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
Given the distributed nature of the grid and the way CPU resources are pledged and scattered around the globe, VOs face the challenge of monitoring the use of these resources. For CMS and the operation of centralized workflows, monitoring how many production jobs are running and pending in the glideinWMS production pools is very important. The Dashboard SSB (Site Status Board) provides...
Dr
Tomoaki Nakamura
(University of Tokyo (JP))
10/14/13, 3:00 PM
Facilities, Production Infrastructures, Networking and Collaborative Tools
Poster presentation
The Tokyo Tier2 center, located at the International Center for Elementary Particle Physics (ICEPP) at the University of Tokyo, was established as a regional analysis center in Japan for the ATLAS experiment. Official operation within WLCG started in 2007, after several years of development beginning in 2002. In December 2012 we replaced almost all hardware in moving to the third system...
Jetendr Shamdasani
(University of the West of England (GB))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
Efficient, distributed and complex software is central to the analysis of high energy physics (HEP) data. One area that has been somewhat overlooked in recent years is the tracking of the development of HEP software, of its use in data analyses, and of its evolution over time. This tracking of analyses to provide records of actions performed, outcomes achieved and (re-)design...
Daniele Francesco Kruse
(CERN)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
Administrating a large-scale, multi-protocol, hierarchical tape storage infrastructure like the one at CERN, which stores around 30 PB per year, requires an adequate monitoring system for quickly spotting malfunctions, easier debugging and on-demand report generation. The main challenges for such a system are: to cope with log-format diversity and information scattered among several log files,...
Morten Dam Joergensen
(Niels Bohr Institute (DK))
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
The ATLAS offline data quality monitoring infrastructure functioned successfully during the 2010-2012 run of the LHC. During the 2013-14 long shutdown, a large number of upgrades will be made in response to user needs and to take advantage of new technologies - for example, deploying richer web applications, improving dynamic visualization of data, streamlining configuration, and moving...
Rahmat Rahmat
(University of Mississippi (US))
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
HFGFlash is a very fast simulation of electromagnetic showers using parameterizations of the shower profiles in the Hadronic Forward Calorimeter. HFGFlash shows good agreement with collision data and previous test beam results, and can simulate showers about 10000 times faster than Geant4. We will report the latest developments of HFGFlash...
Robin Eamonn Long
(Lancaster University (GB))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
Maximizing the use of computing facilities whilst maintaining versatile and flexible setups leads to a need for on-demand virtual machines provided through cloud computing. GridPP is currently investigating the role that cloud computing, in the form of virtual machines, can play in supporting particle physics analyses. As part of this research we look at the ability of VMware's ESXi...
Igor Sfiligoi
(University of California San Diego)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
Monitoring is an important aspect of any job scheduling environment, and Grid computing is no exception. Writing quality monitoring tools is however a hard proposition, so the Open Science Grid decided to leverage existing enterprise-class tools in the context of the glideinWMS pilot infrastructure, which powers a large fraction of its Grid computing. The product chosen is the CycleServer,...
Carl Henrik Ohman
(Uppsala University (SE))
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources become available to the HEP community. With the new cloud technologies also come new challenges, and one such challenge is the contextualization of cloud resources with regard to the requirements of the user and their experiment. In particular, on Google's new cloud platform Google...
Igor Sfiligoi
(University of California San Diego)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The HTCondor-based glideinWMS has become the product of choice for exploiting Grid resources for many communities. Unfortunately, its default operational model expects users to log into a machine running an HTCondor schedd before being able to submit their jobs. Many users would instead prefer to use their local workstation for everything.
A product that addresses this problem is rcondor, a...
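The rcondor idea, submitting from a local workstation without a local schedd, can be sketched as a thin ssh wrapper (the host name and command construction below are illustrative; the real tool must also make the local job files visible on the submit host):

```python
# Sketch of the rcondor idea: build HTCondor commands that execute on a
# remote submit host over ssh, so the local workstation needs no schedd.
# The host name is a hypothetical placeholder.
import shlex

SUBMIT_HOST = "submit.example.org"  # hypothetical submit node

def rcondor(command, *args):
    """Build an ssh command line running an HTCondor command remotely."""
    remote = " ".join([command] + [shlex.quote(a) for a in args])
    return ["ssh", SUBMIT_HOST, remote]

# e.g. rcondor("condor_submit", "job.sub") builds:
#   ssh submit.example.org condor_submit job.sub
```

Passing the result to `subprocess.run` would then execute, say, `condor_q` on the submit host while the user stays on the workstation.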
Antanas Norkus
(Vilnius University (LT))
10/14/13, 3:00 PM
Event Processing, Simulation and Analysis
Poster presentation
The scrutiny and validation of the software and of the calibrations used to simulate and reconstruct collision events have been key elements of the physics performance of the CMS experiment.
Such scrutiny is performed in stages by approximately one hundred experts who master specific areas of expertise, ranging from the low-level reconstruction and calibration, which is specific to a...
Stephen Jones
(Liverpool University)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
VomsSnooper is a tool that provides an easy way to keep documents and sites up to date with the newest VOMS records from the Operations Portal, removing the need for manual edits to security configuration files.
Yaim is used to configure the middleware at grid sites. Specifically, Yaim processes variables that define which VOMS services are used to authenticate the users of any VO. The data...
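The kind of Yaim VOMS variables such a tool maintains can be sketched as follows (the rendering is simplified: real VOMSES entries also carry the server certificate DN, and the VO data here is illustrative):

```python
# Sketch of rendering Yaim-style VOMS configuration variables from VO
# records, the kind of data VomsSnooper keeps in sync with the
# Operations Portal.  Simplified: the server certificate DN normally
# present in VOMSES entries is omitted, and the VO data is made up.
def yaim_voms_lines(vo, servers):
    """Render VO_<VO>_VOMS_SERVERS / _VOMSES variables for one VO.

    servers is a list of (hostname, vomses_port) tuples.
    """
    key = vo.upper().replace(".", "_").replace("-", "_")
    urls = " ".join(f"vomss://{host}:8443/voms/{vo}?/{vo}" for host, _ in servers)
    vomses = " ".join(f"'{vo} {host} {port} {vo}'" for host, port in servers)
    return [
        f'VO_{key}_VOMS_SERVERS="{urls}"',
        f'VO_{key}_VOMSES="{vomses}"',
    ]
```

Regenerating these lines from the portal data, instead of editing them by hand, is the manual step the abstract says the tool removes.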
Alexandre Beche
(CERN),
David Tuckett
(CERN)
10/14/13, 3:00 PM
Distributed Processing and Data Handling A: Infrastructure, Sites, and Virtualization
Poster presentation
The Worldwide LHC Computing Grid provides resources for the four main virtual organizations. Along with data processing, data distribution is the key computing activity on the WLCG infrastructure. The scale of this activity is very large, the ATLAS virtual organization (VO) alone generates and distributes more than 40 PB of data in 100 million files per year. Another challenge is the...
Matevz Tadel
(Univ. of California San Diego (US))
10/14/13, 3:00 PM
Data Stores, Data Bases, and Storage Systems
Poster presentation
Following the smashing success of the XRootd-based USCMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd disk-based caching proxy. The first one simply starts fetching the whole file as soon as a file-open request is received, and is suitable when completely random file access is expected or it is already...
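The first caching strategy, fetching the whole file on open, can be sketched with a toy in-memory proxy (an illustration of the idea only, not the XRootd implementation):

```python
# Toy sketch of a whole-file-on-open caching proxy: the first open of a
# file triggers a full fetch into the local cache; all subsequent reads
# and opens are served from the cache.  In-memory stand-in for a disk
# cache; the fetch callable represents the remote federation source.
class PrefetchCache:
    def __init__(self, fetch_remote):
        self.fetch_remote = fetch_remote  # callable: name -> full file bytes
        self.cache = {}
        self.fetches = 0  # count remote fetches, to show caching works

    def open(self, name):
        """First open pulls the entire file; later opens hit the cache."""
        if name not in self.cache:
            self.cache[name] = self.fetch_remote(name)
            self.fetches += 1
        return self.cache[name]

    def read(self, name, offset, size):
        """Serve a random-access read entirely from the cached copy."""
        return self.open(name)[offset:offset + size]
```

Because the whole file is local after the first open, later random reads cost no further wide-area traffic, which is why this mode suits completely random access patterns.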