Dr
Torsten Antoni
(KIT - Karlsruhe Institute of Technology (DE))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
After a long period of project-based funding, during which the improvement of the services provided to the user communities was the main focus, distributed computing infrastructures (DCIs), having reached and established production quality, now need to tackle the issue of long-term sustainability.
With the transition from EGEE to EGI in 2010 the major part of the responsibility (especially...
Ramiro Voicu
(California Institute of Technology (US))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Current network technologies like dynamic network circuits and emerging protocols like OpenFlow, enable the network as an active component in the context of data transfers.
We present a framework which provides a simple interface for scientists to move data between sites over the Wide Area Network with bandwidth guarantees. Although the system hides the complexity from the end users, it was...
Marco Bencivenni
(INFN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
One of the main barriers against Grid widespread adoption in scientific communities stems from the intrinsic complexity of handling X.509 certificates, which represent the foundation of the Grid security stack.
To hide this complexity, in recent years, several Grid portals have been proposed which, however, do not completely solve the problem, either requiring that users manage their own...
Daniele Spiga
(CERN),
Hassen Riahi
(Universita e INFN (IT)),
Mattia Cinquilli
(Univ. of California San Diego (US))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The CMS distributed data analysis workflow assumes that jobs run in a different location to where their results are finally stored. Typically the user output must be transferred across the network from one site to another, possibly on a different continent or over links not necessarily validated for high bandwidth/high reliability transfer. This step is named stage-out and in CMS was...
Andrea Cristofori
(INFN-CNAF, IGI)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The accounting activity in a production computing Grid is of paramount importance in order to understand the utilization of the available resources. While several CPU accounting systems are deployed within the European Grid Infrastructure (EGI), storage accounting systems that are stable enough to be adopted in a production environment are not yet available.
A growing interest is being...
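The aggregation step such a storage accounting system performs can be illustrated with a toy sketch; the record fields (`vo`, `site`, `bytes_used`) are invented for this example and do not reflect the actual EGI accounting schema.

```python
from collections import defaultdict

# Toy sketch only: aggregate storage-usage records per (VO, site).
# The record layout here is hypothetical, not the real EGI schema.
def aggregate_storage(records):
    """Sum bytes used per (vo, site) pair from a list of usage records."""
    totals = defaultdict(int)
    for rec in records:
        totals[(rec["vo"], rec["site"])] += rec["bytes_used"]
    return dict(totals)

records = [
    {"vo": "alice", "site": "CNAF", "bytes_used": 10 * 2**40},
    {"vo": "alice", "site": "CNAF", "bytes_used": 5 * 2**40},
    {"vo": "atlas", "site": "CNAF", "bytes_used": 7 * 2**40},
]
```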
Costin Grigoras
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Since the ALICE experiment began data taking in late 2009, the amount of end user jobs on the AliEn Grid has increased significantly. Presently 1/3 of the 30K CPU cores available to ALICE are occupied by jobs submitted by about 400 distinct users. The overall stability of the AliEn middleware has been excellent throughout the 2 years of running, but the massive amount of end-user analysis and...
Rapolas Kaselis
(Vilnius University (LT))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
The goal for CMS computing is to maximise the throughput of simulated event generation while also processing the real data events as quickly and reliably as possible. To maintain this achievement as the quantity of events increases, since the beginning of 2011 CMS computing has migrated at the Tier 1 level from its old production framework, ProdAgent, to a new one, WMAgent. The WMAgent...
Dr
Alex Martin
(QUEEN MARY, UNIVERSITY OF LONDON),
Christopher John Walker
(University of London (GB))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
We describe a low-cost Petabyte scale Lustre filesystem deployed for High Energy Physics. The use of commodity storage arrays and bonded ethernet interconnects makes the array cost effective, whilst providing high bandwidth to the storage. The filesystem is a POSIX filesystem, presented to the Grid using the StoRM SRM. The system is highly modular. The building blocks
of the array, the...
Sergey Panitkin
(Brookhaven National Laboratory (US))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
In the past two years the ATLAS Collaboration at the LHC has collected a large volume of data and published a number of ground breaking papers. The Grid-based ATLAS distributed computing infrastructure played a crucial role in enabling timely analysis of the data. We will present a study of the performance and usage of the ATLAS Grid as a platform for physics analysis and discuss changes that...
Ms
Qiulan Huang
(Institute of High Energy Physics, Beijing)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
In today's information industry, the most talked-about new technologies are virtualization and cloud computing. Virtualization makes heterogeneous resources transparent to users, and plays a huge role in large-scale data center management solutions. Cloud computing emerges as a revolution in computing science which builds on virtualization, demonstrating a gigantic advantage in resource...
Alexey Anisenkov
(Budker Institute of Nuclear Physics (RU))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The ATLAS Grid Information System (AGIS) centrally stores and exposes static, dynamic and configuration parameters required to configure and to operate ATLAS distributed
computing systems and services. AGIS is designed to integrate information about resources, services and topology of the ATLAS grid infrastructure from various independent sources including BDII, GOCDB, the ATLAS data...
Zdenek Maxa
(California Institute of Technology (US))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
WMAgent is the core component of the CMS workload management system. One of the features of this job managing platform is a configurable messaging system aimed at generating, distributing and processing alerts: short messages describing a given alert-worthy informational or pathological condition. Apart from the framework's sub-components running within the WMAgent instances, there is a...
Dr
Christopher Jung
(KIT - Karlsruhe Institute of Technology (DE))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
The GridKa center at the Karlsruhe Institute of Technology is the largest ALICE Tier-1 center. It hosts 40,000 HEPSPEC'06, approximately 2.75 PB of disk space and 5.25 PB of tape space for A Large Ion Collider Experiment (ALICE) at the CERN LHC. These resources are accessed via the AliEn middleware. The storage is divided into two instances, both using the storage middleware xrootd.
We...
Pablo Saiz
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The AliEn workload management system is based on a central job queue which holds all tasks that have to be executed. The job brokering model itself is based on pilot jobs: the system submits generic pilots to the computing centres' batch gateways, and the assignment of a real job is done only when the pilot wakes up on the worker node. The model facilitates
a flexible fair share user job...
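The pull-based assignment at the heart of this model can be sketched minimally as follows; the job records and site-matching rule are simplified inventions, and the real AliEn TaskQueue matching also considers quotas, fair share, installed packages and more.

```python
# A minimal sketch of pilot-based brokering: the pilot, once running on a
# worker node, pulls the first job its site is allowed to execute.
# Job fields here are hypothetical, not the actual AliEn schema.
job_queue = [
    {"id": 1, "requires_site": "CERN"},
    {"id": 2, "requires_site": None},      # can run anywhere
    {"id": 3, "requires_site": "GridKa"},
]

def match_job(queue, pilot_site):
    """Pop and return the first queued job this pilot's site can run."""
    for i, job in enumerate(queue):
        if job["requires_site"] in (None, pilot_site):
            return queue.pop(i)
    return None
```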
Dr
Dagmar Adamova
(Nuclear Physics Institute of the AS CR Prague/Rez), Mr
Jiri Horky
(Institute of Physics of the AS CR Prague)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
ALICE, as well as the other experiments at the CERN LHC, has been building a distributed data management infrastructure since 2002. Experience gained during years of operations with different types of storage managers deployed over this infrastructure has shown that the most adequate storage solution for ALICE is the native XRootD manager developed within a CERN - SLAC collaboration. The...
Laura Tosoratto
(INFN)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
The emergence of hybrid GPU-accelerated clusters in the supercomputing landscape is now an established fact.
In this framework we proposed a new INFN initiative, the QUonG project, aiming to deploy a high performance computing system dedicated to scientific computations, leveraging commodity multi-core processors coupled with latest-generation GPUs.
The multi-node interconnection system is based on...
Mr
Martin Gasthuber
(Deutsches Elektronen-Synchrotron (DE))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
DESY has started to deploy modern, state of the art, industry based, scale out file services together with certain extension as a key component in dedicated LHC analysis environments like the National Analysis Facility (NAF) @DESY. In a technical cooperation with IBM, we will add identified critical features to the standard SONAS product line of IBM to make the system best suited for the...
Sergey Kalinin
(Bergische Universitaet Wuppertal (DE))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The Job Execution Monitor (JEM), a job-centric grid job monitoring software, is actively developed at the University of Wuppertal. It leverages Grid-based physics analysis and Monte Carlo event production for the ATLAS experiment by monitoring job progress and grid worker node health. Using message passing techniques, the gathered data can be supervised in real time by users, site admins and...
Luisa Arrabito
(IN2P3/LUPM on behalf of the CTA Consortium)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The Cherenkov Telescope Array (CTA) – an array of many tens of Imaging Atmospheric Cherenkov Telescopes deployed on an unprecedented scale – is the next generation instrument in the field of very high energy gamma-ray astronomy.
CTA will operate as an open observatory providing data products to the scientific community. An average data stream of some GB/s for about 1000 hours of observation...
Mikhail Titov
(University of Texas at Arlington (US))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Efficient distribution of physics data over ATLAS grid sites is one of the most important tasks for user data processing. ATLAS' initial static data distribution model over-replicated some unpopular data and under-replicated popular data, creating heavy disk space loads while under-utilizing some processing resources due to low data availability. Thus, a new data distribution mechanism was...
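The core idea of popularity-driven replication can be illustrated with a toy heuristic; this is not ATLAS's actual algorithm, and the threshold of one extra replica per 100 accesses is invented for the sketch.

```python
# Hedged illustration: a toy popularity heuristic in the spirit of dynamic
# data placement. Popular datasets gain replicas (up to a cap) so jobs have
# more sites to run at; unpopular ones fall back to a minimum.
def target_replicas(accesses, min_replicas=1, max_replicas=5):
    """More recent accesses -> more replicas, bounded on both sides."""
    return max(min_replicas, min(max_replicas, accesses // 100 + 1))
```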
Jaroslava Schovancova
(Acad. of Sciences of the Czech Rep. (CZ))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
This talk details a variety of monitoring tools used within the ATLAS Distributed Computing during the first 2 years of LHC data taking. We discuss tools used to monitor data processing from the very first steps performed at the Tier-0 facility at CERN after data is read out of the ATLAS detector, through data transfers to the ATLAS computing centers distributed world-wide. We present an...
Graeme Andrew Stewart
(CERN), Dr
Stephane Jezequel
(LAPP)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
This paper will summarize operational experience and improvements in ATLAS computing infrastructure during 2010 and 2011.
ATLAS has had 2 periods of data taking, with many more events recorded in 2011 than in 2010. It ran 3 major reprocessing campaigns. The activity in 2011 was similar to that in 2010, but scalability issues had to be addressed due to the increase in luminosity and trigger...
Jaroslava Schovancova
(Acad. of Sciences of the Czech Rep. (CZ))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
ATLAS Distributed Computing organized 3 teams to support data processing at Tier-0 facility at CERN, data reprocessing, data management operations, Monte Carlo simulation production, and physics analysis at the ATLAS computing centers located world-wide. In this talk we describe how these teams ensure that the ATLAS experiment data is delivered to the ATLAS physicists in a timely manner in the...
Danila Oleynik
(Joint Inst. for Nuclear Research (RU))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The ATLAS Distributed Data Management project DQ2 is responsible for the replication, access and bookkeeping of ATLAS data across more than 100 distributed grid sites. It also enforces data management policies decided on by the collaboration and defined in the ATLAS computing model.
The DQ2 deletion service is one of the most important DDM services. This distributed service interacts with 3rd...
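A deletion service of this kind typically groups pending replica deletions per site and issues them in bounded batches; the sketch below illustrates only that grouping step, with the storage-system calls omitted and the data layout invented for the example.

```python
# Sketch under assumptions: group (site, lfn) replica pairs into per-site
# deletion batches of bounded size, as a deletion service might before
# calling the storage back end. Not the actual DQ2 implementation.
def batch_by_site(replicas, batch_size=2):
    """Group (site, lfn) pairs into per-site batches of at most batch_size."""
    by_site = {}
    for site, lfn in replicas:
        by_site.setdefault(site, []).append(lfn)
    batches = []
    for site, lfns in by_site.items():
        for i in range(0, len(lfns), batch_size):
            batches.append((site, lfns[i:i + batch_size]))
    return batches
```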
Pavel Nevski
(Brookhaven National Laboratory (US))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The production system for Grid Data Processing (GDP) handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system empowered further data processing steps on the Grid performed by dozens of ATLAS physics groups with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities.
The system provides knowledge...
Laura Sargsyan
(A.I. Alikhanyan National Scientific Laboratory (AM))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Monitoring of the large-scale data processing of the ATLAS experiment includes monitoring of production and user analysis jobs.
Experiment Dashboard provides a common job monitoring solution, which is shared by the ATLAS and CMS experiments. This includes an accounting portal as well as real-time monitoring.
Dashboard job monitoring for ATLAS combines information from the Panda job processing...
Danila Oleynik
(Joint Inst. for Nuclear Research (RU))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The ATLAS Distributed Computing activities have so far concentrated on the "central" part of the experiment computing system, namely the first 3 tiers (the CERN Tier0, 10 Tier1 centers and over 60 Tier2 sites). Many ATLAS Institutes and National Communities have deployed (or intend to deploy) Tier-3 facilities. Tier-3 centers consist of non-pledged resources, which are usually dedicated to...
ATLAS Collaboration
(ATLAS)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The ATLAS Distributed Computing (ADC) project delivers production quality tools and services for ATLAS offline activities such as data placement and data processing on the Grid. The system has been capable of sustaining, with large contingency, the needed computing activities in the first years of LHC data taking, and has demonstrated flexibility in reacting promptly to new challenges....
Mr
Erekle Magradze
(Georg-August-Universitaet Goettingen (DE))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The automation of operations is essential to reduce manpower costs and improve the reliability of the system. The Site Status Board (SSB) is a framework which allows Virtual Organizations to monitor their computing activities at distributed sites and to evaluate site performance.
The ATLAS experiment intensively uses SSB for the distributed computing shifts, for estimating data processing and...
Mr
James Pryor
(Brookhaven National Laboratory)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Cobbler is a network-based Linux installation server, which, via a choice of web or CLI tools, glues together PXE/DHCP/TFTP and automates many associated deployment tasks. It empowers a facility's systems administrators to write scriptable and modular code, which can pilot the OS installation routine to proceed unattended and automatically, even across heterogeneous hardware. These tools make...
Dr
Jose Caballero Bejar
(Brookhaven National Laboratory (US))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The ATLAS experiment at the CERN LHC is one of the largest users of grid computing infrastructure, which is a central part of the experiment's computing operations. Considerable efforts have been made to use grid technology in the most efficient
and effective way, including the use of a pilot job based workload management framework.
In this model the experiment submits 'pilot' jobs to sites...
Dr
Xiaomei Zhang
(IHEP, China)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
A job submission and management tool is one of the necessary components in any distributed computing system. Such a tool should provide a user-friendly interface for physics production group and ordinary analysis users to access heterogeneous computing resources, without requiring knowledge of the underlying grid middleware. Ganga, with its common framework and customizable plug-in structure,...
Paul Rossman
(Fermi National Accelerator Laboratory (FNAL))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
In addition to the physics data generated each day from the CMS detector, the experiment also generates vast quantities of supplementary log data. From reprocessing logs to transfer logs this data could shed light on operational issues and assist with reducing inefficiencies and eliminating errors if properly stored, aggregated and analyzed. The term "big data" has recently taken the spotlight...
Alvaro Gonzalez Alvarez
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
For the past couple of years, a team at CERN and partners from the Citizen Cyberscience Centre (CCC) have been working on a project that enables general physics simulation programs to run in a virtual machine on volunteer PCs around the world. The project uses the Berkeley Open Infrastructure for Network Computing (BOINC) framework. Based on CERNVM and the job management framework Co-Pilot, this...
Dr
David Crooks
(University of Glasgow/GridPP)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
This presentation will cover the work conducted within the ScotGrid Glasgow Tier-2 site. It will focus on the multi-tiered network security architecture developed on the site to augment Grid site server security and will discuss the variety of techniques used including the utilisation of Intrusion Detection systems, logging and optimising network connectivity within the...
Martin Sevior
(University of Melbourne (AU))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The experimental high energy physics group at the University of Melbourne is a member of the ATLAS, Belle and Belle II collaborations. We maintain a local data centre which enables users to test pre-production code and to do final stage data analysis. Recently the Australian National eResearch Collaboration Tools and Resources (NeCTAR) organisation implemented a Research Cloud based on...
Giacinto Donvito
(Universita e INFN (IT))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
A Consortium between four LHC Computing Centers (Bari, Milano, Pisa and Trieste) was formed in 2010 to prototype Analysis-oriented facilities for CMS data analysis, using a grant from the Italian Ministry of Research. The Consortium aims at the realization of an ad-hoc infrastructure to ease the analysis activities on the huge data set collected by the CMS Experiment, at the LHC...
Derek John Weitzel
(University of Nebraska (US))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
It is common at research institutions to maintain multiple clusters that represent different owners or generations of hardware, or that fulfill different needs and policies. Many of these clusters are consistently underutilized, while researchers on campus could greatly benefit from the unused capacity. By leveraging principles from the Open Science Grid it is now possible to utilize...
Georgiana Lavinia Darlea
(Polytechnic University of Bucharest (RO))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
In the ATLAS Online computing farm, the majority of the systems are network booted - they run an operating system image provided via network by a Local File Server. This method guarantees the uniformity of the farm and allows very fast recovery in case of issues to the local scratch disks. The farm is not homogeneous and in order to manage the diversity of roles, functionality and hardware of...
Mr
Steffen Schreiner
(CERN, CASED/TU Darmstadt)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Grid computing infrastructures need to provide traceability and accounting of their users’ activity and protection against misuse and privilege escalation, where the delegation of privileges in the course of a job submission is a key concern. This work describes an improved handling of multi-user Grid jobs in the ALICE Grid Services.
A security analysis of the ALICE Grid job model is...
Neng Xu
(University of Wisconsin (US))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
With the start-up of the LHC in 2009, more and more data analysis facilities have been built or enlarged at Universities and laboratories. In the meantime, new technologies, like Cloud computing and Web3D, and new types of hardware, like smartphones and tablets, have become available and popular in the market. Is there a way to integrate them into the existing data analysis models and allow...
Prof.
Sudhir Malik
(University of Nebraska-Lincoln)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The CMS Analysis Tools model has now been used robustly in a plethora of physics papers. This model is examined to investigate successes and failures as seen by the analysts of recent papers.
Kenneth Bloom
(University of Nebraska (US))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
After years of development, the CMS distributed computing system is now in full operation. The LHC continues to set records for instantaneous luminosity, and CMS records data at 300 Hz. Because of the intensity of the beams, there are multiple proton-proton interactions per beam crossing, leading to larger and larger event sizes and processing times. The CMS computing system has responded...
Pablo Saiz
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Collaborative development proved to be a key to the success of the Dashboard Site Status Board (SSB), which is heavily used by ATLAS and CMS for the computing shifts and site commissioning activities.
The Dashboard Site Status Board (SSB) is an application that enables Virtual Organisation (VO) administrators to monitor the status of distributed sites. The selection, significance and...
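The kind of per-site evaluation such a board performs can be sketched as a toy rule mapping metric values to a status; the metric names and thresholds below are invented, since in the real SSB each VO defines its own metrics and their significance.

```python
# Illustrative only: a toy site-status evaluation in the spirit of the SSB.
# Metrics are assumed normalized to the 0..1 range; thresholds are invented.
def site_status(metrics):
    """Map a dict of 0..1 metric values to 'ok', 'warning' or 'error'."""
    worst = min(metrics.values())
    if worst >= 0.9:
        return "ok"
    if worst >= 0.6:
        return "warning"
    return "error"
```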
Boris Wagner
(University of Bergen (NO))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The Nordic Tier-1 for LHC is distributed over several, sometimes smaller, computing centers. In order to minimize administration effort, we are interested in running different grid jobs over one common grid middleware. ARC is selected as the internal middleware in the Nordic Tier-1. At the moment ARC has no
mechanism of automatic software packaging and deployment. The AliEn grid middleware,...
Niko Neufeld
(CERN),
Vijay Kartik Subbiah
(CERN)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
This paper describes the investigative study undertaken to evaluate shared filesystem performance and suitability in the LHCb Online environment. Particular focus is given to the measurements and field tests designed and performed on an in-house AFS setup, and related comparisons with NFSv3 and pNFS are presented. The motivation for the investigation and the test setup arises from the need to...
Andreas Heiss
(KIT - Karlsruhe Institute of Technology (DE))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
GridKa, operated by the Steinbuch Centre for Computing at KIT, is the German regional centre for high energy and
astroparticle physics computing, supporting currently 10 experiments and serving as a Tier-1 centre for the four LHC
experiments. Since the beginning of the project in 2002, the total compute power is upgraded at least once per year to follow
the increasing demands of the...
Robert Snihur
(University of Nebraska (US))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
There are approximately 60 Tier-3 computing sites located on campuses of collaborating institutions in CMS. We describe the function and architecture of these sites, and illustrate the range of hardware and software options. A primary purpose is to provide a platform for local users to analyze LHC data, but they are also used opportunistically for data production. While Tier-3 sites vary...
Anar Manafov
(GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Constant changes in computational infrastructure, like the current interest in Clouds, impose conditions on the design of applications. We must make sure that our analysis infrastructure, including source code and supporting tools, is ready for the on-demand computing (ODC) era.
This presentation is about a new analysis concept, which is driven by users' needs and completely disentangled from...
Dimitri Nilsen
(Karlsruhe Institute of Technology (KIT)), Dr
Pavel Weber
(Karlsruhe Institute of Technology (KIT))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
GridKa is a computing centre located in Karlsruhe. It serves as Tier-1 centre for the four LHC experiments and also provides its computing and storage resources for other non-LHC HEP and astroparticle physics experiments as well as for several communities of the German Grid Initiative D-Grid.
The middleware layer at GridKa comprises three main flavours: Globus, gLite and UNICORE. This...
Elisa Lanciotti
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
In the distributed computing model of WLCG Grid Storage Elements (SE) are by construction completely decoupled from the File Catalogs (FC) where the experiment's files are registered. On the basis of the experience of managing large volumes of data in such environment, inconsistencies have often happened either causing a waste of disk space, in case the data were deleted from the FC, but still...
Mr
Igor Sfiligoi
(University of California San Diego)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The CMS analysis computing model has always relied on jobs running near the data, with data allocation between CMS compute centers organized at management level, based on expected needs of the CMS community. While this model provided high CPU utilization during job run times, there were times when a large fraction of CPUs at certain sites were sitting idle due to lack of demand, all while...
Daniele Spiga
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
In CMS Computing the highest priorities for analysis tools are the improvement of the end users' ability to produce and publish reliable samples and analysis results as well as a transition to a sustainable development and operations model. To achieve these goals CMS decided to incorporate analysis processing into the same framework as the data and simulation processing. This strategy foresees...
Mr
Massimo Sgaravatto
(Universita e INFN (IT))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The European Middleware Initiative (EMI) project aims to deliver a consolidated set of middleware products based on the four major middleware
providers in Europe - ARC, dCache, gLite and UNICORE.
The CREAM (Computing Resource Execution And Management) Service, a service for job management operations at the Computing Element (CE) level, is one of the software products that are part of the EMI...
Marco Caberletti
(Istituto Nazionale Fisica Nucleare (IT))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
The extensive use of virtualization technologies in cloud environments has created the need for a new network access layer residing on hosts and connecting the various Virtual Machines (VMs). In fact, massive deployment of virtualized environments imposes requirements on networking for which traditional models are not well suited. For example, hundreds of users issuing cloud requests for which...
Dr
Ivan Logashenko
(Budker Institute Of Nuclear Physics)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Super Charm–Tau Factory (CTF) is a future electron-positron collider with a center-of-mass energy range from 2 to 5 GeV and a peak luminosity of about 10^35 cm^-2 s^-1, unprecedented for this energy range. The project of CTF is being developed at the Budker Institute of Nuclear Physics (Novosibirsk, Russia). The main
goal of experiments at Super Charm-Tau Factory is a study of the processes with...
Natalia Ratnikova
(KIT - Karlsruhe Institute of Technology (DE))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
All major experiments at the Large Hadron Collider (LHC) need to measure real storage usage at the Grid sites. This information is equally important for resource management, planning, and operations.
To verify the consistency of the central catalogs, experiments are asking sites to provide a full list of the files they have on storage, including size, checksum, and other file attributes. Such...
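The consistency check such site dumps enable can be sketched as a simple three-way classification; the sketch assumes both catalog and dump are available as `{filename: size}` dicts, whereas real dumps also carry checksums and timestamps.

```python
# A minimal consistency-check sketch: compare the experiment catalog against
# a site storage dump. "Dark" data sits on disk but is not catalogued;
# "lost" data is catalogued but missing from disk. Data layout is assumed.
def compare(catalog, site_dump):
    """Classify files as dark (on disk only), lost (in catalog only)
    or mismatched (in both, but with differing sizes)."""
    dark = {f for f in site_dump if f not in catalog}
    lost = {f for f in catalog if f not in site_dump}
    mismatched = {f for f in catalog
                  if f in site_dump and catalog[f] != site_dump[f]}
    return dark, lost, mismatched
```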
Mr
Haifeng Pi
(CMS)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
As part of the Advanced Networking Initiative (ANI) of ESnet, we exercise
a prototype 100 Gb/s network infrastructure for data transfer and processing for
OSG HEP applications.
We present results of these tests.
Mr
Andreas Petzold
(KIT)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
In 2012 the GridKa Tier-1 computing center hosts 130 kHEPSPEC06 of computing resources, 11 PB of disk and 17.7 PB of tape space. These resources are shared between the four LHC VOs and a number of national and international VOs from high energy physics and other sciences. CernVM-FS has been deployed at GridKa to supplement the existing NFS-based system to access VO software on the worker nodes. It...
Dr
Vincenzo Capone
(Universita e INFN (IT))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of...
Maxim Potekhin
(Brookhaven National Laboratory (US))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the...
Dr
Giacinto Donvito
(INFN-Bari)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The SuperB asymmetric energy e+e- collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab-1 and a luminosity target of 10^36 cm-2 s-1.
In this work we will present our...
Dr
Andrei Tsaregorodtsev
(Universite d'Aix - Marseille II (FR))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
File replica and metadata catalogs are essential parts of any distributed data management system, largely determining its functionality and performance. A new DIRAC File Catalog (DFC) was developed in the framework of the DIRAC Project, combining both replica and metadata catalog functionality. The DFC design is based on the practical experience with the data management system of the...
Adrian Casajus Ramo
(University of Barcelona (ES))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The DIRAC framework for distributed computing has been designed as a flexible and modular solution that can be adapted to the requirements of any community. Users interact with DIRAC via the command line, the web portal, or the DIRAC python API. The current DIRAC API requires users to run a python version compatible with DIRAC.
Some communities have developed their own...
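To make the python-API interaction pattern concrete, here is a minimal sketch of a DIRAC-style job description object. The class and method names mimic the spirit of the DIRAC python API but are simplified and hypothetical; the real API differs in names and scope.

```python
# Minimal sketch of a DIRAC-style job description API (names hypothetical).

class Job:
    def __init__(self):
        self._attrs = {}

    def set_name(self, name):
        self._attrs["JobName"] = name

    def set_executable(self, path, arguments=""):
        self._attrs["Executable"] = path
        self._attrs["Arguments"] = arguments

    def to_jdl(self):
        # Serialize to a JDL-like description that a server side could parse.
        body = "; ".join(f'{k} = "{v}"' for k, v in sorted(self._attrs.items()))
        return f"[ {body} ]"

job = Job()
job.set_name("demo")
job.set_executable("/bin/echo", arguments="hello grid")
jdl = job.to_jdl()
```

The key design point, which motivates language-agnostic alternatives such as REST interfaces, is that only this thin description layer needs to run on the user's side; everything else happens in the DIRAC services.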
Artur Jerzy Barczyk
(California Institute of Technology (US)),
Ian Gable
(University of Victoria (CA))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
For the Super Computing 2011 conference in Seattle, Washington, a 100 Gb/s connection was established between the California Institute of Technology conference booth and the University of Victoria.
A small team performed disk-to-disk data transfers between the two sites nearing 100 Gb/s, using only a small set of properly
configured transfer servers equipped with SSD drives. The circuit...
Johannes Elmsheuser
(Ludwig-Maximilians-Univ. Muenchen (DE))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The ATLAS experiment at the LHC at CERN is recording and simulating
several tens of petabytes of data per year. To analyse these data the
ATLAS experiment has developed and operates a mature and stable
distributed analysis (DA) service on the Worldwide LHC Computing Grid.
The service is actively used: more than 1400 users have submitted jobs
in the year 2011 and a total of more than 1 million...
Wojciech Lapka
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO...
Ricardo Brito Da Rocha
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The Disk Pool Manager (DPM) is a lightweight solution for grid enabled disk storage management. Operated at more than 240 sites it has the widest distribution of all grid storage solutions in the WLCG infrastructure.
It provides an easy way to manage and configure disk pools, and exposes multiple interfaces for data access (rfio, xroot, nfs, gridftp and http/dav) and control (srm). During...
Fabrizio Furano
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
A number of storage elements now offer standard protocol interfaces like NFS 4.1/pNFS and WebDAV, for access to their data repositories, in line with the standardization effort of the European Middleware Initiative (EMI). Here we report on work which seeks to exploit the federation potential of these protocols and build a system which offers a unique view of the storage ensemble and the...
Cinzia Luzzi
(CERN - University of Ferrara)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The ALICE collaboration has developed a production environment (AliEn) that implements several components of the Grid paradigm needed to simulate, reconstruct and analyze data in a distributed way.
In addition to Grid-like analysis, ALICE, like many experiments, provides local interactive analysis using the Parallel ROOT Facility (PROOF).
PROOF is part of the ROOT analysis framework...
Mr
Maxim Grigoriev
(Fermilab)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
The LHC computing model relies on intensive network data transfers.
The E-Center is a social collaborative web-based platform for Wide Area
network users. It is designed to give users all the tools required to
isolate, identify and resolve any network-performance-related
problem.
Cyril L'Orphelin
(CNRS/IN2P3),
Daniel Kouril
(Unknown), Dr
Mingchao Ma
(STFC - Rutherford Appleton Laboratory)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The Operations Portal is a central service used to support operations in the European Grid Infrastructure: a collaboration of National Grid Initiatives (NGIs) and several European International Research Organizations (EIROs). The EGI Operations Portal provides a single access point to operational information gathered from various sources, such as the site topology database, monitoring...
Emidio Giorgio
(Istituto Nazionale Fisica Nucleare (IT)),
Giuseppina Salente
(INFN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The EMI project intends to receive or rent an exhibition spot near the main, visible areas of the event (such as coffee-break areas) to exhibit the project's goals and latest achievements, such as the EMI 1 release.
The means used will be posters, videos and the distribution of flyers, sheets or brochures. It would be useful to have a 2x3 booth with panels available for posting posters, and...
Jon Kerr Nilsen
(University of Oslo (NO))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
To manage data in the grid, with its jungle of protocols and enormous amounts of data spread over different storage solutions, it is important to have a strong, versatile and reliable data management library. While there are several data management tools and libraries available, they all have different strengths and weaknesses, and it can be hard to decide which tool to use for which purpose.
EMI is...
Elisabetta Vilucchi
(Istituto Nazionale Fisica Nucleare (IT)),
Roberto Di Nardo
(Istituto Nazionale Fisica Nucleare (IT))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
In the ATLAS computing model, Tier2 resources are intended for MC production and end-user analysis activities. These resources are usually exploited via the standard GRID resource management tools, which are de facto a high-level interface to the underlying batch systems managing the contributing clusters. While this is working as expected, there are use cases where a more dynamic usage of...
Mr
Mark Mitchell
(University of Glasgow)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Due to the depletion of the IPv4 address space, the utilisation of IPv6 within Grid technologies and other IT infrastructure is becoming an increasingly pressing necessity for IP addressing. The employment and deployment of this addressing scheme has been discussed widely, at both the academic and commercial level, for several years. The uptake is not as advanced as was predicted and the...
Ms
Silvia Amerio
(University of Padova & INFN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The CDF experiment at Fermilab ended its Run-II phase in September 2011 after 11 years of operations and 10 fb-1 of collected data.
The CDF computing model is based on a Central Analysis Farm (CAF) consisting of local computing and
storage resources, supported by
OSG and LCG resources accessed through dedicated portals.
Recently a new portal, Eurogrid, has been developed to effectively...
David Cameron
(University of Oslo (NO))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Staging data to and from remote storage services on the Grid for users' jobs is a vital component of the ARC computing element. A new data staging framework for the computing element has recently been developed to address issues with the present framework, which has essentially remained unchanged since its original implementation 10
years ago. This new framework consists of an intelligent...
Dr
Andreas Peters
(CERN)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
EOS is a new disk based storage system used in production at CERN since autumn 2011. It is implemented using the plug-in architecture of the XRootD software framework and allows remote file access via XRootD protocol or POSIX-like file access via FUSE mounting. EOS was designed to fulfill specific requirements of disk storage scalability and IO scheduling performance for LHC analysis use...
Tadashi Maeno
(Brookhaven National Laboratory (US))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The PanDA Production and Distributed Analysis System plays a key role in the ATLAS distributed computing infrastructure.
PanDA is the ATLAS workload management system for processing all Monte-Carlo simulation and data reprocessing jobs in addition to user and group analysis jobs. The system processes more than 5 million jobs in total per week, and more than 1400 users have submitted analysis...
Claudio Grandi
(INFN - Bologna)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The Computing Model of the CMS experiment was prepared in 2005 and described in detail in the CMS Computing Technical Design Report. With the experience of the first years of LHC data taking and with the evolution of the available technologies, the CMS Collaboration identified areas where improvements were desirable. In this work we describe the most important modifications that have been, or...
Alexey Anisenkov
(Budker Institute of Nuclear Physics (RU))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of Russian Academy of Sciences including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational...
Simone Campana
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The ATLAS computing infrastructure was designed many years ago based on the assumption of rather limited network connectivity between computing centers. ATLAS sites have been organized in a hierarchical model, where only a static subset of all possible network links can be exploited and a static subset of well connected sites (CERN and the T1s) can cover important functional roles such as...
Adrian Casajus Ramo
(University of Barcelona (ES))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
DIRAC framework for distributed computing has been designed as a group of collaborating components, agents and servers, with persistent database back-end. Components communicate with each other using DISET, an in-house protocol that provides Remote Procedure Call (RPC) and file transfer capabilities. This approach has provided DIRAC with a modular and stable design by enforcing stable...
Dr
Ziyan Deng
(Institute of High Energy Physics, Beijing, China)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The BES III detector is a new spectrometer operating at the upgraded high-luminosity Beijing Electron-Positron Collider (BEPCII). The BES III experiment studies physics in the tau-charm energy region from 2 GeV to 4.6 GeV. Since spring 2009, BEPCII has produced large-scale data samples. All the data samples were processed successfully and many important physics results have...
Rodney Walker
(Ludwig-Maximilians-Univ. Muenchen (DE))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Chirp is a distributed file system specifically designed for the wide area network, and developed by the University of Notre Dame CCL group. We describe the design features making it particularly suited to the Grid environment,
and to ATLAS use cases. The deployment and usage within ATLAS distributed computing are discussed, together with scaling tests and evaluation for the various use cases.
Diego Casadei
(New York University (US))
22/05/2012, 13:30
Event Processing (track 2)
Poster
After about two years of data taking with the ATLAS detector, considerable experience has been gained with the custom-developed trigger monitoring and reprocessing infrastructure.
The trigger monitoring can be roughly divided into online and offline monitoring. The online monitoring calculates and displays all rates at every level of the trigger and evaluates up to 3000 data quality...
Pablo Saiz
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The Experiment Dashboard system provides common solutions for monitoring job processing, data transfers and site/service usability. Over the last seven years, it proved to play a crucial role in the monitoring of the LHC computing activities, distributed sites and services.
It has been one of the key elements during the commissioning of the distributed computing systems of the LHC...
Steven Timm
(Fermilab)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
FermiCloud is an Infrastructure-as-a-Service facility deployed at Fermilab
based on OpenNebula that has been in production for more than a year.
FermiCloud supports a variety of production services on virtual machines
as well as hosting virtual machines that are used as development and
integration platforms. This infrastructure has also been used as a testbed for
commodity storage...
Steven Timm
(Fermilab)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
FermiGrid is the facility that provides the Fermilab Campus Grid
with unified job submission, authentication, authorization and
other ancillary services for the Fermilab scientific computing
stakeholders.
We have completed a program of work to make these services resilient
to high authorization request rates, as well as failures of building
or network infrastructure.
We will present...
Dr
Don Holmgren
(Fermilab)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
As part of the DOE LQCD-ext project, Fermilab designs, deploys, and operates dedicated high performance clusters for parallel lattice QCD (LQCD) computations. Multicore processors benefit LQCD simulations and have contributed to the steady decrease in price/performance for these calculations over the last decade. We currently operate two large conventional clusters, the older with over 6,800...
Caitriana Nicholson
(Graduate University of the Chinese Academy of Sciences)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The BES III experiment at the Institute of High Energy Physics (IHEP), Beijing, uses the high-luminosity BEPC II e+e- collider to study physics in the τ-charm energy region around 3.7 GeV; BEPC II has produced the world’s largest samples of J/ψ and ψ’ events to date. An order of magnitude increase in the data sample size over the 2011-2012 data-taking period demanded a move from a very...
Dr
Oleg Lodygensky
(LAL - IN2P3 - CNRS)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Desktop grids (DG) are a well-known technology for aggregating volunteer computing resources donated by individuals to dynamically construct a virtual cluster. Much effort has been devoted in recent years to extending desktop grids and interconnecting them with other distributed computing resources, especially so-called “service grid” middleware such as “gLite”, “ARC” and “Unicore”.
In the former...
Dr
Tony Wildish
(Princeton University)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
PhEDEx is the data-movement solution for CMS at the LHC. Created in 2004, it is now one of the longest-lived components of the CMS dataflow/workflow world.
As such, it has undergone significant evolution over time, and continues to evolve today, despite being a fully mature system. Originally a toolkit of agents and utilities dedicated to specific tasks, it is becoming a more open framework...
Adrien Devresse
(University of Nancy I (FR))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The Grid File Access Library (GFAL) is a library designed for universal and simple access to grid storage systems. Completely re-designed and re-written, version 2.0 of GFAL provides a complete abstraction of the complexity and heterogeneity of grid storage systems (DPM, LFC, dCache, StoRM, ARC, ...) and of the data management protocols (RFIO, gsidcap, LFN, dcap, SRM,...
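The abstraction described above boils down to dispatching a uniform call to a protocol plugin selected by URL scheme. The sketch below illustrates that idea in miniature; the plugin classes and their behaviour are hypothetical, not GFAL's actual plugin interface.

```python
# Illustrative sketch of scheme-based plugin dispatch, the idea behind a
# storage abstraction layer like GFAL 2.0 (plugin classes are invented).

from urllib.parse import urlparse

class LocalPlugin:
    scheme = "file"
    def open(self, url):
        return f"local open: {urlparse(url).path}"

class SRMPlugin:
    scheme = "srm"
    def open(self, url):
        return f"srm open via endpoint {urlparse(url).netloc}"

class StorageContext:
    """Dispatch a uniform open() call to the protocol plugin
    registered for the URL scheme."""
    def __init__(self, plugins):
        self._plugins = {p.scheme: p for p in plugins}

    def open(self, url):
        scheme = urlparse(url).scheme
        if scheme not in self._plugins:
            raise ValueError(f"no plugin for scheme {scheme!r}")
        return self._plugins[scheme].open(url)

ctx = StorageContext([LocalPlugin(), SRMPlugin()])
r1 = ctx.open("file:///tmp/data.root")
r2 = ctx.open("srm://se.example.org/dpm/data.root")
```

The benefit for applications is that the calling code never changes when a site migrates between storage systems; only the URL does.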
Mr
Igor Sfiligoi
(INFN Laboratori Nazionali di Frascati)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Multi-user pilot infrastructures provide significant advantages for the communities using them, but also create new security challenges.
With Grid authorization and mapping happening with the pilot credential only, final user identity is not properly addressed in the classic Grid paradigm.
In order to solve this problem, OSG and EGI have deployed glexec, a privileged executable on the worker...
Federico Stagni
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Within the DIRAC framework in the LHCb collaboration, we deployed an autonomous policy system acting as a central status information point for grid elements.
Experts working as grid administrators have broad and very deep knowledge of the underlying system, which makes them invaluable. We have attempted to formalize this knowledge in an autonomous system able to aggregate information,...
Dr
Kilian Schwarz
(GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The future FAIR experiments CBM and PANDA have computing requirements that currently cannot be satisfied by any single computing centre. A larger, distributed computing infrastructure is needed to cope with the amount of data to be simulated and analysed.
Since 2002, GSI operates a Tier2 center for ALICE@CERN. The central component of the GSI computing facility...
Mr
Laurence Field
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The primary goal of a Grid information system is to display the current composition and state of a Grid infrastructure. Its purpose is to provide the information required for workload and data management. As these models evolve, the information system requirements need to be revisited and revised. This paper first documents the results from a recent survey of LHC VOs on the information system...
Bogdan Lobodzinski
(DESY)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The H1 Collaboration at HERA is now in the era of high-precision analyses based on the final and complete data sample. A natural consequence of this is a huge increase in the demand for simulated Monte Carlo (MC) events. In response to this increase, a framework for large-scale MC production using the LCG Grid Infrastructure was developed. After 3 years, the H1 MC Computing...
Lukasz Kokoszkiewicz
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The hBrowse framework is a generic monitoring tool designed to meet the needs of various communities connected to grid computing. It is highly configurable and easy to adjust and extend according to a specific community's needs. It is an HTML/JavaScript client-side application utilizing the latest web technologies to provide a presentation layer for arbitrary hierarchical data structures. Each part...
Olivier Raginel
(Massachusetts Inst. of Technology (US))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
The CMS experiment online cluster consists of 2300 computers and 170 switches or routers operating on a 24-hour basis. This huge infrastructure must be monitored in such a way that the administrators are proactively warned of any failures or degradation in the system, in order to avoid or minimize downtime, which can lead to loss of data taking. The number of metrics monitored per host...
Miguel Coelho Dos Santos
(CERN)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
With many servers and server parts, the environment of warehouse-sized data centers is increasingly complex. Server life-cycle management and hardware failures are responsible for frequent changes that need to be managed.
To manage these changes better, a project codenamed "hardware hound", focusing on hardware failure trending and hardware inventory, has been started at CERN.
By creating and...
Dr
Gabriele Garzoglio
(FERMI NATIONAL ACCELERATOR LABORATORY)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
By the end of 2011, a number of US Department of Energy (DOE) National Laboratories will have access to a 100 Gb/s wide-area network backbone. The ESnet Advanced Networking Initiative (ANI) project is intended to develop a prototype network, based on emerging 100 Gb/s ethernet technology. The ANI network will support DOE’s science research programs. A 100 Gb/s network testbed is a key...
Mr
Miguel Villaplana Perez
(Universidad de Valencia (ES))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
The ATLAS Tier3 at IFIC-Valencia is attached to a Tier2 that has 50% of the Spanish Federated Tier2 resources. In its design, the Tier3 includes a GRID-aware part that shares some of the features of Valencia's Tier2 such
as using Lustre as a file system. ATLAS users, 70% of IFIC's users, also have the possibility of analysing data with a PROOF farm and storing them locally.
In this...
Federica Legger
(Ludwig-Maximilians-Univ. Muenchen)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more to come in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes more than 80 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation,...
Mr
Andrea Chierici
(INFN-CNAF)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
This work shows the optimizations we have been investigating and
implementing at the KVM virtualization layer in the INFN Tier-1 at
CNAF, based on more than a year of experience in running thousands of
virtual machines in a production environment used by several
international collaborations. These optimizations increase the
adaptability of virtualization solutions to demanding...
Mr
Pier Paolo Ricci
(INFN CNAF)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
The INFN Tier1 at CNAF is the first-level Italian High Energy Physics
computing center, providing resources to the scientific community through
the grid infrastructure. The Tier1 is composed of a very complex
infrastructure divided into different parts: the hardware layer, the
storage services, the computing resources (i.e. worker nodes adopted for
analysis and other activities) and...
Andrew Mcnab
(University of Manchester)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
We describe our experience of operating a large Tier-2 site since 2005 and how we have developed an integrated management system using third-party, open source components. This system tracks individual assets and records their attributes such as MAC and IP addresses; derives DNS and DHCP configurations from this database; creates each host's installation and re-configuration scripts; monitors...
Dr
Ana Y. Rodríguez-Marrero
(Instituto de Física de Cantabria (UC-CSIC))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
High Energy Physics (HEP) analyses are becoming more complex and demanding due to the large amount of data collected by the current experiments. The Parallel ROOT Facility (PROOF) provides researchers with an interactive tool to speed up the analysis of huge volumes of data by exploiting parallel processing on both multicore machines and computing clusters. The typical PROOF deployment...
Maxim Potekhin
(Brookhaven National Laboratory (US))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The PanDA Workload Management System is the basis for distributed production and analysis for the ATLAS experiment at the LHC. In this role, it relies on sophisticated dynamic data movement facilities developed in ATLAS.
In certain scenarios, such as small research teams in ATLAS Tier-3 sites and non-ATLAS Virtual Organizations supported by the Open Science Grid consortium (OSG), the overhead...
Albert Puig Navarro
(University of Barcelona (ES))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The gUSE (Grid User Support Environment) framework allows users to create, store and distribute application workflows. This workflow architecture includes a wide variety of payload execution operations, such as loops, conditional execution of jobs and combination of outputs. These complex multi-job workflows can easily be created and modified by application developers through the WS-PGRADE portal....
Gabriele Garzoglio
(Fermi National Accelerator Laboratory)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies...
Tomas Kouba
(Acad. of Sciences of the Czech Rep. (CZ))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Computing Centre of the Institute of Physics in Prague provides computing and storage resources
for various HEP experiments (D0, Atlas, Alice, Auger) and currently operates
more than 300 worker nodes with more than 2500 cores and provides more than 2PB of disk space. Our site is limited to one C-sized block of IPv4 addresses, and hence we had to move most of our worker nodes behind the NAT....
Stephen Gowdy
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
This work focuses on the creation and validation tests of a replica and transfer system for computational Grids,
inspired by the needs of High Energy Physics (HEP).
Due to the high volume of data created by the HEP experiments, an efficient file and dataset replica system may play
an important role in the computing model. Data replica systems allow the creation of copies,...
Michael John Kenyon
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Ganga is an easy-to-use frontend for the definition and management of analysis jobs, providing a uniform interface across multiple distributed computing systems. It is the main end-user distributed analysis tool for the ATLAS and LHCb experiments and provides the foundation layer for the HammerCloud system, used by the LHC experiments for validation and stress testing of their numerous...
Waseem Daher
(Oracle)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Today, every OS in the world requires regular reboots in order to be up to date and secure. Since reboots cause downtime and disruption, sysadmins are forced to choose between security and convenience.
Until Ksplice. Ksplice is a new technology that can patch a kernel while the system is running, with no disruption whatsoever. We use this technology to provide Ksplice Uptrack, a service that...
Federico Stagni
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
We present LHCbDIRAC, an extension of the DIRAC community Grid solution that handles the specific needs of LHCb.
The DIRAC software has been developed for many years within LHCb only. Nowadays it is a generic software, used by many scientific communities worldwide. Each community wanting to take advantage of DIRAC has to develop an extension, containing all the necessary code for handling their...
Artem Harutyunyan
(CERN),
Dag Larsen
(University of Bergen (NO))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Long-term preservation of scientific data represents a challenge to all experiments. Even after an experiment has reached its end of life, it may be necessary to reprocess the data. There are two aspects of long-term data preservation: "data" and "software". While data can be preserved by migration, it is more complicated for the software. Preserving source code and binaries is not enough; the...
Dr
Ulrich Schwickerath
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
In 2008 CERN launched a project aiming at virtualising the batch farm. It strictly distinguishes between infrastructure and guests, and is thus able to serve, along with its initial batch farm target, as an IaaS infrastructure, which can be exposed to users. The system was put into production at small scale at Christmas 2010, and has since grown to almost 500 virtual machine slots in spring...
Dr
Stefan Roiser
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The increase of luminosity in the LHC during its second year of operation (2011) was achieved by delivering more protons per bunch and increasing the number of bunches. This change of running conditions required some changes in the LHCb Computing Model. The consequences of the higher pileup are a larger event size and longer processing time, but also the possibility for LHCb to propose and get...
Dr
Daniele Bonacorsi
(Universita e INFN (IT))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
The LHCONE project aims to provide effective entry points into a network infrastructure that is intended to be private to the LHC Tiers. This infrastructure is not intended to replace the LHCOPN, which connects the highest tiers, but rather to complement it, addressing the connection needs of the LHC Tier-2 and Tier-3 sites which have become more important in the new less-hierarchical...
Dr
Xavier Espinal Curull
(Universitat Autònoma de Barcelona (ES))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Installation and post-installation mechanisms are critical points for computing centres to streamline production services. Managing hundreds of nodes is a challenge for any computing centre, and there are many tools able to cope with this problem. The desired features include the ability to do incremental configuration (no need to bootstrap the service to make it manageable by the tool),...
Ioannis Charalampidis
(Aristotle Univ. of Thessaloniki (GR))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The creation and maintenance of a Virtual Machine (VM) is a complex process. To build the VM image, thousands of software packages have to be collected, disk images suitable for different hypervisors have to be built, integrity tests must be performed, and eventually the resulting images have to become available for download. In the meanwhile, software updates for the older versions must be...
Andrzej Nowak
(CERN openlab)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
The continued progression of Moore’s law has led to many-core platforms becoming easily accessible commodity equipment. New opportunities that arose from this change have also brought new challenges: harnessing the raw potential of computation of such a platform is not always a straightforward task. This paper describes practical experience coming out of the work with many-core systems at CERN...
Prof.
Roger Jones
(Lancaster University (GB))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
MARDI-Gross builds on previous work with the LIGO collaboration, using the ATLAS experiment as a use case to develop a toolkit on data management for people making proposals for large High Energy Physics experiments, as well as experiments such as LIGO and LOFAR, and also for those assessing such proposals. The toolkit will also be of interest to those active in data management for new and...
Dr
Santiago Gonzalez De La Hoz
(IFIC-Valencia)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The ATLAS computing and data models have been moving away from the strict, hierarchical MONARC model to a mesh model. The evolution of the computing model also requires an evolution of the network infrastructure, to enable any Tier2 and Tier3 to connect easily to any Tier1 or Tier2. This requires some changes to the data model:
a) Any site can replicate data from any other site.
b) Dynamic...
David Cameron
(University of Oslo (NO))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Monitoring of Grid services is essential to provide a smooth experience for users and to give administrators running the services fast, easy-to-understand diagnostics. GangliARC makes use of the widely used Ganglia monitoring tool to present web-based graphical metrics of the ARC computing element. These include statistics of running and finished jobs, data transfer metrics, as well as...
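As a sketch of how such metrics can be harvested for web presentation, the fragment below parses a simplified, hypothetical sample of Ganglia gmond XML output; real gmond output carries many more attributes per metric, and the metric names here are invented, not taken from GangliARC.

```python
# Sketch: extracting job-count metrics from a (simplified, hypothetical)
# Ganglia gmond XML fragment. Metric names and hosts are invented.
import xml.etree.ElementTree as ET

SAMPLE = """<GANGLIA_XML>
 <CLUSTER NAME="arc-ce">
  <HOST NAME="ce01.example.org">
   <METRIC NAME="jobs_running" VAL="42" TYPE="uint32"/>
   <METRIC NAME="jobs_finished" VAL="1337" TYPE="uint32"/>
  </HOST>
 </CLUSTER>
</GANGLIA_XML>"""

def metrics(xml_text):
    """Return {metric name: integer value} for every METRIC element."""
    root = ET.fromstring(xml_text)
    return {m.get("NAME"): int(m.get("VAL")) for m in root.iter("METRIC")}

print(metrics(SAMPLE))
```

A web front end would then only need to poll such XML periodically and plot the resulting dictionary.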
Ilija Vukotic
(Universite de Paris-Sud 11 (FR))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a...
Jorge Amando Molina-Perez
(Univ. of California San Diego (US))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
The CMS offline computing system is composed of more than 50 sites and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS; the former collects metrics from sensors...
Ms
Vanessa Hamar
(CPPM-IN2P3-CNRS)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Parallel job execution in the grid environment using MPI technology presents a number of challenges for the sites providing this support. Multiple flavors of MPI libraries, shared working directories required by certain applications, and special settings for the batch systems make MPI support difficult for site managers. On the other hand, the workload management systems with pilot jobs...
Mr
Fabio Hernandez
(IN2P3/CNRS Computing Centre & IHEP Computing Centre)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
By aggregating the storage capacity of hundreds of sites around the world, distributed data-processing platforms such as the LHC computing grid offer solutions for transporting, storing and processing massive amounts of experimental data, addressing the requirements of virtual organizations as a whole. However, from our perspective, individual workflows require a higher level of flexibility,...
Ivan Fedorko
(CERN)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
In the last few years, new requirements have been received for visualization of monitoring data: advanced graphics, flexibility in configuration and decoupling of the presentation layer from the monitoring repository.
Lemonweb is the data visualization component of the LHC Era Monitoring (Lemon) system. Lemonweb consists of two sub-components: a data collector and a web visualization...
Mr
Massimo Sgaravatto
(Universita e INFN (IT))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The EU-funded project EMI, now in its second year, aims at providing unified, standardized, easy-to-install software for distributed computing infrastructures.
CREAM is one of the middleware products in the EMI distribution:
it implements a Grid job management service which allows the submission, management and monitoring of computational jobs to local resource management...
Alessandro Di Girolamo
(CERN), Dr
Andrea Sciaba
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
For several years the LHC experiments have relied on the WLCG Service Availability Monitoring framework (SAM) to run functional tests on their distributed computing systems. The SAM tests have become an essential tool to measure the reliability of the Grid infrastructure and to ensure reliable computing operations, both for the sites and the experiments.
Recently the old SAM framework was replaced...
Natalia Ratnikova
(KIT - Karlsruhe Institute of Technology (DE))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The CMS experiment has to move Petabytes of data among dozens of computing centres with low latency in order to make efficient use of its resources. Transfer operations are well established to achieve the desired level of throughput, but operators lack a system to identify early on transfers that will need manual intervention to reach completion.
File transfer latencies are sensitive to the...
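One simple way to flag transfers likely to need intervention early is a percentile cut against the latencies of recently completed transfers. The sketch below illustrates that idea only; it is not the mechanism proposed in the abstract, and all names, thresholds and numbers are invented.

```python
# Hypothetical illustration: flag in-flight transfers whose age already
# exceeds the p-th percentile latency of recently completed transfers.

def percentile(values, p):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100.0 * len(ordered)) - 1))
    return ordered[k]

def flag_slow(completed_latencies, in_flight, p=95):
    """Return names of in-flight transfers older than the p-th
    percentile latency of completed ones."""
    cutoff = percentile(completed_latencies, p)
    return [name for name, age in in_flight if age > cutoff]

completed = [30, 35, 40, 42, 50, 55, 60, 65, 70, 600]  # minutes (invented)
in_flight = [("fileA", 20), ("fileB", 700)]            # (name, age in minutes)
print(flag_slow(completed, in_flight))
```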
Julien Leduc
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Newer generations of processors come with no increase in their clock frequency, and the same is true for memory chips. In order to achieve more performance, the core count is getting higher, and to feed all the cores on a chip with instructions and data, the number of memory channels must follow the same trend.
Non Uniform Memory Access (NUMA) architecture allowed the CPU manufacturers to...
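The cost of non-uniform access can be sketched with a simple weighted-latency model; the local and remote latencies below are illustrative assumptions, not measurements from the work described.

```python
# Toy NUMA model: a thread pays a higher latency for memory on a remote
# node than on its local node. LOCAL_NS and REMOTE_NS are assumed values.

LOCAL_NS = 80.0    # local-node access latency in ns (assumed)
REMOTE_NS = 140.0  # remote-node access latency in ns (assumed)

def avg_latency_ns(local_fraction):
    """Average access latency given the fraction of local accesses."""
    return local_fraction * LOCAL_NS + (1.0 - local_fraction) * REMOTE_NS

# A NUMA-unaware allocation spread evenly over two nodes sees ~50%
# remote accesses; pinning memory to the local node avoids them.
print(f"unpinned: {avg_latency_ns(0.5):.0f} ns")
print(f"pinned:   {avg_latency_ns(1.0):.0f} ns")
```

Under these assumed numbers, NUMA-aware placement cuts the average latency noticeably, which is why memory binding matters on many-core hosts.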
Simon William Fayer
(Imperial College Sci., Tech. & Med. (GB)),
Stuart Wakefield
(Imperial College Sci., Tech. & Med. (GB))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Reading and writing data on a disk-based high-capacity storage system has long been a troublesome task. While disks handle sequential reads and writes well, when they are interleaved, performance drops off rapidly due to the time required to move the disk's read-write head(s) to a different position. An obvious solution to this problem is to replace the disks with an alternative storage...
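The seek penalty described here can be made concrete with a toy cost model; the seek time and transfer rate below are illustrative assumptions, not measurements from the system under discussion.

```python
# Toy cost model for sequential vs interleaved reads on a spinning disk.
# SEEK_MS and TRANSFER_MB_S are illustrative assumptions, not measurements.

SEEK_MS = 8.0          # average head-movement time per seek (assumed)
TRANSFER_MB_S = 120.0  # sustained sequential transfer rate (assumed)

def read_time_ms(total_mb, seeks):
    """Estimated time to read total_mb megabytes with a given seek count."""
    return seeks * SEEK_MS + total_mb / TRANSFER_MB_S * 1000.0

# One stream reading 1000 MB sequentially: essentially a single seek.
sequential = read_time_ms(1000, seeks=1)

# Two interleaved streams over the same 1000 MB: in the worst case the
# head bounces between them, so every 1 MB block costs a seek.
interleaved = read_time_ms(1000, seeks=1000)

print(f"sequential:  {sequential:.0f} ms")
print(f"interleaved: {interleaved:.0f} ms")
```

Even with generous assumptions, seeks dominate once access is interleaved, which is the motivation for the alternative storage media the abstract goes on to discuss.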
Dr
Giuseppe Bagliesi
(INFN Sezione di Pisa)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
While the model for a Tier2 is well understood and implemented within the HEP community, a refined design for analysis-specific sites has not been agreed upon as clearly. We aim to describe the solutions adopted at INFN Pisa, the biggest Tier2 in the Italian HEP community. A standard Tier2 infrastructure is optimized for Grid CPU and storage access, while a more interactive-oriented use of...
Andreas Gellrich
(DESY)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
DESY is one of the largest WLCG Tier-2 centres for ATLAS, CMS and LHCb world-wide and the home of a number of global VOs. At the DESY-HH Grid site more than 20 VOs are supported by one common Grid infrastructure to allow for the opportunistic usage of federated resources. The VOs share roughly 4800 job slots in 800 physical CPUs of 400 hosts operated by a TORQUE/MAUI batch system.
On...
Gerardo GANIS
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
With the advent of the analysis phase of LHC data-processing, interest in PROOF technology has considerably increased. While setting up a simple PROOF cluster for basic usage is reasonably straightforward, exploiting the several new functionalities added in recent times may be complicated.
PEAC, standing for PROOF Enabled Analysis Cluster, is a set of tools aiming to facilitate the setup...
Sam Skipsey
(University of Glasgow / GridPP)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
While, historically, Grid Storage Elements have relied on semi-proprietary protocols for data transfer (gridftp for site-to-site, and rfio/dcap/other for local transfers), the rest of the world has not stood still in providing its own solutions to data access.
dCache, DPM and StoRM all now support access via the widely implemented HTTP/WebDAV standard, and dCache and DPM both support...
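HTTP/WebDAV access means a storage namespace can be listed with standard verbs such as PROPFIND. The sketch below only builds the raw request text for a one-level directory listing, without sending it; the host and path are hypothetical.

```python
# Sketch: constructing (not sending) a WebDAV PROPFIND request, the verb
# used to list a directory over HTTP. Host and path are hypothetical.

def propfind_request(host, path, depth=1):
    """Return the raw HTTP request text for a directory listing."""
    return (f"PROPFIND {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Depth: {depth}\r\n"       # Depth: 1 = entries one level down
            f"Content-Length: 0\r\n"
            f"\r\n")

req = propfind_request("se01.example.org", "/dpm/example.org/home/atlas/")
print(req.splitlines()[0])
```

Because the verbs are standard, any WebDAV-capable client or library can talk to such a Storage Element without Grid-specific tooling.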
José Flix
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. The CMS experiment relies on the File Transfer Service (FTS) for data distribution, a low-level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and...
Stephen Gowdy
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The CERN Virtual Machine (CernVM) Software Appliance is a project developed at CERN with the goal of allowing the execution of the experiments' software on different operating systems in an easy way for the users. To achieve this it makes use of Virtual Machine images consisting of a JEOS (Just Enough Operating System) Linux image, bundled with CVMFS, a distributed file system for software....
Dr
Dirk Hoffmann
(CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France)
22/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
on behalf of the PLUME Technical Committee <http://projet-plume.org>
PLUME - FEATHER is a non-profit project created to Promote economicaL, Useful and Maintained softwarE For the Higher Education And THE Research communities. The site references software, mainly Free/Libre Open Source Software (FLOSS) from French universities and national research organisations,...
Vincent Garonne
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
This paper describes a user monitoring framework for very large data management systems that maintain high numbers of data movement transactions. The proposed framework prescribes a method for generating meaningful information from collected tracing data that allows the data management system to be queried on demand for specific user usage patterns in respect to source and destination...
Kati Lassila-Perini
(Helsinki Institute of Physics (FI))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The data collected by the LHC experiments are unique and present both an opportunity and a challenge for long-term preservation and re-use. The CMS experiment is defining a policy for the preservation of and access to its data and is starting the implementation of the policy. This note describes the driving principles of the policy and summarises the actions and activities which are planned for...
Mine Altunay
(Fermi National Accelerator Laboratory)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Identity management infrastructure has been a key work area for the Open
Science Grid (OSG) security team for the past year. The progress of web-based
authentication protocols such as OpenID and SAML, and of scientific federations
such as InCommon, prompted OSG to evaluate its current identity management
infrastructure and propose ways to incorporate new protocols and methods.
For the couple...
Marko Petek
(Universidade do Estado do Rio de Janeiro (BR))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
This work concerns the ongoing efforts to integrate the CMS Computing Model with a volunteer computing project under development at CERN, LHC@home, thus allowing CMS analysis jobs and Monte Carlo production activities to be executed on this paradigm, which has a growing user base.
The LHC@home project allows the use of the CernVM (a virtual machine technology...
Alexey SEDOV
(Universitat Autònoma de Barcelona (ES))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
We present the prototype deployment of a private cloud at PIC and the tests performed in the context of providing a computing service for ATLAS. The prototype is based on the OpenNebula open source cloud computing solution. The possibility of using CernVM virtual machines as the standard for ATLAS cloud computing is evaluated by deploying a
Panda pilot agent as part of the VM...
Julia Andreeva
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The WLCG Transfer Dashboard is a monitoring system which aims to provide a global view of the WLCG data transfers and to reduce redundancy of monitoring tasks performed by the LHC experiments. The system is designed to work transparently across LHC experiments and across various technologies used for data transfer. Currently every LHC experiment monitors data transfers via experiment-specific...
Christopher Hollowell
(Brookhaven National Laboratory)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Ksplice/Oracle Uptrack is a software tool and update subscription service which allows system administrators to apply security and bug fix patches to the Linux kernel running on servers/workstations without rebooting them. The RHIC/ATLAS Computing Facility at Brookhaven National Laboratory (BNL) has deployed Uptrack on nearly 2000 hosts running Scientific Linux and Red Hat Enterprise Linux. ...
Paul Nilsson
(University of Texas at Arlington (US))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The Production and Distributed Analysis system
(PanDA) in the ATLAS experiment uses pilots to execute submitted jobs on the worker nodes.
The pilots are designed to deal with different runtime conditions and failure scenarios, and support many storage systems.
This talk will give a brief overview of the PanDA pilot system and will present major features and recent improvements including...
Gavin Mccance
(CERN)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
The CERN Computer Centre is reviewing strategies for optimizing the use of the existing infrastructure in the future. There have been significant developments in the area of computer centre and configuration management tools over the last few years. CERN is examining how these modern, widely-used tools can improve the way in which we manage the centre, with a view to reducing the overall...
Alexander Moibenko
(Fermilab)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
By 2009 the Fermilab Mass Storage System had encountered several challenges:
1. The required amount of data stored and accessed in both tiers of the system (dCache and Enstore) had significantly increased.
2. The number of clients accessing Mass Storage System had also increased from tens to hundreds of nodes and from hundreds to thousands of parallel requests.
To address these...
Arne Wiebalck
(CERN)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Serving more than 3 billion accesses per day, the CERN AFS cell is one of the most active installations in the world. Limited by overall cost, the ever-increasing demand for more space and higher I/O rates drives an architectural change from small high-end disks organised in fibre-channel fabrics towards external SAS-based storage units with large commodity drives. The presentation...
Valerie Hendrix
(Lawrence Berkeley National Lab. (US))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Deployment, maintenance and recovery of a scientific cluster, which has complex, specialized services, can be a time consuming task requiring the assistance of Linux system administrators, network engineers as well as domain experts. Universities and small institutions that have a part-time FTE with limited knowledge of the administration of such clusters can be strained by such maintenance...
Dr
Dimitri Bourilkov
(University of Florida (US))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
This paper reports the design and implementation of a secure, wide area network, distributed filesystem by the ExTENCI project, based on the Lustre filesystem. The system is used for remote access to analysis data from the CMS experiment at the Large Hadron Collider, and from the Lattice Quantum ChromoDynamics (LQCD) project. Security is provided by Kerberos authentication and authorization...
Mr
Pedro Manuel Rodrigues De Sousa Andrade
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The Worldwide LHC Computing Grid (WLCG) infrastructure continuously operates thousands of grid services scattered around hundreds of sites. Participating sites are organized in regions and support several virtual organizations, thus creating a very complex and heterogeneous environment. The Service Availability Monitoring (SAM) framework is responsible for the monitoring of this...
Alessandro Di Girolamo
(CERN),
Fernando Harald Barreiro Megino
(CERN IT ES)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The LHC experiments' computing infrastructure is hosted in a distributed way across different computing centers in the Worldwide LHC Computing Grid and needs to run with high reliability. It is therefore crucial to offer a unified view to shifters, who generally are not experts in the services, and give them the ability to follow the status of resources and the health of critical systems in...
Alessandro De Salvo
(Universita e INFN, Roma I (IT))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The ATLAS Collaboration manages one of the largest collections of software among High Energy Physics experiments. Traditionally this software has been distributed via rpm or pacman packages, and has been installed on every site and user's machine, using more space than needed since the releases could not always share common binaries. As soon as the software has grown in size and...
Dr
Giacinto Donvito
(INFN-Bari)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Storage systems are evolving not only in size but also in the technologies they use. SSD disks are currently being introduced in storage facilities for HEP experiments, and their performance is being tested in comparison with standard magnetic disks.
The tests are performed by running a real CMS data analysis for a typical use case and exploiting the features provided by PROOF-Lite, that...
Sebastien Ponce
(CERN)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
This is an update on CASTOR (CERN Advanced Storage) describing the recent evolution and related experience in production during the latest high-intensity LHC runs.
In order to handle the increasing data rates (10GB/s average for 2011), several major improvements have been introduced.
We describe in particular the new scheduling system that has replaced the original CASTOR one. It removed the...
Dr
Andrei Tsaregorodtsev
(Universite d'Aix - Marseille II (FR))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The DIRAC Project was initiated to provide a data processing system for the LHCb Experiment at CERN. It provides all the necessary functionality and performance to satisfy the current and projected future requirements of the LHCb Computing Model. A considerable restructuring of the DIRAC software was undertaken in order to turn it into a general purpose framework for building distributed...
Dr
Tomas Linden
(Helsinki Institute of Physics (FI))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Tier-2 computing sites in the Worldwide Large Hadron Collider Computing Grid (WLCG) host CPU resources (Compute Element, CE) and storage resources (Storage Element, SE). The vast amount of data from the Large Hadron Collider (LHC) experiments that needs to be processed requires good and efficient use of the available resources. Having a good CPU efficiency for the end users' analysis jobs requires...
Dr
Gabriele Garzoglio
(FERMI NATIONAL ACCELERATOR LABORATORY)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The Open Science Grid (OSG) supports a diverse community of new and existing users to adopt and make effective use of the Distributed High Throughput Computing (DHTC) model. The LHC user community has deep local support within the experiments. For other smaller communities and individual users the OSG provides a suite of consulting and technical services through the User Support organization....
Fabrizio Furano
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Born in the context of EMI (European Middleware Initiative), the SYNCAT project has as its main purpose the incremental reduction of the divergence among the contents of remote file catalogues, such as the LFC, the Grid Storage Elements and the experiments' private databases.
Aiming at giving ways for these remote systems to interact transparently in order to keep their...
Stuart Purdie
(University of Glasgow)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Failure is endemic in the Grid world - as with any large, distributed computer system, at some point things will go wrong. Whether it is down to a problem with hardware, network or software, the sheer size of a production Grid requires operation under the assumption that some of the jobs will fail. Some of those failures are unavoidable (e.g. network loss during data staging), some are preventable but...
Dr
Adam Lyon
(Fermilab)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Fermilab Intensity Frontier experiments like Minerva, NOvA, g-2 and Mu2e currently operate without an organized data handling system, relying instead on completely manual management of files on large central disk arrays at Fermilab. This model severely limits the computing resources that the experiments can leverage to those tied to the Fermilab site, prevents the use of coherent staging and...
Sam Skipsey
(University of Glasgow / GridPP)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The caching, HTTP-mediated filesystem CVMFS, while first developed for use with the CernVM project, has quickly become a significant part of several VOs' software distribution policies, with ATLAS being particularly interested.
The benefits of CVMFS do not extend only to large VOs, however; small virtual organisations can find software distribution to be problematic, as they...
German Cancio Melia
(CERN)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
With currently around 55PB of data stored on over 49000 cartridges, and around 2PB of fresh data coming every month, CERN’s large tape infrastructure is continuing its growth. In this contribution, we will detail the progress achieved and the ongoing steps in our strategy of turning tape storage from an HSM environment into a sustainable long-term archiving solution. In particular, we...
Steven Murray
(CERN)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
The CERN Advanced STORage manager (CASTOR) is used to archive to tape the physics data of past and present physics experiments. Data is migrated (repacked) from older, lower density tapes to newer, high-density tapes approximately every two years to follow the evolution of tape technologies and to keep the volume occupied by the tape cartridges relatively stable. Improving the performance of...
Dr
Silvio Pardi
(INFN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The SuperB asymmetric-energy e+e- collider and detector, to be built at the newly founded Nicola Cabibbo Lab, will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab^-1 and a luminosity target of 10^36 cm^-2 s^-1.
This luminosity translates into the...
Donato De Girolamo
(INFN),
Stefano Zani
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
The monitoring and alert system is fundamental for the management and the operation
of the network in a large data center such as an LHC Tier-1.
The network of the INFN Tier-1 at CNAF is a multi-vendor environment: for its management
and monitoring several tools have been adopted and different sensors have been developed.
In this paper, after an overview on the different aspects to be...
Lorenzo RINALDI
(INFN CNAF (IT))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
The large amount of data produced by the ATLAS experiment needs new computing paradigms for data processing and analysis, involving many
Computing Centres spread around the world. The computing workload is managed by regional federations, called Clouds.
The Italian Cloud consists of a main (Tier-1) centre, located in Bologna, four secondary (Tier-2) centres, and a few smaller (Tier-3)...
Vincent Garonne
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The DDM Tracer Service traces and monitors ATLAS file operations on the Worldwide LHC Computing Grid. The volume of traces has increased significantly since the service started in 2009: there are now about 5 million trace messages every day, with peaks greater than 250 Hz and peak rates continuing to climb, which poses a big challenge to the current service structure.
Analysis...
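The quoted figures already imply the sizing problem: the daily average rate is far below the quoted peak, so the service must absorb large bursts. A back-of-envelope check:

```python
# Back-of-envelope check of the trace rates quoted above:
# ~5 million messages per day versus peaks above 250 Hz.

DAILY_MESSAGES = 5_000_000
SECONDS_PER_DAY = 86_400
PEAK_HZ = 250

average_hz = DAILY_MESSAGES / SECONDS_PER_DAY
print(f"average rate: {average_hz:.1f} Hz")
print(f"peak/average: {PEAK_HZ / average_hz:.1f}x")
```

So a design sized only for the average rate would need to buffer or shed load during bursts more than four times that rate.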
Fabrizio Furano
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
ATLAS decided to move from a globally distributed file catalogue to a central instance at CERN.
This talk describes the ATLAS LFC merge exercise from the analysis phase over the prototyping and stress testing to the final execution phase.
We demonstrate that with careful preparation even major architectural changes could be implemented while minimizing the impact on the experiments...
Mr
Igor Sfiligoi
(University of California San Diego)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
For a few years OSG has been operating at UCSD a glideinWMS factory for several scientific communities, including CMS analysis, HCC and GLOW. This setup worked fine, but it had become a single point of failure. OSG thus recently added another instance at Indiana University, serving the same user communities. Similarly, CMS has been operating a glidein factory dedicated to reprocessing...
Mr
Milosz Zdybal
(Institute of Nuclear Physics)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Providing computing infrastructure to end-users in an efficient and user-friendly way has always been a big challenge in the IT market. “Cloud computing” is an approach that addresses these issues, and recently it has been gaining more and more popularity. A well-designed cloud computing system gives elasticity in resource allocation and allows for efficient usage of computing infrastructure. The...
Dr
Stefano Dal Pra
(INFN)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Keeping track of the layout of the IT resources in a big datacentre is a complex task.
DOCET is a database-backed web tool designed and implemented at INFN. It aims at providing a uniform interface to manage and retrieve the needed information about one or more datacentres, such as available hardware, software and their status.
Having a suitable application is however useless until...
Andreas Haupt
(Deutsches Elektronen-Synchrotron (DE))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
DESY is one of the world-wide leading centers for research with particle accelerators, synchrotron light and astroparticles. DESY participates in LHC as a Tier-2 center, supports on-going analyses of HERA data, is a leading partner for ILC, and runs the National Analysis Facility (NAF) for LHC and ILC in the framework of the Helmholtz Alliance, Physics at the Terascale. For the research with...
Dmitry Ozerov
(Deutsches Elektronen-Synchrotron (DE)),
Yves Kemp
(Deutsches Elektronen-Synchrotron (DE))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Since mid-2010, the Scientific Computing department at DESY has been operating a storage and data access evaluation laboratory, the DESY Grid Lab, equipped with 256 CPU cores and about 80 TB of data distributed among 5 servers interconnected via up to 10-GigE links.
The system has been dimensioned to be equivalent to the size of a medium WLCG Tier 2 center to provide commonly exploitable...
Mr
Kazuhiro Terao
(MIT)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The Double Chooz reactor anti-neutrino experiment has developed an automated system for data streaming from the detector site to the different nodes of data analysis in Europe, Japan and the USA. The system both propagates and triggers the processing of data as it goes through low-level data analysis. All operations (propagation and processing) are tracked file-wise in real time using a DB (MySQL...
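File-wise state tracking of this kind can be illustrated with a minimal, hedged sketch — here an in-memory SQLite database stands in for MySQL, and the table layout, status values and `mark` helper are ours, not the actual Double Chooz schema:

```python
import sqlite3

# In-memory stand-in for the bookkeeping DB; schema is illustrative only.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE files (
    name   TEXT PRIMARY KEY,
    site   TEXT,            -- current location of the file
    status TEXT             -- e.g. 'received', 'transferring', 'processed'
)""")

def mark(name, site, status):
    """Record (or update) the per-file state as it moves through the chain."""
    db.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?)",
               (name, site, status))

# A file is registered at the detector site, then updated after processing.
mark("run0001.dat", "detector", "received")
mark("run0001.dat", "europe-node", "processed")
row = db.execute("SELECT site, status FROM files WHERE name=?",
                 ("run0001.dat",)).fetchone()
print(row)  # ('europe-node', 'processed')
```

Keying on the file name makes every propagation or processing step an idempotent upsert, which is what allows the state to be queried in real time.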
Dr
Scott Teige
(Indiana University)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
The Open Science Grid (OSG) Operations Team operates a distributed set of services and
tools that enable the utilization of the OSG by several HEP projects. Without these
services, users of the OSG would not be able to run jobs, locate resources, obtain
information about the status of systems or generally use the OSG. For this reason these
services must be highly available. This paper...
Dr
Santiago Gonzalez De La Hoz
(Universidad de Valencia (ES))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Originally the ATLAS computing model assumed that the Tier-2s of each of the 10
clouds collectively keep on disk at least one copy of all "active" AOD and DPD
datasets. The evolution of the ATLAS computing and data models requires changes in
the ATLAS Tier-2 policy for data replication, dynamic data caching and remote
data access.
Tier2 operations take place completely asynchronously with respect...
Mr
Stephan Zimmer
(OKC/ Stockholm University, on behalf the Fermi-LAT Collaboration)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The Data Handling Pipeline ("Pipeline") has been developed for the Fermi Gamma-Ray Space Telescope (Fermi) Large Area Telescope (LAT) which launched in June 2008. Since then it has been in use to completely automate the production of data quality monitoring quantities, reconstruction and routine analysis of all data received from the satellite and to deliver science products to the...
Jos Van Wezel
(KIT - Karlsruhe Institute of Technology (DE))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Resources of large computer centers used in physics computing today are optimised for the WLCG framework and reflect the typical data access footprint of reconstruction and analysis. A traditional Tier-1 centre like GridKa at KIT comprises thousands of hosts and many petabytes of disk and tape storage that are used mostly by a single community. The required size as well as the intrinsic...
Luca dell'Agnello
(INFN)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
INFN-CNAF is the central computing facility of INFN: it is the Italian Tier-1 for the experiments at the LHC, but also one of the main Italian computing facilities for several other experiments such as BABAR, CDF, SuperB, Virgo, Argo, AMS, Pamela, MAGIC, Auger etc.
Currently there is an installed CPU capacity of 100,000 HS06, a net disk capacity of 9 PB and an equivalent amount of tape storage...
Andrej Filipcic
(Jozef Stefan Institute (SI))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The distributed NDGF Tier-1 and associated Nordugrid clusters are well integrated into the ATLAS computing model but follow a slightly different paradigm than other ATLAS resources. The current strategy does not divide the sites as in the commonly used hierarchical model, but rather treats them as a single storage endpoint and a pool of distributed computing nodes. The next generation ARC...
Dr
Tony Wildish
(Princeton University (US))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
PhEDEx is the data-transfer management solution written by CMS. It consists of agents running at each site, a website for presentation of information, and a web-based data-service for scripted access to information.
The website allows users to monitor the progress of data-transfers, the status of site agents and links between sites, and the overall status and behaviour of everything about...
Lionel Cons
(CERN),
Massimo Paladin
(Universita degli Studi di Udine)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Messaging is seen as an attractive mechanism to simplify and extend several portions of the Grid middleware, from low level monitoring to experiments dashboards. The messaging service currently used by WLCG is operated by EGI and consists of four tightly coupled brokers running ActiveMQ and designed to host the Grid operational tools such as SAM.
This service is successfully being used by...
Daniele Andreotti
(Universita e INFN (IT)),
Gianni Dalla Torre
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The WNoDeS software framework (http://web.infn.it/wnodes) uses virtualization technologies to provide access to a common pool of dynamically allocated computing resources. WNoDeS can process batch and interactive requests, in local, Grid and Cloud environments.
A problem of resource allocation in Cloud environments is the time it takes to actually allocate the resource and make it...
Dr
Stuart Wakefield
(Imperial College London)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Parallel
We present the development of and first experience with a new component (termed WorkQueue) in the CMS workload management system. This component provides a link between a global request system (Request Manager) and agents (WMAgents) which process requests at compute and storage resources (known as sites). These requests typically consist of the creation or processing of a data sample (possibly...
Alessandra Forti
(University of Manchester (GB))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
In this paper we will describe primarily the experience of going through an EU procurement. We will describe what a PQQ (Pre-Qualification Questionnaire) is and some of the requirements for vendors, such as ITIL and PRINCE2 project management qualifications. We will describe how the technical part was written, including requirements from the main users and the university logistics requirements, to...
Georgiana Lavinia Darlea
(Polytechnic University of Bucharest (RO))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
In the ATLAS experiment the collection, processing, selection and conveyance of event data from the detector front-end electronics to mass storage is performed by the ATLAS online farm consisting of more than 3000 PCs with various characteristics. To assure the correct and optimal working conditions the whole online system must be constantly monitored. The monitoring system should be able to...
José Flix
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The CMS experiment has adopted a computing system where resources are distributed worldwide in more than 50 sites. The operation of the system requires a stable and reliable behavior of the underlying infrastructure. CMS has established procedures to extensively test all relevant aspects of a site and their capability to sustain the various CMS computing workflows at the required scale. The...
Dr
Peter Kreuzer
(RWTH Aachen)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
In the large LHC experiments the majority of computing resources are provided by the participating countries. These resource pledges account for more than three quarters of the total available computing. The experiments are asked to give indications of their requests three years in advance and to evolve these as the details and constraints become clearer. In this presentation we will discuss...
Alessandra Forti
(University of Manchester (GB))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
In this paper we will present the efforts carried out in the UK to fix the WAN transfer problems highlighted by the ATLAS sonar tests. We will present the work done at site level, the monitoring tools used locally on the machines (ifstat, tcpdump, netstat...), between sites (iperf), and monitoring at the FTS level. We will describe the effort to set up a mini-mesh to simplify the sonar tests setup...
Georgiana Lavinia Darlea
(Polytechnic University of Bucharest (RO))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
The ATLAS Online farm is a non-homogeneous cluster of more than 3000 PCs which run the data acquisition, trigger and control of the ATLAS detector. The systems are configured and monitored by a combination of open-source tools, such as Quattor and Nagios, and tools developed in-house, such as ConfDB.
We report on the ongoing introduction of new provisioning and configuration tools, Puppet...
Anders Waananen
(Niels Bohr Institute)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
Modern HEP-related calculations have traditionally been beyond the capabilities of donated desktop machines, particularly because of the complex deployment of the needed software.
The popularization of efficient virtual machine technology and in particular the CernVM appliance, that allows for only the needed subset of the ATLAS software environment to be dynamically downloaded, has made such...
Hassen Riahi
(Universita e INFN (IT))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Data storage and access are key to CPU-intensive and data-intensive high-performance Grid computing. Hadoop is an open-source data processing framework that includes a fault-tolerant and scalable distributed data processing model and execution environment, named MapReduce, and a distributed file system, named the Hadoop Distributed File System (HDFS).
HDFS was deployed and tested within...
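The MapReduce model named above can be illustrated with a minimal, hedged sketch in plain Python (this is not the Hadoop API; the phase functions and the word-count example are ours):

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the user-supplied mapper to every input record,
    yielding intermediate (key, value) pairs."""
    for record in records:
        yield from mapper(record)

def reduce_phase(pairs, reducer):
    """Group intermediate pairs by key (the 'shuffle' step),
    then apply the reducer to each group."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# The classic word-count example over lines of text.
lines = ["hadoop hdfs", "hdfs mapreduce hadoop"]
pairs = map_phase(lines, lambda line: ((w, 1) for w in line.split()))
counts = reduce_phase(pairs, lambda key, values: sum(values))
print(counts)  # {'hadoop': 2, 'hdfs': 2, 'mapreduce': 1}
```

Fault tolerance and scalability in the real framework come from running many such map and reduce tasks independently over HDFS blocks; the sketch only shows the data-flow contract.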
Dr
Dimitri Bourilkov
(University of Florida (US))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
We describe the work on creating system images of Lustre virtual clients in the ExTENCI project, using several virtual technologies (KVM, XEN, VMware). These virtual machines can be built at several levels, from a basic Linux installation (we use Scientific Linux 5 as an example), adding a Lustre client with Kerberos authentication, and up to complete clients including local or distributed...
Andrea Dotti
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
In this paper we present the Geant4 validation and testing suite.
The application is used to test any new Geant4 release. The simulation of a particularly demanding use-case (High Energy Physics calorimeters) is tested with different physics parameters.
The suite is integrated with a job submission system that allows for the generation of high statistics data-sets on distributed resources....
Andreas Gellrich
(DESY)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
Virtualization techniques have become a key topic in computing in recent years. In the Grid, discussions on the virtualization of worker nodes are most prominent. Currently, concepts for the provenance and sharing of images are under debate. The virtualization of Grid servers, though, is already a common and successful practice.
At DESY, one of the largest WLCG Tier-2 centres world-wide and...
William Strecker-Kellogg
(Brookhaven National Lab)
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
In this presentation we will address the development of a prototype virtualized worker node cluster, using Scientific Linux 6.x as a base
OS, KVM for virtualization, and the Condor batch software to manage virtual machines. The discussion provides details on our experiences
with building, configuring, and deploying the various components from bare metal, including the base OS, the...
Mikalai Kutouski
(Joint Inst. for Nuclear Research (JINR))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
The current ATLAS Tier3 infrastructure consists of a variety of sites of different sizes and with a mix of local resource management systems (LRMS) and mass storage system (MSS) implementations. The Tier3 monitoring suite, having been developed in order to satisfy the needs of Tier3 site administrators and to aggregate Tier3 monitoring information on the global VO level, needs to be validated...
Alejandro Alvarez Ayllon
(University of Cadiz),
Ricardo Brito Da Rocha
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The Disk Pool Manager (DPM) and LCG File Catalog (LFC) are two grid data management components currently used in production at more than 240 sites. Together with a set of grid client tools they give the users a unified view of their data, hiding most details concerning data location and access.
Recently we've put a lot of effort into developing a reliable and high-performance HTTP/WebDAV...
Ivano Giuseppe Talamo
(Universita e INFN, Roma I (IT))
22/05/2012, 13:30
Computer Facilities, Production Grids and Networking (track 4)
Poster
The LCG (Worldwide LHC Computing Grid) is a grid-based hierarchical distributed computing facility, composed of more than 140 computing centers, organized in 4 tiers by size and services offered. Every site, although independent in many technical choices, has to provide services with a well-defined set of interfaces. For this reason, different LCG sites frequently need to manage very...
Danilo Dongiovanni
(INFN-CNAF, IGI)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
In production Grid infrastructures deploying EMI (European Middleware Initiative) middleware release, the Workload Management System (WMS) is the service responsible for the distribution of user tasks to the remote computing resources. Monitoring the reliability of this service, the job lifecycle and the workflow pattern generated by different user communities is an important and challenging...
Marco Cecchi
(Istituto Nazionale Fisica Nucleare (IT))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The EU-funded project EMI, now at its second year, aims at providing a unified, high quality middleware distribution for e-Science communities. Several aspects about workload management over diverse distributed computing environments are being challenged by the EMI roadmap: enabling seamless access to both HTC and HPC computing services, implementing a commonly agreed framework for the...
Lukasz Janyst
(CERN)
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
The XRootD server framework is becoming increasingly popular in the HEP community and beyond due to its simplicity, scalability and capability to construct distributed storage federations. With the growing adoption and new use cases emerging, it has become clear that the XRootD client code has reached a stage, where a significant refactoring of the code base is necessary to remove, by now,...
Matevz Tadel
(Univ. of California San Diego (US))
22/05/2012, 13:30
Distributed Processing and Analysis on Grids and Clouds (track 3)
Poster
During spring and summer 2011 CMS deployed Xrootd front-end servers on all US T1 and T2 sites. This allows for remote access to all experiment data and is used for user-analysis, visualization, running of jobs at T2s and T3s when data is not available at local sites, and as a fail-over mechanism for data-access in CMSSW jobs.
Monitoring of Xrootd infrastructure is implemented on three...
Dr
Thomas Mc Cauley
(Fermi National Accelerator Lab. (US))
24/05/2012, 13:30
Event Processing (track 2)
Poster
The line between native and web applications is becoming increasingly blurred as modern web browsers are becoming powerful platforms on which applications can be run. Such applications are trivial to install and are readily extensible and easy to use. In an educational setting, web applications permit a way to rapidly deploy tools in a highly-restrictive computing environment.
The I2U2...
Niko Neufeld
(CERN),
Vijay Kartik Subbiah
(CERN)
24/05/2012, 13:30
Online Computing (track 1)
Poster
This contribution describes the design and development of a fully software-based Online test-bench for LHCb. The current “Full Experiment System Test” (FEST) is a programmable data injector with a test setup that runs using a simulated data acquisition (DAQ) chain. FEST is heavily used in LHCb by different groups, and thus the motivation for complete software emulation of the test-bench is to...
Mr
Gero Müller
(III. Physikalisches Institut A, RWTH Aachen University, Germany)
24/05/2012, 13:30
Event Processing (track 2)
Poster
To understand in detail cosmic magnetic fields and sources of Ultra High Energy Cosmic Rays (UHECRs) we have developed a Monte Carlo simulation for galactic and extragalactic propagation.
In our approach we identify three different propagation regimes for UHECRs, the Milky Way, the local universe out to 110 Mpc, and the distant universe.
For deflections caused by the Galactic magnetic field...
Mr
Matej Batic
(Jozef Stefan Institute)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The Statistical Toolkit is an open source system specialized in the statistical comparison of distributions. It addresses requirements common to different experimental domains, such as simulation validation (e.g. comparison of experimental and simulated distributions), regression testing in software development and detector performance monitoring.
The first development cycles concerned the...
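A statistical comparison of two distributions of the kind the Statistical Toolkit performs can be sketched, under our own assumptions, with the two-sample Kolmogorov-Smirnov statistic in plain Python (this is not the Toolkit's API; function and variable names are illustrative):

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the two empirical cumulative distribution functions (ECDFs)."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, x):
        # Fraction of entries <= x.
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    # The ECDF difference can only change at observed values.
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in sorted(set(a) | set(b)))

# E.g. comparing a "simulated" to an "experimental" sample.
print(ks_statistic([1.0, 2.0, 2.5, 3.0], [1.0, 2.0, 2.5, 3.0]))  # 0.0
print(ks_statistic([0.0, 0.0], [1.0, 1.0]))                      # 1.0
```

The statistic is 0 for identical samples and 1 for fully separated ones; in practice it would be converted to a p-value before deciding whether a simulated distribution is compatible with data.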
Dr
Isidro Gonzalez Caballero
(Universidad de Oviedo (ES))
24/05/2012, 13:30
Event Processing (track 2)
Poster
The analysis of the complex LHC data usually follows a standard path that aims at minimizing not only the amount of data but also the number of observables used. After a number of steps of slimming and skimming the data, the remaining few terabytes of ROOT files hold a selection of the events and a flat structure for the variables needed that can be more easily inspected and traversed in the...
Chris Bee
(Universite d'Aix - Marseille II (FR))
24/05/2012, 13:30
Online Computing (track 1)
Poster
The parameters of the beam spot produced by the LHC in the ATLAS interaction region are computed online using the ATLAS High Level Trigger (HLT) system. The high rate of triggered events is exploited to make precise measurements of the position, size and orientation of the luminous region in near real-time, as these parameters change significantly even during a single data-taking run. We...
Mario Lassnig
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The ATLAS Distributed Data Management system requires accounting of its contents at the metadata layer. This presents a hard problem
due to the large scale of the system and the high rate of concurrent modifications of data. The system must efficiently account for more than 80 PB of disk and tape that store upwards of
500 million files across 100 sites globally.
In this work a generic accounting...
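The core of accounting at the metadata layer is rolling per-file metadata up into per-category totals; a minimal, hedged sketch (the record fields and the site/datatype breakdown are illustrative, not the DDM schema):

```python
from collections import defaultdict

# Each record is per-file metadata as a (site, datatype, size_bytes) tuple.
file_records = [
    ("CERN-PROD", "AOD", 2_000_000_000),
    ("CERN-PROD", "RAW", 5_000_000_000),
    ("BNL-OSG2",  "AOD", 1_500_000_000),
]

def account(records):
    """Roll per-file sizes up into (site, datatype) byte totals and file counts."""
    totals = defaultdict(lambda: {"bytes": 0, "files": 0})
    for site, dtype, size in records:
        bucket = totals[(site, dtype)]
        bucket["bytes"] += size
        bucket["files"] += 1
    return dict(totals)

summary = account(file_records)
print(summary[("CERN-PROD", "AOD")])  # {'bytes': 2000000000, 'files': 1}
```

At the scale quoted above the difficulty is not the aggregation itself but keeping such totals consistent under a high rate of concurrent inserts and deletions, which is what the abstract's generic accounting approach addresses.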
Luis Ignacio Lopera Gonzalez
(Universidad de los Andes (CO))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
Since 2009 when the LHC came back to active service, the Data Quality Monitoring (DQM) team was faced with the need to homogenize and automate operations across all the different environments within which DQM is used for data certification.
The main goal of automation is to reduce operator intervention to the minimum possible level, especially in the area of DQM file management, where...
Hee Seo
(Hanyang Univ.)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
Physics data libraries play an important role in Monte Carlo simulation systems: they provide fundamental atomic and nuclear parameters, and tabulations of basic physics quantities (cross sections, correction factors, secondary particle spectra etc.) for particle transport.
This report summarizes recent efforts for the improvement of the accuracy of physics data libraries, concerning two...
Ombretta Pinazza
(Universita e INFN (IT))
24/05/2012, 13:30
Online Computing (track 1)
Poster
ALICE is one of the four main experiments at the CERN Large Hadron Collider (LHC) in Geneva.
The ALICE Detector Control System (DCS) is responsible for the operation and monitoring of the 18 detectors of the experiment and of the central systems, and for collecting and managing alarms, data and commands. Furthermore, it is the central tool to monitor and verify the beam mode and conditions in order...
Joerg Behr
(Deutsches Elektronen-Synchrotron (DE))
24/05/2012, 13:30
Event Processing (track 2)
Poster
The CMS all-silicon tracker consists of 16588 modules. Therefore its alignment procedures require sophisticated algorithms. Advanced tools of computing, tracking and data analysis have been deployed for reaching the targeted performance. Ultimate local precision is now achieved by the determination of sensor curvatures, challenging the algorithms to determine about 200k parameters...
Matthew Littlefield
(Brunel University)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The Mice Analysis User Software (MAUS) for the Muon Ionisation Cooling Experiment (MICE) is a new simulation and analysis framework based
on best-practice software design methodologies. It replaces G4MICE as it offers new functionality and incorporates an improved design structure. A
new and effective control and management system has been created for handling the simulation geometry within...
Luca dell'Agnello
(INFN-CNAF)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
An automated virtual test environment is a way to improve testing, validation and verification activities when several deployment scenarios must be considered. Such a solution has been designed and developed at INFN-CNAF to improve the software development life cycle and to optimize the
deployment of a new software release (sometimes delayed by the difficulties
met during the installation and...
Dr
John Harvey
(CERN)
24/05/2012, 13:30
Event Processing (track 2)
Poster
The PH/SFT group at CERN is responsible for developing, releasing and deploying some of the software packages used in the data processing systems of CERN experiments, in particular those at the LHC. They include ROOT, GEANT4, CernVM, Generator Services, and Multi-core R&D (http://sftweb.cern.ch/). We have already submitted a number of abstracts for oral presentations at the conference. Here we...
Dr
Jack Cranshaw
(Argonne National Laboratory (US))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The ATLAS event-level metadata infrastructure supports applications that range from data quality monitoring, anomaly detection, and fast physics monitoring to event-level selection and navigation to file-resident event data at any processing stage, from raw through analysis object data, in globally distributed analysis. A central component of the infrastructure is a distributed TAG database,...
Markus Frank
(CERN)
24/05/2012, 13:30
Collaborative tools (track 6)
Poster
The LHCb collaboration consists of roughly 700 physicists from 52 institutes and universities. Most of the collaborating physicists - including subdetector experts - are not permanently based at CERN. This paper describes the architecture used to publish data internal to the LHCb experiment control- and data acquisition system to the world wide web. Collaborators can access the online...
Dr
Domenico Giordano
(CERN)
24/05/2012, 13:30
Event Processing (track 2)
Poster
The conversion of photons into electron-positron pairs in the detector material is a nuisance in the event reconstruction of high energy physics experiments, since the measurement of the electromagnetic component of the interaction products is degraded. Nonetheless, this unavoidable detector effect can also be extremely useful. The reconstruction of photon conversions can be used to probe the...
Jochen Meyer
(Bayerische Julius Max. Universitaet Wuerzburg (DE))
24/05/2012, 13:30
Event Processing (track 2)
Poster
Accurate and detailed descriptions of the HEP detectors are turning out to be crucial elements of the software chains used for simulation, visualization and reconstruction programs: for this reason, it is of paramount importance to have at one's disposal and to deploy generic detector description tools which allow for precise modeling, visualization, visual debugging and interactivity, and which can be...
Daniela Remenska
(NIKHEF (NL))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
DIRAC is the Grid solution designed to support LHCb production activities as well as user data analysis. Based on a service-oriented architecture, DIRAC consists of many cooperating distributed services and agents delivering the workload to the Grid resources. Services accept requests from agents and running jobs, while agents run as light-weight components, fulfilling specific goals. Services...
Julia Grebenyuk
(DESY)
24/05/2012, 13:30
Event Processing (track 2)
Poster
A many-parameter fit to extract the proton structure functions from the Neutral Current deep-inelastic scattering cross sections, measured from the data collected at the HERA ep collider with the ZEUS detector, will be presented. The structure functions F_2 and F_L are extracted as a function of Bjorken-x in bins of virtuality Q2. The fit is performed with the Bayesian Analysis Toolkit (BAT)...
Gennadiy Lukhanin
(Fermi National Accelerator Lab. (US)),
Martin Frank
(UVA)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
In the NOvA experiment, the Detector Controls System (DCS) provides a method for controlling and monitoring important detector hardware and environmental parameters. It is essential for operating the detector and is required to have access to roughly 370,000 independent programmable channels via more than 11,600 physical devices.
In this paper, we demonstrate an application of Control...
Prof.
Swain John
(Northeastern University)
24/05/2012, 13:30
Online Computing (track 1)
Poster
Modern particle physics experiments use short pieces of
code called "triggers" in order to make rapid decisions about whether incoming
data represents potentially interesting physics or not. Such decisions are
irreversible, and while it is extremely important that they are
made correctly, little use has been made in the community of formal verification
methodology.
The goal of this...
Andrea Bocci
(CERN)
24/05/2012, 13:30
Online Computing (track 1)
Poster
The CMS experiment has been designed with a 2-level trigger system: the Level 1 Trigger, implemented using FPGA and custom ASIC technology, and the High Level Trigger (HLT), implemented running a streamlined version of the CMS offline reconstruction software on a cluster of commercial rack-mounted computers, comprising thousands of CPUs.
The design of a software trigger system requires a...
Pauline Bernat
(University College London (UK))
24/05/2012, 13:30
Online Computing (track 1)
Poster
The rising instantaneous luminosity of the LHC poses an increasing challenge to the pattern recognition algorithms for track reconstruction at the ATLAS Inner Detector Trigger. We will present the performance of these algorithms in terms of signal efficiency, fake tracks and execution time, as a function of the number of proton-proton collisions per bunch-crossing, in 2011 data and in...
Luiz Fernando Cagiano Parodi De Frias
(Univ. Federal do Rio de Janeiro (BR))
24/05/2012, 13:30
Collaborative tools (track 6)
Poster
In 2010, the LHC continually produced 7 TeV and heavy-ion collisions, generating a huge amount of data, which was analyzed and reported in several studies. Since then, physicists have been bringing out papers and conference notes announcing results and achievements. During 2010, 37 papers and 102 conference notes were published, and up to September 2011 there are already...
Steven Andrew Farrell
(Department of Physics)
24/05/2012, 13:30
Event Processing (track 2)
Poster
The ATLAS data quality software infrastructure provides tools for prompt investigation of and feedback on collected data and propagation of these results to analysis users. Both manual and automatic inputs are used in this system. In 2011, we upgraded our framework to record
all issues affecting the quality of the data in a manner which allows users to extract as much information (of the...
Grigori Rybkin
(Universite de Paris-Sud 11 (FR))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
Software packaging is an indispensable part of the build process and a prerequisite for deployment. The full ATLAS software stack consists of the TDAQ, HLT, and Offline software. These software groups depend on some 80 external software packages. We present PackDist, a package of tools developed and used to package all this software except for the TDAQ project. PackDist is based on and driven by CMT, the ATLAS software...
Steven Goldfarb
(University of Michigan (US))
24/05/2012, 13:30
Collaborative tools (track 6)
Poster
The newfound ability of Social Media to transform public communication back to a conversational nature provides HEP with a powerful tool for Outreach and Communication. By far, the most effective component of nearly any visit or public event is the fact that the students, teachers, media, and members of the public have a chance to meet and converse with real scientists.
While more than...
Jochen Ulrich
(Johann-Wolfgang-Goethe Univ. (DE))
24/05/2012, 13:30
Online Computing (track 1)
Poster
The High-Level-Trigger (HLT) cluster of the ALICE experiment is a computer cluster with about 200 nodes and 20 infrastructure machines. In its current state, the cluster consists of nearly 10 different configurations of nodes in terms of installed hardware, software and network structure. In such a heterogeneous environment with a distributed application, information about the actual...
Pierrick Hanlet
(Illinois Institute of Technology)
24/05/2012, 13:30
Online Computing (track 1)
Poster
The Muon Ionization Cooling Experiment (MICE) is a demonstration experiment to prove the feasibility of cooling a beam of muons for use in a Neutrino Factory and/or Muon Collider. The MICE cooling channel is a section of a modified Study II cooling channel which will provide a 10% reduction in beam emittance. In order to ensure a reliable measurement, MICE will measure the beam emittance...
Alexander Oh
(University of Manchester (GB))
24/05/2012, 13:30
Online Computing (track 1)
Poster
The online event selection is crucial to reject most of the events containing uninteresting background collisions while preserving as much as possible the interesting physical signals. The b-jet selection is part of the trigger strategy of the ATLAS experiment and a set of dedicated triggers is in place from the beginning of the 2011 data-taking period and is contributing to keep the total...
Marius Tudor Morar
(University of Manchester (GB))
24/05/2012, 13:30
Online Computing (track 1)
Poster
The ATLAS High Level Trigger (HLT) is organized in two trigger levels running different selection algorithms on heterogeneous farms composed of off-the-shelf processing units. The processing units have varying computing power and can be integrated using diverse network connectivity. The ATLAS working conditions are changing mainly due to the constant increase of the LHC instantaneous...
Dr
Daniel Kollar
(Max-Planck-Institut fuer Physik, Munich)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The main goals of data analysis are to infer the parameters of
models from data, to draw conclusions on the validity of models,
and to compare their predictions, allowing the most appropriate
model to be selected.
The Bayesian Analysis Toolkit, BAT, is a tool developed to evaluate
the posterior probability distribution for models and their
parameters. It is centered around Bayes' Theorem and...
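The theorem at the heart of the toolkit is the standard relation between posterior, likelihood and prior (generic notation, not taken from the abstract):

```latex
P(\vec{\lambda} \mid D, M) \;=\; \frac{P(D \mid \vec{\lambda}, M)\, P_0(\vec{\lambda} \mid M)}{P(D \mid M)}
```

Here the posterior for the parameters λ of model M given data D is the likelihood times the prior, normalized by the evidence P(D | M); comparing evidences is what allows different models to be ranked.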
Prof.
Kihyeon Cho
(KISTI)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
In order to search for new physics beyond the standard model, the next-generation B-factory experiment, Belle II, will collect a huge data sample that is a challenge for computing systems. The Belle II experiment, which should commence data collection in 2015, expects data rates 50 times higher than those of Belle. In order to handle this amount of data, we need a new data handling system...
Soohyung Lee
(Korea University)
24/05/2012, 13:30
Online Computing (track 1)
Poster
A next-generation B-factory experiment, Belle II, is now being constructed at KEK in Japan. The upgraded accelerator SuperKEKB is designed to have a maximum luminosity of 8 × 10^35 cm^−2 s^−1, a factor of 40 higher than the current world record. As a consequence, the Belle II detector yields a data stream with an event size of ~1 MB at a Level 1 rate of 30 kHz.
The Belle II High Level...
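For scale, the quoted event size and Level 1 rate determine the raw throughput entering the high level trigger; a back-of-the-envelope sketch (the figures are the ones quoted in the abstract, the conversion is only illustrative):

```python
# Back-of-the-envelope data rate from the figures quoted above:
# ~1 MB average event size at a 30 kHz Level 1 accept rate.
event_size_mb = 1.0        # average event size, MB
level1_rate_hz = 30_000    # Level 1 trigger rate, Hz

throughput_mb_s = event_size_mb * level1_rate_hz   # MB/s into the HLT
print(f"~{throughput_mb_s / 1000:.0f} GB/s into the Belle II HLT")
```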
Benedikt Hegner
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
Bug tracking is a process which comprises activities of reporting, documenting, reviewing, planning, and fixing software bugs. While there exist many studies on the usage of bug tracking tools and procedures in open source software, the situation in high energy physics has never been looked at in a systematic way. In our study we have compared and analyzed several scientific and...
Karol Hennessy
(Liverpool)
24/05/2012, 13:30
Event Processing (track 2)
Poster
The LHCb experiment is dedicated to searching for New Physics effects in the
heavy flavour sector, precise measurements of CP violation and rare heavy
meson decays. Precise tracking and vertexing around the interaction point
is crucial in achieving these physics goals.
The LHCb VELO (VErtex LOcator) silicon micro-strip detector is the highest
precision vertex detector at the LHC and is...
Dr
Shengsen Sun
(Institute of High Energy Physics, Chinese Academy of Sciences)
24/05/2012, 13:30
Event Processing (track 2)
Poster
The BESIII TOF detector system, based on plastic scintillation counters, consists of a double-layer barrel and two single-layer end caps. With the time calibration, the double-layer barrel TOF achieves a time resolution of 78 ps for electrons, and the end caps about 110 ps for muons. The attenuation length and effective velocity calibrations and the TOF reconstruction are also described. The Kalman filter...
Marek Domaracky
(CERN)
24/05/2012, 13:30
Collaborative tools (track 6)
Poster
Over the last few years, we have seen the broadcast industry moving to mobile devices and to the broadband Internet delivering HD quality. To keep up with the trends, we deployed a new streaming infrastructure. We are now delivering live and on-demand video to all major platforms like Windows, Linux, Mac, iOS and Android running on PC, Smart Phone, Tablet or TV.
To optimize the viewing...
Mariusz Piorkowski
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
Oracle-based database applications underpin many key aspects of operations for both the LHC accelerator and the LHC experiments. In addition to overall performance, predictability of response is a key requirement to ensure smooth operations—and delivering predictability requires understanding the applications from the ground up. Fortunately, the Oracle database management system provides...
Frank-Dieter Gaede
(Deutsches Elektronen-Synchrotron (DE))
24/05/2012, 13:30
Event Processing (track 2)
Poster
ILD is a proposed detector concept for a future linear collider that envisages a Time Projection Chamber (TPC) as the central tracking detector. The ILD TPC will have a large number of voxels whose dimensions are small compared to the typical distances between charged-particle tracks. This allows the application of simple nearest-neighbour-type clustering algorithms to find clean...
Evaldas Juska
(Fermi National Accelerator Lab. (US))
24/05/2012, 13:30
Online Computing (track 1)
Poster
Cathode strip chambers (CSC) compose the endcap muon system of the CMS experiment at the LHC. Two years of data taking have proven that various online systems like Detector Control System (DCS), Data Quality Monitoring (DQM), Trigger, Data Acquisition (DAQ) and other specialized applications are doing their task very well. But the need for better integration between these systems is starting...
Kaori Maeshima
(Fermi National Accelerator Lab. (US))
24/05/2012, 13:30
Online Computing (track 1)
Poster
In operating a complex high energy physics experiment such as CMS, two of the important issues are to record high quality data as efficiently as possible and, correspondingly, to have well validated and certified data in a timely manner for physics analyses. Integrated and user-friendly monitoring systems and coherent information flow play an important role to accomplish this. The CMS...
Giacomo Sguazzoni
(Universita e INFN (IT))
24/05/2012, 13:30
Event Processing (track 2)
Poster
The CMS tracking code is organized in several levels, known as 'iterative steps', each optimized to reconstruct a class of particle trajectories, such as those of particles originating from the primary vertex or displaced tracks from particles resulting from secondary vertices. Each iterative step consists of seeding, pattern recognition and fitting by a Kalman filter, and a final filtering and...
Sunanda Banerjee
(Saha Institute of Nuclear Physics (IN))
24/05/2012, 13:30
Event Processing (track 2)
Poster
The CMS simulation, based on the Geant4 toolkit, has been operational within the new CMS software framework for more than four years. The description of the detector including the forward regions has been completed and detailed investigation of detector positioning and material budget has been carried out using collision data. Detailed modelling of detector noise has been performed and...
Dirk Hufnagel
(Fermi National Accelerator Lab. (US))
24/05/2012, 13:30
Event Processing (track 2)
Poster
The Tier-0 processing system is the initial stage of the multi-tiered computing system of CMS. It is responsible for the first processing steps of data from the CMS Experiment at CERN. This talk covers the complete overhaul (rewrite) of the system for the 2012 run, to bring it into line with the new CMS Workload Management system, improving scalability and maintainability for the next few years.
Elizabeth Gallas
(University of Oxford (GB))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
In the ATLAS experiment, database systems generally store the bulk of conditions and configuration data needed by event-wise reconstruction and analysis jobs. These systems can be relatively large stores of information, organized and indexed primarily to store all information required for system-specific use cases and efficiently deliver
the required information to event-based...
Andrea Bocci
(CERN)
24/05/2012, 13:30
Online Computing (track 1)
Poster
The CMS experiment has been designed with a 2-level trigger system: the Level 1 Trigger, implemented using FPGA and custom ASIC technology, and the High Level Trigger (HLT), implemented running a streamlined version of the CMS offline reconstruction software on a cluster of commercial rack-mounted computers, comprising thousands of CPUs.
The CMS software is written mostly in C++, using...
Dr
Martin Purschke
(BROOKHAVEN NATIONAL LABORATORY)
24/05/2012, 13:30
Online Computing (track 1)
Poster
The PHENIX detector system at the Relativistic Heavy Ion Collider (RHIC) was one of the first experiments to reach "LHC-era" data rates, in excess of 500 MB/s of compressed data, in 2004. In step with new detectors and increasing event sizes and rates, the data logging capability has since grown to about 1500 MB/s.
We will explain the strategies we employ to cope with the data volumes...
Jorn Adamczewski-Musch
(GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
24/05/2012, 13:30
Online Computing (track 1)
Poster
The Compressed Baryonic Matter (CBM) experiment is intended to run at the FAIR facility that is currently being built at GSI in Darmstadt, Germany. For testing future CBM detector and readout electronics prototypes, several test beamtimes have been performed at different locations, such as GSI, COSY, and the CERN PS.
The DAQ software has to treat various data inputs, e.g. standard VME modules...
Manuel Giffels
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The Data Bookkeeping Service 3 (DBS 3) provides an improved event data catalog for Monte Carlo and recorded data of the CMS (Compact Muon Solenoid) experiment at the Large Hadron Collider (LHC). It provides the necessary information used for tracking datasets, like data processing history, files and runs associated with a given dataset on a scale of about 10^5 datasets and more than 10^7...
Matthias Richter
(University of Oslo (NO))
24/05/2012, 13:30
Online Computing (track 1)
Poster
High resolution detectors in high energy nuclear physics deliver a huge
amount of data which is often a challenge for the data acquisition and
mass storage. Lossless compression techniques on the level of the raw
data can provide compression ratios up to a factor of 2. In ALICE, an
effective compression factor of >5 for the Time Projection Chamber (TPC)
is needed to reach an overall...
Charilaos Tsarouchas
(National Technical Univ. of Athens (GR))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. The Detector Control System (DCS) of ATLAS is responsible for the supervision of the detector equipment, the reading of operational parameters, the propagation of alarms and the archiving of important operational data in a relational database. DCS Data Viewer (DDV) is an application that provides access...
Yu Nakahama Higuchi
(CERN)
24/05/2012, 13:30
Online Computing (track 1)
Poster
The LHC, at design capacity, has a bunch-crossing rate of 40 MHz, whereas the ATLAS detector has an average recording rate of about 300 Hz. To reduce the rate of events while still maintaining a high efficiency for selecting rare events such as Higgs boson decays, a three-level trigger system is used in ATLAS. Events are selected based on physics signatures, such as events with energetic leptons,...
Mantas Stankevicius
(Vilnius University (LT))
24/05/2012, 13:30
Collaborative tools (track 6)
Poster
CMSSW (CMS SoftWare) is the overall collection of software and services needed by the simulation, calibration and alignment, and reconstruction modules that process data so that physicists can perform their analyses. It is a long-term project with a large amount of source code. In large-scale and complex projects it is important to have as up-to-date and automated software documentation as...
Andrea Petrucci
(CERN)
24/05/2012, 13:30
Online Computing (track 1)
Poster
The Error and Alarm system for the data acquisition of the Compact Muon Solenoid (CMS) at CERN has been successfully used for the physics runs at the Large Hadron Collider (LHC) during the first three years of activity. Error and alarm processing entails the notification, collection, storage and visualization of all exceptional conditions occurring in the highly distributed CMS online system, using a...
Ms
Chang Pi-Jung
(Kansas University)
24/05/2012, 13:30
Online Computing (track 1)
Poster
The Double Chooz experiment will measure reactor antineutrino flux from two detectors with a relative normalization uncertainty less than 0.6%. The Double Chooz physical environment monitoring system records conditions of the experiment's environment to ensure the stability of the active volume and readout electronics. The system monitors temperatures in the detector liquids, temperatures and...
Tomasz Wolak
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The development and distribution of Grid middleware software projects, being large, complex, distributed systems, require a sizeable computing infrastructure at each stage of the software process: for instance, pools of machines for building and testing on several platforms. Software testing and the possibility of implementing realistic scenarios for the verification of grid middleware are a...
Semen Lebedev
(GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
24/05/2012, 13:30
Event Processing (track 2)
Poster
The Compressed Baryonic Matter (CBM) experiment at the future FAIR facility at Darmstadt will measure dileptons emitted from the hot and dense phase in heavy-ion collisions. In case of an electron measurement, a high purity of identified electrons is required in order to suppress the background. Electron identification in CBM will be performed by a Ring Imaging Cherenkov (RICH) detector and...
Dr
Dmitry Litvintsev
(Fermilab)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
Enstore is a mass storage system developed by Fermilab that provides distributed access to and management of data stored on tapes. It uses a namespace service, pnfs, developed by DESY to provide a filesystem-like
view of the stored data. Pnfs is a legacy product and is being replaced by a new implementation, called Chimera, which is also developed by DESY. The Chimera namespace offers multiple...
Igor Oya
(Institut für Physik, Humboldt-Universität zu Berlin, Newtonstrasse 15, D-12489 Berlin, Germany)
24/05/2012, 13:30
Online Computing (track 1)
Poster
CTA (Cherenkov Telescope Array) is one of the largest ground-based astronomy projects being pursued and will be the largest facility for ground-based gamma-ray observations ever built. CTA will consist of two arrays (one in the Northern hemisphere and one in the Southern hemisphere) composed of several different sizes of telescopes. A prototype for the Medium Size Telescope (MST) type of a...
Liam Duguid
(University of London (GB))
24/05/2012, 13:30
Online Computing (track 1)
Poster
The electron and photon triggers are among the most widely used triggers in ATLAS physics analyses. In 2011, the increasing luminosity and pile-up conditions demanded higher and higher thresholds and the use of tighter and tighter selections for the electron triggers. Optimizations were performed at all three levels of the ATLAS trigger system. At the high-level trigger (HLT), many variables...
John Haggerty
(Brookhaven National Laboratory)
24/05/2012, 13:30
Online Computing (track 1)
Poster
The architecture of the PHENIX data acquisition system, and how it has evolved over 12 years of operation, will be reviewed. Custom data acquisition hardware front-end modules embedded in the detector, operated in a largely inaccessible experimental hall, have been controlled and monitored, and a large software infrastructure has been developed around remote objects which are controlled from a...
Dr
Alexander Undrus
(Brookhaven National Laboratory (US))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. Over more than 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verification of patches to existing...
Alvaro Gonzalez Alvarez
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
In 2002, the first central CERN service for version control based on CVS was set up. Since then, three different services based on CVS and SVN have been launched and run in parallel; there are user requests for another service based on git. In order to ensure that the most demanded services are of high quality in terms of performance and reliability, services in less demand had to be shut...
Marius Tudor Morar
(University of Manchester (GB))
24/05/2012, 13:30
Online Computing (track 1)
Poster
The ATLAS experiment is observing proton-proton collisions delivered by the LHC accelerator at a centre-of-mass energy of 7 TeV. The ATLAS Trigger and Data Acquisition (TDAQ) system selects interesting events on-line in a three-level trigger system in order to store them at a budgeted rate of several hundred Hz, for an average event size of ~1.2 MB.
This paper focuses on the TDAQ...
Norman Anthony Graf
(SLAC National Accelerator Laboratory (US))
24/05/2012, 13:30
Event Processing (track 2)
Poster
Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in...
Wolfgang Lukas
(University of Innsbruck (AT))
24/05/2012, 13:30
Event Processing (track 2)
Poster
We present the ATLAS simulation packages ATLFAST-II and ISF.
ATLFAST-II is a sophisticated fast simulation that combines a parametrized simulation of the calorimeter system with full Geant4 simulation precision in the Inner Detector and Muon systems. This combination offers a relative speed increase of around a factor of ten compared to the standard ATLAS detector simulation and is being used to...
Rahmat Rahmat
(University of Mississippi (US))
24/05/2012, 13:30
Event Processing (track 2)
Poster
A framework for Fast Simulation of particle interactions in the CMS detector has been developed and implemented in the overall simulation, reconstruction and analysis framework of CMS. It produces data samples in the same format as the one used by the Geant4-based (henceforth Full) Simulation and Reconstruction chain; the output of the Fast Simulation of CMS can therefore be used in the...
Gennaro Tortone
(INFN Napoli)
24/05/2012, 13:30
Event Processing (track 2)
Poster
The FAZIA project groups together several institutions in Nuclear Physics,
which are working in the domain of heavy-ion induced reactions around and below
the Fermi energy. The aim of the project is to build a 4Pi array for charged particles,
with high granularity and good energy resolution, with A and Z identification capability
over the widest possible range.
It will use the...
Alfonso Boiano
(INFN)
24/05/2012, 13:30
Online Computing (track 1)
Poster
FAZIA stands for the Four Pi A and Z Identification Array. This is a
project which aims at building a new 4pi particle detector for
charged particles. It will operate in the domain of heavy-ion induced
reactions around the Fermi energy. It puts together several international
institutions in Nuclear Physics.
It is planned to be operating with both stable and radioactive nuclear
beams. A...
Ms
Heather Kelly
(SLAC National Accelerator Laboratory)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The Fermi Gamma-ray Observatory, including the Large Area Telescope (LAT), was launched June 11, 2008. We are a relatively small collaboration, with a maximum of 25 software developers in our heyday. Within the LAT collaboration we support Red Hat Linux and Windows, and are moving towards Mac OS as well, for offline simulation, reconstruction and analysis tools. Early on it was decided to use...
Elizabeth Gallas
(University of Oxford (GB))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The ATLAS Metadata Interface (“AMI”) was designed as a generic cataloguing system, and as such it has found many uses in the experiment including software release management, tracking of reconstructed event sizes and control of dataset nomenclature. In this paper we will discuss the primary use of AMI which is to provide a catalogue of datasets (file collections) which is searchable using...
Dinesh Ram
(Johann-Wolfgang-Goethe Univ. (DE))
24/05/2012, 13:30
Online Computing (track 1)
Poster
The ALICE High-Level Trigger (HLT) is a complex real-time system, whose primary objective is to scale down the data volume read out by the ALICE detectors to at most 4 GB/sec before being written to permanent storage. This can be achieved by using a combination of event filtering, selection of the physics regions of interest and data compression, based on detailed on-line event reconstruction....
Francisca Garay Walls
(University of Edinburgh (GB))
24/05/2012, 13:30
Event Processing (track 2)
Poster
An overview of the current status of electromagnetic physics (EM) of the Geant4 toolkit is presented. Recent improvements are focused on the performance of large scale production for LHC and on the precision of simulation results over a wide energy range. Significant efforts have been made to improve the accuracy and CPU speed for EM particle transport. New biasing options available for Geant4...
Mr
Laurent Garnier
(LAL-IN2P3-CNRS)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
New developments in visualization drivers in the Geant4 software toolkit.
Dr
Sebastien Binet
(LAL/IN2P3)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
Current HENP libraries and frameworks were written before multicore
systems became widely deployed and used.
From this environment, a 'single-thread' processing model naturally
emerged but the implicit assumptions it encouraged are greatly
impairing our abilities to scale in a multicore/manycore world.
Writing scalable code in C++ for multicore architectures, while
doable, is no...
Jacob Russell Howard
(University of Oxford (GB))
24/05/2012, 13:30
Online Computing (track 1)
Poster
One possible option for the ATLAS High-Level Trigger (HLT) upgrade for higher
LHC luminosity is to use GPU-accelerated event processing. In this talk we
discuss parallel data preparation and track finding algorithms specifically
designed to run on GPUs. We present a "client-server" solution for hybrid CPU/GPU
event reconstruction which allows for the simple and flexible integration of...
Dr
Andrea Valassi
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The CORAL software is widely used by the LHC experiments for storing and accessing data using relational database technologies. CORAL provides a C++ abstraction layer that supports data persistency for several backends and deployment models, including local access to SQLite files, direct client access to Oracle and MySQL servers, and read-only access to Oracle through the FroNTier/Squid and...
Dr
Giovanni Polese
(CERN)
24/05/2012, 13:30
Online Computing (track 1)
Poster
The CMS detector control system (DCS) is responsible for controlling and monitoring the detector status and for the operation of all CMS sub detectors and infrastructure. This is required to ensure safe and efficient data taking, so that high quality physics data can be recorded. The current system architecture is composed of more than 100 servers, in order to provide the required processing...
Takeo Higuchi
(KEK)
24/05/2012, 13:30
Online Computing (track 1)
Poster
We present a performance study of a high-speed RocketIO receiver card
implemented as a PCI Express device, intended for use in a future
luminosity-frontier HEP experiment.
To search for new physics beyond the Standard Model, the Belle II
experiment will start in 2015 at KEK, Japan. In Belle II, the
detector signals are digitized in or near the detector complex, and
the digitized signals...
Dr
Giuseppe Avolio
(University of California Irvine (US))
24/05/2012, 13:30
Online Computing (track 1)
Poster
The ATLAS experiment is operated by a highly distributed computing system which constantly produces a large amount of status information, used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm, consisting of about 2000...
Mr
Ivan BELYAEV
(ITEP/MOSCOW)
24/05/2012, 13:30
Event Processing (track 2)
Poster
A hybrid C++/Python environment built from standard components is being heavily and successfully used in LHCb, both for off-line physics analysis and for the High Level Trigger. The approach is based on the LoKi toolkit and the Bender analysis framework. A small set of highly configurable C++ components
allows the most frequent analysis tasks to be described, e.g. combining and...
Jonathan Bouchet
(Kent State University)
24/05/2012, 13:30
Event Processing (track 2)
Poster
Due to their production in the early stages of the collision, heavy-flavor particles are of interest for studying the properties of the matter created in heavy-ion collisions at RHIC.
Previous measurements of $D$ and $B$ mesons at RHIC [1, 2] using semi-leptonic probes show a suppression similar to that of light quarks, which contradicts theoretical models that include only gluon radiative energy loss...
Dr
Douglas Smith
(SLAC National Accelerator Lab.)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The BaBar high energy physics experiment acquired data from 1999 until 2008. Soon after the end of data taking, the effort to produce the final dataset started. This final dataset contains over 11×10^9 events in 1.6×10^6 files, totalling over a petabyte of storage. The Long Term Data Access (LTDA) project aims at the preservation of the BaBar data, analysis tools and documentation to ensure the...
Mr
Igor Mandrichenko
(Fermilab)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
Neutrino physics research is an important part of the FNAL scientific
program in the post-Tevatron era. Neutrino experiments are taking
advantage of the high beam intensity delivered by the FNAL accelerator
complex. These experiments share a common
beam infrastructure and require detailed information about the operation
of the beam to perform their measurements. We have designed and
implemented a...
Prof.
Ryosuke ITOH
(KEK)
24/05/2012, 13:30
Event Processing (track 2)
Poster
Recent PC servers are equipped with multi-core CPUs, and it is desirable to utilize their full processing power for data analysis in large-scale HEP experiments. A software framework, "basf2", is being developed for use in the Belle II experiment, an upgraded B-factory experiment at KEK, with parallel event processing in its design. The framework accepts a set of plug-in...
Dr
Julius Hrivnac
(Universite de Paris-Sud 11 (FR))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
Possible implementations of parallel algorithms will be described.
- The functionality will be demonstrated using Swarm - a new experimental interactive parallel framework.
- The access from several parallel-friendly scripting languages will be shown.
- The benchmarks of the typical tasks used in High Energy Physics code will be provided.
The talk will concentrate on using the "Fork and...
Mr
Philippe Canal
(FERMILAB)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
In the past year, the development of ROOT I/O has focused on improving the existing code and increasing the collaboration with the experiments' experts. Regular I/O workshops have been held to share and build upon the varied experiences and points of view. The resulting improvements in ROOT I/O span many dimensions including reduction and more control over the memory usage, drastic reduction...
Dr
John Apostolakis
(CERN),
Xin Dong
(Northeastern University)
24/05/2012, 13:30
Event Processing (track 2)
Poster
We report on the progress of the multi-core versions of Geant4, including multi-process and multi-threaded Geant4.
The performance of the multi-threaded version of Geant4 has been measured, identifying an overhead of 20-30% compared with the sequential version. We explain the reasons, and the improvements introduced to reduce this overhead.
In addition we have improved the design of a...
Irina Sourikova
(Brookhaven National Laboratory)
24/05/2012, 13:30
Collaborative tools (track 6)
Poster
During its 20 years of R&D, construction and operation, the PHENIX experiment at RHIC has accumulated large amounts of proprietary collaboration data, hosted
on many servers around the world and not open to commercial search engines for indexing and searching. The legacy search infrastructure did not
scale well with the fast-growing PHENIX document
base and produced results...
Danilo Dongiovanni
(INFN),
Doina Cristina Aiftimiei
(Istituto Nazionale Fisica Nucleare (IT))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
What is an EMI Release? What is its life-cycle? How is its quality assured through a continuous integration and large scale acceptance testing? These are the main questions that this article will answer, by presenting the EMI release management process with emphasis on the role played by the Testing Infrastructure in improving the quality of the middleware provided by the project.
The...
Simon William Fayer
(Imperial College Sci., Tech. & Med. (GB)),
Stuart Wakefield
(Imperial College Sci., Tech. & Med. (GB))
24/05/2012, 13:30
Event Processing (track 2)
Poster
The density of rack-mount computers is continually increasing, allowing for higher performance processing in smaller and smaller spaces. With the introduction of its new Bulldozer micro-architecture, AMD have made it feasible to run up to 128 cores within a 2U rack-mount space. CPUs based on Bulldozer contain a series of modules, each module containing two processing cores which share some...
Igor Kulakov
(GSI)
24/05/2012, 13:30
Event Processing (track 2)
Poster
Search for particle trajectories is a basis of the on-line event reconstruction in the heavy-ion CBM experiment (FAIR/GSI, Darmstadt, Germany). The experimental requirements are very high, namely: up to 10^7 collisions per second, up to 1000 charged particles produced in a central collision, a non-homogeneous magnetic field, about 85% of the additional background combinatorial measurements in...
Dr
Thomas Mc Cauley
(Fermi National Accelerator Lab. (US))
24/05/2012, 13:30
Event Processing (track 2)
Poster
iSpy is a general-purpose event data and detector visualization program that was developed as an event display for the CMS experiment at the LHC and has seen use by the general public, as well as by teachers and students, in the context of education and outreach.
Central to the iSpy design philosophy is ease of installation, use, and extensibility. The application itself uses the open-access packages...
Riccardo Di Sipio
(Universita e INFN (IT))
24/05/2012, 13:30
Event Processing (track 2)
Poster
Jigsaw provides a collection of tools for high-energy physics analyses. In Jigsaw's paradigm, input data, analyses and histograms are factorized so that they can be configured and put together at run-time, giving more flexibility to the user.
Analyses are focussed on physical objects such as particles and event shape quantities. These are distilled from the input data and brought to the...
Norman Anthony Graf
(SLAC National Accelerator Laboratory (US))
24/05/2012, 13:30
Event Processing (track 2)
Poster
LCIO is a persistency framework and event data model which, as originally presented at CHEP 2003, was developed for the next linear collider physics and detector response simulation studies. Since then, the data model has been extended to also incorporate raw data formats as well as reconstructed object classes. LCIO defines a common abstract user interface (API) and is designed to be...
Norman Anthony Graf
(SLAC National Accelerator Laboratory (US))
24/05/2012, 13:30
Event Processing (track 2)
Poster
slic: Geant4 simulation program
As the complexity and resolution of particle detectors increases,
the need for detailed simulation of the experimental setup also
increases. Designing experiments requires efficient tools to
simulate detector response and optimize the cost-benefit ratio
for design options. We have developed efficient and flexible
tools for detailed physics and...
Oskar Wyszynski
(Jagiellonian University (PL))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
Shine is the new offline software framework of the NA61/SHINE experiment at the CERN SPS for data reconstruction, analysis and visualization as well as detector simulation.
To allow for a smooth migration to the new framework, as well as to facilitate its validation, our transition strategy foresees to incorporate considerable parts of the old NA61/SHINE reconstruction chain which is based on...
Alain Roy
(University of Wisconsin-Madison)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
We recently completed a significant transition in the Open Science Grid in which we moved our software distribution mechanism from the useful but niche system called Pacman to a community-standard native packaged system (RPM). Despite the challenges, this migration was both useful and necessary. In this paper we explore some of the lessons learned during this transition, lessons which we...
Mr
SON HOANG
(University of Houston)
24/05/2012, 13:30
Event Processing (track 2)
Poster
In the quest to develop a Space Radiation Dosimeter based on the Timepix chip from Medipix2 Collaboration, the fundamental issue is how Dose and Dose-equivalent can be extracted from the raw Timepix outputs. To calculate the Dose-equivalent, each type of potentially incident radiation is given a Quality Factor, also referred to as Relative Biological Effectiveness (RBE). As proposed in the...
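The dose-equivalent bookkeeping described above, a quality factor per radiation type applied to the absorbed dose, can be sketched generically. The function names, field layout and Q values below are illustrative placeholders, not the Timepix calibration discussed in this contribution:

```cpp
#include <cassert>
#include <cmath>
#include <map>
#include <string>

// Dose equivalent H = sum_i Q_i * D_i: each radiation type contributes
// its absorbed dose D weighted by its quality factor Q (RBE).
// Both maps are keyed by an illustrative radiation-type label.
double doseEquivalent(const std::map<std::string, double>& dosePerType,
                      const std::map<std::string, double>& qualityFactor) {
    double h = 0.0;
    for (const auto& d : dosePerType)
        h += qualityFactor.at(d.first) * d.second;   // H += Q * D
    return h;
}
```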
Illya Shapoval
(CERN, KIPT)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The Conditions Database of the LHCb experiment (CondDB) provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger, reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database...
Valentin Kuznetsov
(Cornell University)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
NoSQL is a recent buzzword in the IT world. Major players, such as Facebook, Yahoo and Google, have widely adopted different "NoSQL" solutions for their needs. Horizontal scalability, a flexible data model and management of big data volumes are only a few of the advantages of NoSQL. In the CMS experiment we use several of them in a production environment. Here we present CMS projects based on NoSQL solutions,...
Dr
Daniel DeTone
(University of Michigan)
24/05/2012, 13:30
Collaborative tools (track 6)
Poster
Communication and collaboration using stored digital media has recently garnered increasing interest in many facets of business, government and education. This is primarily due to improvements in the quality of cameras and the speed of computers. Digital media serves as an effective alternative in the absence of physical interaction between multiple individuals. Video recordings that allow...
Jakob Lettenbichler
(HEPHY Vienna, Austria),
Moritz Nadler,
Rudi Frühwirth
(Institut fuer Hochenergiephysik (HEPHY))
24/05/2012, 13:30
Event Processing (track 2)
Poster
The Silicon Vertex Detector (SVD) of the Belle II experiment is a newly developed
device with four measurement layers. Track finding in the SVD will be done both in
conjunction with the Central Drift Chamber and in stand-alone mode. The
reconstruction of very-low-momentum tracks in stand-alone mode is a big challenge,
especially in view of the low redundancy and the large expected...
Prof.
Sudhir Malik
(University of Nebraska-Lincoln)
24/05/2012, 13:30
Collaborative tools (track 6)
Poster
Since 2009, the CMS experiment at LHC has provided an intensive training on the use of Physics Analysis Tools (PAT), a collection of common analysis tools designed to share expertise and maximise the productivity in the physics analysis. More than ten one-week courses preceded by prerequisite studies have been organized and the feedback from the participants has been carefully analysed. This...
Diogo Raphael Da Silva Di Calafiori
(Eidgenoessische Tech. Hochschule Zuerich (CH))
24/05/2012, 13:30
Online Computing (track 1)
Poster
This paper presents the current architecture of the control and safety systems designed and implemented for the Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC). A complete evaluation of both systems performance during all CMS physics data taking periods is reported, with emphasis on how software and hardware solutions have...
Anton Topurov
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
As elsewhere in today’s computing environment, virtualisation is becoming prevalent in the database management area where HEP laboratories, and industry more generally, seek to deliver improved services whilst simultaneously increasing efficiency. We present here our solutions for the effective management of virtualised databases, building on over five years of experience dating back to...
Mateusz Lechman
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
ALICE (A Large Ion Collider Experiment) is one of the large LHC (Large Hadron Collider) experiments at CERN in Geneva, Switzerland.
The experiment is composed of 18 sub-detectors controlled by an integrated Detector Control System (DCS) that is implemented using the commercial SCADA package PVSS. The DCS includes over 1200 network devices, over 1,000,000 input channels and numerous custom...
Michael Jackson
(EPCC)
24/05/2012, 13:30
Online Computing (track 1)
Poster
Within the Muon Ionization Cooling Experiment (MICE), the MICE Analysis User Software (MAUS) framework performs both online analysis of live data and detailed offline data analysis, simulation, and accelerator design. The MAUS Map-Reduce API parallelizes computing in the control room, ensures that code can be run both offline and online, and displays plots for users in an easily extendable...
Witold Pokorski
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
In this paper we present a new tool for tuning and validation of Monte Carlo (MC) generators, essential in order to have predictive power in the area of high-energy physics (HEP) experiments. With the first year of LHC data being now analyzed, the need for reliable MC generators is very clear. The tool, called MCPLOTS, is composed of a browsable repository of plots comparing HEP event...
Norman Anthony Graf
(SLAC National Accelerator Laboratory (US))
24/05/2012, 13:30
Event Processing (track 2)
Poster
The ability to directly import CAD geometries into Geant4 is an often requested feature, despite the recognized limitations of the difficulty in accessing proprietary formats, the mismatch between level of detail in producing a part and simulating it, the often disparate approaches to parent-child relationships and the difficulty in maintaining or assigning material definitions to...
Andrew Haas
(SLAC National Accelerator Laboratory)
24/05/2012, 13:30
Event Processing (track 2)
Poster
We are now in a regime where we observe a substantial number of proton-proton collisions within each filled LHC bunch-crossing, as well as multiple filled bunch-crossings within the sensitive time window of the ATLAS detector. These effects will increase with increased luminosity in the near future.
Including these effects in Monte Carlo simulation poses significant computing challenges. We present a...
Dr
Andreas Wildauer
(Universidad de Valencia (ES)),
Federico Meloni
(Università degli Studi e INFN Milano (IT)),
Kirill Prokofiev
(New York University (US)),
Simone Pagan Griso
(Lawrence Berkeley National Lab. (US))
24/05/2012, 13:30
Event Processing (track 2)
Poster
Presented in this contribution are methods currently developed and used by the ATLAS collaboration to measure the performance of the primary vertex reconstruction algorithms. These methods quantify the amount of additional pile up interactions and help to identify the hard scattering process (the so called primary vertex) in the proton-proton collisions with high accuracy. The correct...
Salvatore Di Guida
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
With the LHC producing collisions at larger and larger luminosity, CMS must be able to take high-quality data and process them reliably: these tasks require not only correct conditions data, but also that such data be promptly available. The CMS conditions infrastructure relies on many different pieces, such as hardware, networks, and services, which must be constantly monitored, and any faulty...
Dr
Andrea Valassi
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The CORAL software is widely used by the LHC experiments for storing and accessing data using relational database technologies. CORAL provides a C++ abstraction layer that supports data persistency for several backends and deployment models, including local access to SQLite files, direct client access to Oracle and MySQL servers, and read-only access to Oracle through the FroNTier/Squid and...
Marian Babik
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is a Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via...
Hege Austrheim Erdal
(Bergen University College (NO))
24/05/2012, 13:30
Online Computing (track 1)
Poster
ALICE (A Large Ion Collider Experiment) is a dedicated heavy ion experiment at the Large Hadron
Collider (LHC). The High Level Trigger (HLT) for ALICE is a powerful, sophisticated tool aimed at compressing the data volume and filtering events with desirable physics content. Several of the major detectors in ALICE are incorporated into HLT to compute real-time event reconstruction, for...
Mr
Andres Abad Rodriguez
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
One of the major goals of the EMI (European Middleware Initiative) project is the integration of several components of the pre-existing middleware (ARC, gLite, UNICORE and dCache) into a single consistent set of packages with uniform distributions and repositories. Those individual middleware projects have been developed in the last decade by tens of development teams and before EMI were all...
Joao Antunes Pequenao
(Lawrence Berkeley National Lab. (US))
24/05/2012, 13:30
Collaborative tools (track 6)
Poster
New types of hardware, like smartphones and tablets, are becoming more available, affordable and popular in the market. Furthermore with the advent of Web2.0 frameworks, Web3D and Cloud computing, the way we interact, produce and exchange content is being dramatically transformed.
How can we take advantage of these technologies to produce engaging applications which can be conveniently used...
Dr
David Lawrence
(Jefferson Lab)
24/05/2012, 13:30
Event Processing (track 2)
Poster
The JANA framework has been deployed and in use since 2007 for development of the GlueX experiment at Jefferson Lab. The multi-threaded reconstruction framework is routinely used on machines with up to 32 cores with excellent scaling. User feedback has also helped to develop JANA into a user-friendly environment for development of reconstruction code and event playback. The basic design of...
Alja Mrak Tadel
(Univ. of California San Diego (US)),
Matevz Tadel
(Univ. of California San Diego (US))
24/05/2012, 13:30
Event Processing (track 2)
Poster
Fireworks, the event-display program of CMS, was extended with an advanced geometry visualization package. ROOT's TGeo geometry is used as internal representation, shared among several geometry views. Each view is represented by a GUI list-tree widget, implemented as a flat vector to allow for fast searching, selection, and filtering by material type, node name, and shape type. Display of...
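The flat-vector node representation described above, chosen so that searching and filtering reduce to a single linear scan, can be sketched generically. The node fields and function names here are assumptions for illustration, not Fireworks code:

```cpp
#include <cassert>
#include <string>
#include <vector>

// A geometry node flattened into one contiguous vector entry.
struct GeoNode { std::string name; std::string material; };

// Filtering by material is one pass over the flat vector; the result
// is the set of matching indices, cheap to intersect with other filters.
std::vector<size_t> filterByMaterial(const std::vector<GeoNode>& nodes,
                                     const std::string& material) {
    std::vector<size_t> out;
    for (size_t i = 0; i < nodes.size(); ++i)
        if (nodes[i].material == material) out.push_back(i);
    return out;
}
```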
Timur Pocheptsov
(Joint Inst. for Nuclear Research (RU))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
ROOT's graphics works mainly via the TVirtualX class (this includes both GUI and non-GUI graphics). Currently, TVirtualX has two native implementations based on the X11 and Win32 low-level APIs. To make the X11 version work on
OS X we have to install the X11 server (an additional application), but unfortunately, there is no X11 for iOS and so no graphics for mobile devices from Apple -...
Andreas Salzburger
(CERN),
Giacinto Piacquadio
(CERN)
24/05/2012, 13:30
Event Processing (track 2)
Poster
The read-out signals from individual pixels on planar semiconductor sensors are grouped into clusters to reconstruct
the location where a charged particle passed through the sensor. The resolution given by individual pixel sizes
is significantly improved by using the information from the charge sharing between pixels.
Such analog cluster creation techniques have been used by the ATLAS...
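A minimal sketch of charge-weighted ("analog") cluster position finding, reduced to one dimension: the centroid of pixel centres weighted by deposited charge, which improves on the binary pixel-pitch resolution. All names and numbers are illustrative, not the ATLAS implementation:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One fired pixel: centre position and collected charge (1D for clarity).
struct Pixel { double x; double charge; };

// Charge-weighted mean position of the cluster: shared charge between
// neighbouring pixels pulls the estimate towards the true crossing point.
double clusterCentroid(const std::vector<Pixel>& cluster) {
    double q = 0.0, qx = 0.0;
    for (const auto& p : cluster) {
        q  += p.charge;
        qx += p.charge * p.x;
    }
    return qx / q;
}
```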
Mr
Felix Valentin Böhmer
(Technische Universität München)
24/05/2012, 13:30
Event Processing (track 2)
Poster
GENFIT is a framework for track fitting in nuclear and particle physics. Its defining feature is the conceptual independence of the specific detector and field geometry, achieved by modular design of the software.
A track in GENFIT is a collection of detector hits and a collection of track representations. It can contain hits from different detector types (planar hits, space points,...
Irakli Chakaberia
(Kansas State University)
24/05/2012, 13:30
Online Computing (track 1)
Poster
The rate of performance improvements of the LHC at CERN has had strong
influence on the characteristics of the monitoring tools developed for the
experiments. We present some of the latest additions to the suite of Web
Based Monitoring services for the CMS experiment, and explore the aspects
that address the roughly 20-fold increase in peak instantaneous luminosity
over the course of...
Mr
Laurent Garnier
(LAL-IN2P3-CNRS)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
New developments on visualization drivers in Geant4 software toolkit
Lorenzo Moneta
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
ROOT, a data analysis framework, provides advanced numerical and statistical methods via the ROOT Math work package.
Now that the LHC experiments have started to analyze their data and produce physics results, we have acquired experience in the way these numerical methods are used and the libraries have been consolidated taking into account also the received feedback. At the same time,...
Andrew Norman
(Fermilab)
24/05/2012, 13:30
Online Computing (track 1)
Poster
The NOvA experiment at Fermi National Accelerator Lab features a free-running, continuous readout system without dead time, which collects and buffers time-continuous data from over 350,000 readout channels. The raw data must be searched to correlate them with beam spill events from the NuMI beam facility. They are also analyzed in real time to identify event topologies of interest. The...
Felice Pantaleo
(University of Pisa (IT))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
Data analyses based on evaluation of likelihood functions are commonly used in the high-energy physics community for fitting statistical models to data samples. The likelihood functions require the evaluation of several probability density functions on the data. This is accomplished using loops. For the evaluation operations, the standard accuracy is double precision floating point. The...
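The evaluation loop described above can be sketched for the simplest case: the negative log-likelihood of a Gaussian PDF accumulated over a data sample in double precision. The model and data here are illustrative only, not the authors' analysis code:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// NLL = -sum_i log f(x_i | mu, sigma) for a Gaussian PDF f, evaluated
// in a plain loop over the events, entirely in double precision.
double gaussNLL(const std::vector<double>& data, double mu, double sigma) {
    const double halfLogTwoPi = 0.5 * std::log(2.0 * 3.14159265358979323846);
    double nll = 0.0;
    for (double x : data) {                 // one PDF evaluation per event
        double z = (x - mu) / sigma;
        nll += 0.5 * z * z + std::log(sigma) + halfLogTwoPi;
    }
    return nll;
}
```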
Miao HE
(Institute of High Energy Physics, Chinese Academy of Sciences)
24/05/2012, 13:30
Event Processing (track 2)
Poster
Neutrino flavor oscillation is characterized by three mixing angles. The Daya Bay reactor antineutrino experiment is designed to determine the last unknown mixing angle $\theta_{13}$. The experiment is located in southern China, near the Daya Bay nuclear power plant. Eight identical liquid scintillator detectors are being installed in three experimental halls, to detect antineutrinos released...
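As a reminder of the physics behind the measurement (standard three-flavour oscillation phenomenology, not specific to this contribution), the leading-order survival probability for reactor antineutrinos is

```latex
P_{\bar{\nu}_e \to \bar{\nu}_e} \approx 1 - \sin^2 2\theta_{13}\,\sin^2\!\left(\frac{\Delta m^2_{31} L}{4E}\right)
```

where $L$ is the baseline and $E$ the antineutrino energy; the observed rate deficit determines $\sin^2 2\theta_{13}$.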
Dmitry Arkhipkin
(Brookhaven National Laboratory)
24/05/2012, 13:30
Online Computing (track 1)
Poster
The STAR Experiment further exploits scalable message-oriented model principles to achieve a high level of control over online data
streams. In this report we present an AMQP-powered Message Interface and Reliable Architecture framework (MIRA), which allows STAR to orchestrate the activities of Metadata Collection, Monitoring, Online QA and several Run-Time / Data Acquisition system...
Artur Szostak
(University of Bergen (NO))
24/05/2012, 13:30
Online Computing (track 1)
Poster
The ALICE High Level Trigger (HLT) is a dedicated real-time system for on-line event reconstruction and triggering. Its main goal is to reduce the large volume of raw data that is read out from the detector systems, up to 25 GB/s, by an order of magnitude to fit within the available data acquisition bandwidth. This is accomplished by a combination of data compression and triggering. When a...
Dave Dykstra
(Fermi National Accelerator Lab. (US))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each of the central servers at CERN, called a Frontier Launchpad, uses Tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers,...
Markus Frank
(CERN)
24/05/2012, 13:30
Online Computing (track 1)
Poster
Today's computing elements for software based high level trigger processing (HLT) are based on nodes with multiple cores. Using process based parallelisation to filter particle collisions from the LHCb experiment on such nodes leads to expensive consumption of read-only memory and hence significant cost increase. In the following an approach is presented to fork multiple identical processes...
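The forking approach can be sketched with plain POSIX fork(): child processes share the parent's read-only pages copy-on-write, which is where the memory savings come from. This is a generic sketch under POSIX assumptions, not the LHCb implementation:

```cpp
#include <cassert>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <vector>

// Fork one worker per slice of work; each child only reads 'shared',
// so the large buffer stays physically shared between all processes
// (copy-on-write). Returns the number of workers that failed.
int runWorkers(const std::vector<int>& shared, int nWorkers) {
    for (int w = 0; w < nWorkers; ++w) {
        pid_t pid = fork();
        if (pid == 0) {                  // child: read-only access only
            long sum = 0;
            for (int v : shared) sum += v;
            _exit(sum == 0 ? 0 : 1);     // exit without running atexit hooks
        }
    }
    int failures = 0, status = 0;
    while (wait(&status) > 0)
        if (!WIFEXITED(status) || WEXITSTATUS(status) != 0) ++failures;
    return failures;
}
```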
Sylvain Chapeland
(CERN)
24/05/2012, 13:30
Online Computing (track 1)
Poster
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The DAQ (Data Acquisition System) facilities handle the data flow from the detectors electronics up to the mass storage. The DAQ system is based on a large farm of commodity hardware consisting of more than 600...
Mr
Kyle Gross
(Open Science Grid / Indiana University)
24/05/2012, 13:30
Collaborative tools (track 6)
Poster
Large distributed computing collaborations, such as the WLCG, face many issues when it comes to providing a working grid environment for their users. One of these is exchanging tickets between various ticketing systems in use by grid collaborations. Ticket systems such as Footprints, RT, Remedy, and ServiceNow all have different schema that must be addressed in order to provide a reliable...
Mr
Igor Kulakov
(Goethe Universitaet Frankfurt)
24/05/2012, 13:30
Event Processing (track 2)
Poster
The CBM experiment is a future fixed-target experiment at FAIR/GSI (Darmstadt, Germany). It is being designed to study heavy-ion collisions at extremely high interaction rates. The main tracking detectors are the Micro-Vertex Detector (MVD) and the Silicon Tracking System (STS). Track reconstruction in these detectors is a very complicated task because of several factors. Up to 1000 tracks per...
Felice Pantaleo
(CERN),
Julien Leduc
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
Data analyses based on evaluation of likelihood functions are commonly used in the high energy physics community for fitting statistical models to data samples. These procedures require several evaluations of these functions and they can be very time consuming. Therefore, it becomes particularly important to have fast evaluations. This paper describes a parallel implementation that allows to...
Dr
Alan Dion
(Brookhaven National Laboratory)
24/05/2012, 13:30
Event Processing (track 2)
Poster
An algorithm is presented which reconstructs helical tracks in a solenoidal magnetic field using a generalized Hough Transform. While the problem of reconstructing helical tracks from the primary vertex can be converted to the problem of reconstructing lines (with 3 parameters), reconstructing secondary tracks requires a full helix to be used (with 5 parameters). The Hough transform memory...
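The reduction to a line search mentioned above can be illustrated with a minimal two-parameter (rho, theta) Hough transform in 2D. This is a generic sketch, not the authors' three- or five-parameter implementation, and all names and binning choices are illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Each hit (x, y) votes for every (theta, rho) bin consistent with
// rho = x*cos(theta) + y*sin(theta); the most-voted bin is the line.
std::pair<int, int> houghPeak(const std::vector<std::pair<double, double>>& hits,
                              int nTheta, int nRho, double rhoMax) {
    const double kPi = 3.14159265358979323846;
    std::vector<int> acc(nTheta * nRho, 0);  // the accumulator array
    for (const auto& h : hits) {
        for (int it = 0; it < nTheta; ++it) {
            double theta = kPi * it / nTheta;
            double rho = h.first * std::cos(theta) + h.second * std::sin(theta);
            int ir = static_cast<int>((rho + rhoMax) / (2.0 * rhoMax) * nRho);
            if (ir >= 0 && ir < nRho) ++acc[it * nRho + ir];
        }
    }
    int best = 0;
    for (int i = 1; i < static_cast<int>(acc.size()); ++i)
        if (acc[i] > acc[best]) best = i;
    return {best / nRho, best % nRho};       // (theta bin, rho bin) of the peak
}
```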
Rolf Seuster
(Max-Planck-Institut fuer Physik (Werner-Heisenberg-Institut))
24/05/2012, 13:30
Event Processing (track 2)
Poster
In 2011 the LHC delivered excellent data: the integrated luminosity of about 5 fb-1 was more than what was expected. The price for this
huge data set is in-time and out-of-time pileup, additional soft events overlaid on top of the interesting event. The reconstruction software is very sensitive to these additional particles in the event, as the reconstruction time increases due to increased...
Andrea Bocci
(CERN)
24/05/2012, 13:30
Online Computing (track 1)
Poster
The CMS experiment has been designed with a 2-level trigger system: the Level 1 Trigger, implemented using FPGA and custom ASIC technology, and the High Level Trigger (HLT), implemented running a streamlined version of the CMS offline reconstruction software on a cluster of commercial rack-mounted computers, comprising thousands of CPUs.
The design of a software trigger system requires a...
Johannes Ebke
(Ludwig-Maximilians-Univ. Muenchen (DE))
24/05/2012, 13:30
Event Processing (track 2)
Poster
Historically, HEP event information for final analysis is stored in
Ntuples or ROOT Trees and processed using ROOT I/O, usually resulting in
a set of histograms or tables.
Here we present an alternative data processing framework, leveraging the
Protocol Buffer open-source library, developed and used by Google Inc.
for loosely coupled interprocess communication and serialization.
We...
Dr
Jason Webb
(Brookhaven National Lab)
24/05/2012, 13:30
Event Processing (track 2)
Poster
Faced with the abundance of geometry models available within the HENP community, long running experiments face a daunting challenge: how to migrate legacy GEANT3 based detector geometries to new technologies, such as the ROOT/TGeo framework [1]. One approach, entertained by the community for some time, is to introduce a level of abstraction: implementing the geometry in a higher order...
Gabriela Hoff
(CERN)
24/05/2012, 13:30
Event Processing (track 2)
Poster
Physics models and algorithms operating in the condensed transport scheme - multiple scattering and energy loss of charged particles - play a critical role in the simulation of energy deposition in detectors.
Geant4 algorithms pertinent to this domain involve a number of parameters and physics modeling approaches, which have evolved in the course of the years. Results in the literature...
Prof.
Nobu Katayama
(HIGH ENERGY ACCELERATOR RESEARCH ORGANIZATION)
24/05/2012, 13:30
Event Processing (track 2)
Poster
Dark energy is one of the most intriguing questions in the field of particle physics and cosmology. We expect first light of Hyper Suprime-Cam (HSC) at the Subaru Telescope on top of Mauna Kea on the island of Hawaii in 2012. HSC will measure the shapes of billions of galaxies precisely to construct a 3D map of the dark matter in the universe, characterizing the properties of dark energy. We...
Dr
Fabio Cossutti
(Universita e INFN (IT))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The production of simulated samples for physics analysis at LHC represents a noticeable organization challenge, because it requires the management of several thousands different workflows. The submission of a workflow to the grid based computing infrastructure is just the arrival point of a long decision process: definition of the general characteristics of a given set of coherent samples,...
Axel Naumann
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
C++11 is the new standard for the C++ language that includes several additions to the core language and extends the C++ standard library. New features, such as move semantics, are expected to bring performance benefits, and as soon as these benefits have been demonstrated, C++11 will undoubtedly become widely adopted in the development of HEP code. However, it will be shown that this may well be...
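The performance promise of move semantics mentioned above can be shown in a few lines: moving a large std::vector transfers its heap buffer instead of copying each element. A minimal sketch, independent of any HEP code base:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Returns true when moving 'src' into 'dst' reuses the same heap buffer,
// i.e. no element-wise copy took place -- the core promise of C++11 move
// semantics. 'src' is left in a valid but unspecified state afterwards.
bool moveReusesBuffer(std::vector<double> src) {
    const double* before = src.data();
    std::vector<double> dst = std::move(src);  // O(1): steals the buffer
    return dst.data() == before;
}
```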
Mr
Pierre Vande Vyvre
(CERN)
24/05/2012, 13:30
Online Computing (track 1)
Poster
In November 2009, after 15 years of design and installation, the ALICE experiment started to detect and record the first collisions produced by the LHC. It has been collecting hundreds of millions of events ever since, with both proton-proton and heavy-ion collisions. The future scientific programme of ALICE has been refined following the first year of data taking. The physics targeted beyond...
Graeme Andrew Stewart
(CERN)
24/05/2012, 13:30
Event Processing (track 2)
Poster
The ATLAS experiment at the LHC collider recorded more than 3 fb-1 of pp collision data at a center-of-mass energy
of 7 TeV by September 2011. The recorded data are promptly reconstructed in two steps at a large computing farm at CERN to provide fast access to high-quality data for physics analysis. In the first step a subset of the collision data, corresponding to 10 Hz, is...
Dr
itay Yavin
(New-York University)
24/05/2012, 13:30
Collaborative tools (track 6)
Poster
Searches for new physics by experimental collaborations represent a significant investment in time and resources. Often these searches are sensitive to a broader class of models than they were originally designed to test. It is possible to extend the impact of existing searches through a technique we call 'recasting'. We present RECAST, a framework designed to facilitate the usage of this technique.
Jose Manuel Quesada Molina
(Universidad de Sevilla (ES))
24/05/2012, 13:30
Event Processing (track 2)
Poster
The final stages of a number of generators of inelastic hadron/ion interactions with nuclei in Geant4 are described by native pre-equilibrium and de-excitation models. The pre-compound model is responsible for pre-equilibrium emission of protons, neutrons and light ions. The de-excitation model provides sampling of evaporation of neutrons, protons and light fragments up to magnesium. Fermi...
Fernando Lucas Rodriguez
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The Detector Control System of the TOTEM experiment at the LHC is built with the industrial product WinCC OA (PVSS). The TOTEM system is generated automatically through scripts using as input the detector PBS structure and pinout connectivity, archiving and alarm meta-information, and some other heuristics based on the naming conventions. When those initial parameters and code are modified to...
Danilo Piparo
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The estimation of the compatibility of large amounts of histogram pairs is a recurrent problem in High Energy Physics. The issue is common to several different areas, from software quality monitoring to data certification, preservation and analysis. Given two sets of histograms, it is very important to be able to scrutinize the outcome of several goodness of fit tests, obtain a clear answer...
Douglas Michael Schaefer
(University of Pennsylvania (US))
24/05/2012, 13:30
Online Computing (track 1)
Poster
Since starting in 2010, the Large Hadron Collider (LHC) has produced collisions at an ever increasing rate. The ATLAS experiment
successfully records the collision data with high efficiency and excellent data quality. Events are selected using a three-level trigger system, where each level makes a more refined selection. The level-1 trigger (L1) consists of a custom-designed hardware trigger...
Wouter Verkerke
(NIKHEF (NL))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
RooFit is a library of C++ classes that facilitate data modeling in the ROOT
environment. Mathematical concepts such as variables, (probability density)
functions and integrals are represented as C++ objects. The package provides a
flexible framework for building complex fit models through classes that mimic
math operators. For all constructed models RooFit provides a concise yet
powerful...
Gordon Watts
(University of Washington (US))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
ROOT.NET provides an interface between Microsoft’s Common Language Runtime (CLR) and .NET technology and the ubiquitous particle physics analysis tool, ROOT. ROOT.NET automatically generates a series of efficient wrappers around the ROOT API. Unlike pyROOT, these wrappers are statically typed and so are highly efficient as compared to the Python wrappers. The connection to .NET means that one...
Axel Naumann
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
We will present new approaches to implementing quality control procedures in the development of the ROOT data processing framework. A multi-platform, cloud-based infrastructure is used for supporting the incremental build and test procedures employed in the ROOT software development process. Tests run continuously and a custom generic tool has been adopted for CPU and heap regression...
Zhechka Toteva
(CERN)
24/05/2012, 13:30
Collaborative tools (track 6)
Poster
The Information Technology (IT) and the General Services (GS) departments at CERN have decided to combine their extensive experience in support for IT and non-IT services towards a common goal – to bring the services closer to the end user based on ITIL best practice. The collaborative efforts have so far produced definitions for the incident and the request fulfillment processes which are...
Sebouh Paul
(Jefferson Lab)
24/05/2012, 13:30
Event Processing (track 2)
Poster
With the advent of the 12 GeV upgrade at CEBAF, it becomes necessary to create new detectors to accommodate the more powerful beam-line. It follows that new software is needed for tracking, simulation and event display. In the case of CLAS12, the new detector to be installed in Hall B, development has proceeded on new analysis frameworks and runtime environments, such as the Clara (CLAS12...
Mizuki Karasawa
(BNL)
24/05/2012, 13:30
Collaborative tools (track 6)
Poster
At BNL, we are planning to establish a federation with different organizations by using an SSO technology, Shibboleth. It provides the underlying mechanism for leveraging institutional authentication and exchanging user attributes for authorization. This framework will allow us to collaborate not only with organizations inside BNL but also with institutions and organizations outside of BNL to be able...
Martin Barisits
(Vienna University of Technology (AT))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The ATLAS Distributed Data Management system stores more than 75PB of physics data across
100 sites globally. Over 8 million files are transferred daily with strongly varying usage
patterns. For performance and scalability reasons it is imperative to adapt and improve
the data management system continuously. Therefore future system modifications in
hardware, software as well as policy,...
Peter Wegner
(Deutsches Elektronen–Synchrotron, DESY, Platanenallee 6, D-15738 Zeuthen, Germany)
24/05/2012, 13:30
Online Computing (track 1)
Poster
The CTA (Cherenkov Telescope Array) project is an initiative to build the next generation ground-based very high energy (VHE) gamma-ray instrument. Compared to current imaging atmospheric Cherenkov telescope experiments CTA will extend the energy range and improve the angular resolution while increasing the sensitivity by a factor of 10. With these capabilities it is expected that CTA will...
Raul Murillo Garcia
(University of California Irvine (US))
24/05/2012, 13:30
Online Computing (track 1)
Poster
The ATLAS Cathode Strip Chamber system consists of two end-caps with 16 chambers each. The CSC Readout Drivers (RODs) are purpose-built boards encapsulating 13 DSPs and around 40 FPGAs. The principal responsibility of each ROD is for the extraction of data from two chambers at a maximum trigger rate of 75 kHz. In addition, each ROD is in charge of the setup, control and monitoring of the...
Robert Kutschke
(Fermilab)
24/05/2012, 13:30
Event Processing (track 2)
Poster
The Mu2e experiment at Fermilab is proceeding through its R&D and approval processes. Two critical elements of R&D towards a design that will achieve the physics goals are an end-to-end simulation package and reconstruction code that has reached the stage of an advanced prototype. These codes live within the environment of the experiment's infrastructure software. Mu2e uses art as the...
Mark Hodgkinson
(University of Sheffield),
Rolf Seuster
(Max-Planck-Institut fuer Physik (Werner-Heisenberg-Institut) (DE))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The ATLAS collaboration operates an extensive set of protocols to validate the quality of the offline software in a timely manner. This is essential in order to process the large amounts of data being collected by the ATLAS detector in 2011 without complications on the offline software side. We will discuss a number of different strategies used to validate the ATLAS offline software; running...
Tobias Stockmanns
(Forschungszentrum Jülich GmbH)
24/05/2012, 13:30
Event Processing (track 2)
Poster
Modern experiments in hadron and particle physics are searching for more and more rare decays which have to be extracted from a huge background of particles. To achieve this goal a very high precision of the experiments is required, which the simulation software also has to reach. Therefore a very detailed description of the experiment's hardware is needed, including also tiny...
Luca Tomassetti
(University of Ferrara and INFN)
24/05/2012, 13:30
Event Processing (track 2)
Poster
The SuperB asymmetric energy e+e- collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab^-1 and a luminosity target of 10^36 cm^-2 s^-1.
Since 2009 the SuperB Computing group is...
Dr
Jack Cranshaw
(Argonne National Laboratory (US))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
TAGs are event-level metadata allowing a quick search for interesting events for further analysis, based on selection criteria defined by the user. They are stored in a file-based format as well as in relational databases. The overall TAG system architecture encompasses a range of interconnected services that provide functionality for the required use cases such as event level selection,...
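The point of event-level metadata like TAGs is that a selection can run over small records and return only references to the interesting events, so the bulk event data need not be read at all. A toy sketch of that pattern (field names and values are invented for illustration, not the ATLAS TAG schema):

```python
# Hypothetical TAG records: event-level metadata a user might cut on.
tags = [
    {"run": 191933, "event": 1, "n_muons": 2, "missing_et": 45.0},
    {"run": 191933, "event": 2, "n_muons": 0, "missing_et": 12.5},
    {"run": 191934, "event": 7, "n_muons": 1, "missing_et": 80.2},
]

def select(tags, predicate):
    """Return (run, event) references for events passing the cut, so
    only the selected events need to be fetched for full analysis."""
    return [(t["run"], t["event"]) for t in tags if predicate(t)]

picked = select(tags, lambda t: t["n_muons"] >= 1 and t["missing_et"] > 40)
print(picked)  # [(191933, 1), (191934, 7)]
```

In a relational TAG store the same predicate would be an SQL WHERE clause; the file-based and database back-ends mentioned above serve the same selection use case.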
Federico Ronchetti
(Istituto Nazionale Fisica Nucleare (IT))
24/05/2012, 13:30
Online Computing (track 1)
Poster
The ALICE detector yields a huge sample of data, via millions of channels from different sub-detectors. On-line data processing must be applied to select and reduce the data volume in order to increase the significant information in the stored data.
ALICE applies a multi-level hardware trigger scheme where fast detectors are used to feed a three-level deep chain, L0-L2. The High-Level...
Sylvain Chapeland
(CERN)
24/05/2012, 13:30
Online Computing (track 1)
Poster
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The 18 ALICE sub-detectors are regularly calibrated in order to achieve most accurate physics measurements. Some of these procedures are done online in the DAQ (Data Acquisition System) so that calibration...
Linghui Wu
24/05/2012, 13:30
Event Processing (track 2)
Poster
BESIII/BEPCII is a major upgrade of the BESII experiment at the Beijing Electron-Positron Collider (BEPC) for studies of hadron spectroscopy and tau-charm physics. The BESIII detector adopts a small-cell helium-based drift chamber (MDC) as the central tracking detector. The momentum resolution deteriorated due to misalignment during data taking. In order to improve the momentum resolution,...
Gancho Dimitrov
(Brookhaven National Laboratory (US))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The ATLAS experiment at LHC relies on databases for detector online
data-taking, storage and retrieval of configurations, calibrations and
alignments, post data-taking analysis, file management over the grid, job
submission and management, data replications to other computing centers,
etc. The Oracle Relational Database Management System has been addressing
the ATLAS database requirements...
Will Buttinger
(University of Cambridge (GB))
24/05/2012, 13:30
Online Computing (track 1)
Poster
The ATLAS Level-1 Trigger is the first stage of event selection for the ATLAS experiment at the LHC. In order to identify the interesting collision events to be passed on to the next selection stage within a latency of less than 2.5 us, it is based on custom-built electronics. Signals from the Calorimeter and Muon Trigger System are combined in the Central Trigger Processor which processes...
Alexander Oh
(University of Manchester (GB))
24/05/2012, 13:30
Online Computing (track 1)
Poster
The ATLAS experiment at CERN's Large Hadron Collider (LHC) has taken data with colliding beams at instantaneous luminosities of 2*10^33 cm^-2 s^-1. The LHC aims to deliver an integrated luminosity of 5 fb^-1 in the 2011 run period at luminosities of up to 5*10^33 cm^-2 s^-1, which requires dedicated strategies to safeguard the highest physics output while effectively reducing the event rate.
The...
Amir Farbin
(University of Texas at Arlington (US))
24/05/2012, 13:30
Event Processing (track 2)
Poster
The ATLAS experiment has collected vast amounts of data with the arrival of the inverse-femtobarn era at the LHC. ATLAS has developed an intricate analysis model with several types of derived datasets, including their grid storage strategies, in order to make data from O(10^9) recorded events readily available to physicists for analysis. Several use cases have been considered in the ATLAS...
Andrei Cristian Spataru
(CERN)
24/05/2012, 13:30
Online Computing (track 1)
Poster
The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ), and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of the order of a few hundred...
Ruben Domingo Gaspar Aparicio
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
At CERN, and probably elsewhere, centralised Oracle-database services deliver high levels of service performance and reliability but are sometimes perceived as overly rigid and inflexible for initial application development. As a consequence a number of key database applications are running on user-managed MySQL database services. This is all very well when things are going well, but the...
Kerstin Lantzsch
(Bergische Universitaet Wuppertal (DE))
24/05/2012, 13:30
Online Computing (track 1)
Poster
The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC), constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are supervised by the Detector Control System (DCS). The DCS enables equipment supervision of all ATLAS sub-detectors by...
Mr
Arthur Franke
(Columbia University)
24/05/2012, 13:30
Online Computing (track 1)
Poster
The Double Chooz reactor antineutrino experiment employs a
network-distributed DAQ divided among a number of computing nodes on a Local
Area Network. The Double Chooz Online Monitor Framework has been developed
to provide short-timescale, real-time monitoring of multiple distributed DAQ
subsystems and serve diagnostic information to multiple clients. Monitor
information can be accessed...
Matthew Toups
(Columbia University)
24/05/2012, 13:30
Online Computing (track 1)
Poster
The Double Chooz experiment searches for reactor neutrino oscillations at the Chooz nuclear power plant. A client/server model is used to coordinate actions among several online systems over TCP/IP sockets. A central run control server synchronizes data-taking among two independent data acquisition (DAQ) systems via a common communication protocol and state machine definition. Calibration...
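A run control server of the kind described above typically enforces a shared state machine: each command is only valid in certain states, and every subsystem follows the same transition table. A minimal sketch of that idea (the states, commands and transitions here are illustrative, not the actual Double Chooz protocol):

```python
# Illustrative run-control state machine: (current state, command) -> next state.
TRANSITIONS = {
    ("IDLE", "configure"): "CONFIGURED",
    ("CONFIGURED", "start"): "RUNNING",
    ("RUNNING", "stop"): "CONFIGURED",
    ("CONFIGURED", "reset"): "IDLE",
}

class RunControl:
    def __init__(self):
        self.state = "IDLE"

    def handle(self, command):
        """Apply a command; reject it if not allowed in the current state."""
        key = (self.state, command)
        if key not in TRANSITIONS:
            raise ValueError(f"'{command}' not allowed in state {self.state}")
        self.state = TRANSITIONS[key]
        return self.state

rc = RunControl()
rc.handle("configure")
print(rc.handle("start"))  # RUNNING
```

In the real system the commands would arrive over TCP/IP sockets; the shared transition table is what keeps independent DAQ systems synchronized.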
Luca Magnoni
(CERN)
24/05/2012, 13:30
Online Computing (track 1)
Poster
A large experiment like ATLAS at LHC (CERN), with over three thousand members and a shift crew of 15 people running the experiment 24/7, needs an easy and reliable tool to gather all the information concerning the experiment development, installation, deployment and exploitation over its lifetime. With the increasing number of users and the accumulation of stored information since the...
Andrea Negri
(Universita e INFN (IT))
24/05/2012, 13:30
Online Computing (track 1)
Poster
Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity and the accelerator backgrounds and luminosity increase, we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for ATLAS, whose computing power is such that a couple of...
Dr
Ivana Hrivnacova
(IPN Orsay, CNRS/IN2P3)
24/05/2012, 13:30
Event Processing (track 2)
Poster
The Virtual Monte Carlo (VMC) provides an abstract interface into the Monte Carlo transport codes GEANT3, Geant4 and FLUKA. The user's VMC-based application, independent of the specific Monte Carlo codes, can then be run with all three simulation programs. The VMC was developed by the ALICE Offline Project and has since drawn attention from other experimental frameworks.
Since its...
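The core design idea of VMC, writing user code against one abstract interface so that concrete transport engines can be swapped in behind it, can be sketched in a few lines. The class and method names below are invented for illustration and do not reflect the actual VMC C++ API:

```python
from abc import ABC, abstractmethod

class MCEngine(ABC):
    """Stand-in for an abstract transport-code interface."""
    @abstractmethod
    def transport(self, particle):
        ...

class Geant4Like(MCEngine):
    def transport(self, particle):
        return f"Geant4 transports {particle}"

class Geant3Like(MCEngine):
    def transport(self, particle):
        return f"GEANT3 transports {particle}"

def run_simulation(engine: MCEngine, particles):
    # Application code depends only on the interface, so the engine
    # can be exchanged without touching the user's simulation code.
    return [engine.transport(p) for p in particles]

print(run_simulation(Geant4Like(), ["e-", "mu+"]))
```

Swapping `Geant4Like()` for `Geant3Like()` changes the transport engine without any change to `run_simulation`, which is precisely the portability the abstract wrote about.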
Michael Steder
(DESY)
24/05/2012, 13:30
Collaborative tools (track 6)
Poster
The H1 data preservation project was started in 2009 as part of the global data preservation in high-energy physics (DPHEP) initiative. In order to retain the full potential for future improvements, the H1 collaboration aims for level 4 of the DPHEP recommendations, requiring the full simulation and reconstruction chain to be available for analysis. A major goal of the H1 project is...
Eduard Avetisyan
(DESY)
24/05/2012, 13:30
Collaborative tools (track 6)
Poster
We discuss the steps and efforts required to secure the continued analysis and data access for the HERMES experiment after the end of the
active collaboration period. The model for such an activity has been developed within the framework of the DPHEP initiative in a close
collaboration of HERA experiments and the DESY IT. For HERMES the preservation scheme foresees a possibility of full data...
Mr
Victor Diez Gonzalez
(CERN fellow)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The LCG Applications Area relies on regular integration testing of the provided software stack. In the past, regular builds have been provided using a system which has been constantly changed and developed, adding new features like server-client communication, a long-term history of results and a summary web interface using present-day web technologies.
However, the ad-hoc style of software...
Dr
Antony Wilson
(STFC - Science & Technology Facilities Council (GB))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The configuration database (CDB) is the memory of the Muon Ionisation Cooling Experiment (MICE). Its principal aim is to store temporal data associated with the running conditions of the experiment. These data can change on a per-run basis (e.g. magnet currents, high voltages), or on long time scales (e.g. cabling, calibration, and geometry). These data are used throughout the life cycle of...
The Monitoring and Calibration Web Systems for the ATLAS Tile Calorimeter Data Quality Analysis
Andressa Sivolella Gomes
(Univ. Federal do Rio de Janeiro (BR))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The Tile Calorimeter (TileCal), one of the ATLAS sub-detectors, has four partitions, each containing 64 modules, and each module has up to 48 PhotoMultipliers (PMTs), totaling more than 10,000 electronic channels. The Monitoring and Calibration Web System (MCWS) supports data quality analyses at the channel level. This application was developed to assess the detector status and verify its...
Andrzej Dworak
(CERN)
24/05/2012, 13:30
Online Computing (track 1)
Poster
The Controls Middleware (CMW) project was launched over ten years ago. Its main goal was to unify the middleware solutions used to operate the CERN accelerator complex. A key part of the project, the equipment access library RDA, was based on CORBA, an unquestioned middleware standard at the time. RDA became an operational and critical part of the infrastructure, yet the demanding run-time...
Andrew Norman
(Fermilab)
24/05/2012, 13:30
Online Computing (track 1)
Poster
The NOvA experiment at Fermi National Accelerator Lab uses a sophisticated timing distribution system to synchronize more than 12,000 front-end readout and data acquisition systems at both the near detector and accelerator complex located at Fermilab and at the far detector located 810 km away at Ash River, MN. This global synchronization is performed to an absolute clock time...
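Synchronization to an absolute clock means every subsystem maps wall-clock time onto the same global tick counter, so timestamps taken 810 km apart are directly comparable. A toy sketch of such a mapping; the 64 MHz clock (15.625 ns per tick) is an assumption made for this illustration, not a statement about the NOvA hardware:

```python
def ns_to_ticks(time_ns: int) -> int:
    """Absolute time in integer nanoseconds -> global tick count at an
    assumed 64 MHz clock (15.625 ns per tick); exact integer arithmetic."""
    return time_ns * 1000 // 15625

# Two far-apart systems stamping the same absolute instant get the same
# tick count, which is what makes cross-detector event matching possible:
near = ns_to_ticks(1_337_856_600_000_000_000)
far = ns_to_ticks(1_337_856_600_000_000_000)
print(near == far, ns_to_ticks(1_000_000_000))  # True 64000000
```

Working in integer nanoseconds avoids the precision loss that floating-point epoch seconds would suffer at this magnitude.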
Roland Sipos
(Hungarian Academy of Sciences (HU))
24/05/2012, 13:30
Event Processing (track 2)
Poster
NA61/SHINE (SHINE = SPS Heavy Ion and Neutrino Experiment) is an experiment at the CERN SPS using the upgraded NA49 hadron spectrometer. Among its physics goals are precise hadron production measurements for improving calculations of the neutrino beam flux in the T2K neutrino oscillation experiment as well as for more reliable simulations of cosmic-ray air showers. Moreover, p+p, p+Pb and...
Dr
John Marshall
(University of Cambridge (GB))
24/05/2012, 13:30
Event Processing (track 2)
Poster
Pandora is a robust and efficient framework for developing and running pattern-recognition algorithms. It was designed to perform particle flow calorimetry, which requires many complex pattern-recognition techniques to reconstruct the paths of individual particles through fine granularity detectors. The Pandora C++ software development kit (SDK) consists of a single library and a number of...
Mr
Igor Soloviev
(University of California Irvine (US))
24/05/2012, 13:30
Online Computing (track 1)
Poster
To configure a data-taking run, the ATLAS systems and detectors store more than 150 MBytes of data acquisition related configuration information in OKS[1] XML files. The total number of files exceeds 1300 and they are updated by many system experts. In the past, after such updates we occasionally experienced problems configuring a run, caused by XML syntax errors or...
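Catching XML syntax errors before they can break a run is a matter of parsing each updated file in a pre-commit check. A minimal sketch of that kind of check using the Python standard library (this is illustrative only, not the ATLAS tooling, and the tag names are invented):

```python
import xml.etree.ElementTree as ET

def check_xml_syntax(text):
    """Return None if the XML parses, else a short error description.
    A hook could run this over every updated configuration file
    before the change is accepted."""
    try:
        ET.fromstring(text)
        return None
    except ET.ParseError as err:
        return str(err)

good = "<partition name='ATLAS'><segment id='daq'/></partition>"
bad = "<partition name='ATLAS'><segment id='daq'></partition>"  # unclosed tag

print(check_xml_syntax(good))             # None
print(check_xml_syntax(bad) is not None)  # True
```

Syntax checking alone does not catch semantic errors (dangling references between files), which is why a schema-aware validation layer would still be needed on top.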
Katarzyna Wichmann
(DESY)
24/05/2012, 13:30
Collaborative tools (track 6)
Poster
A project to allow long term access and physics analysis of ZEUS data (ZEUS data preservation) has been established in collaboration with the DESY-IT group. In the ZEUS approach the analysis model is based on the Common Ntuple project, under development since 2006. The real data and all presently available Monte Carlo samples are being preserved in a flat ROOT ntuple format. There is...
Scott Snyder
(Brookhaven National Laboratory (US))
24/05/2012, 13:30
Event Processing (track 2)
Poster
The final step in a HEP data-processing chain is usually to reduce the data to a `tuple' form which can be efficiently read by interactive analysis tools such as ROOT. Often, this is implemented independently by each group analyzing the data, leading to duplicated effort and needless divergence in the format of the reduced data. ATLAS has implemented a common toolkit for performing this...
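Reducing events to a flat tuple means collapsing each structured event record into one row of simple columns that generic tools can read directly. A toy version of that reduction step (the event structure and column names are invented for illustration; the real toolkit writes ROOT files, not CSV):

```python
import csv
import io

# Invented structured event records standing in for full event data:
events = [
    {"run": 1, "tracks": [{"pt": 12.0}, {"pt": 7.5}]},
    {"run": 1, "tracks": [{"pt": 30.1}]},
]

def to_flat_rows(events):
    """One row per event, keeping only the reduced analysis quantities."""
    for ev in events:
        pts = [t["pt"] for t in ev["tracks"]]
        yield {"run": ev["run"], "n_tracks": len(pts), "max_pt": max(pts)}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["run", "n_tracks", "max_pt"])
writer.writeheader()
writer.writerows(to_flat_rows(events))
print(buf.getvalue().strip())
```

A common toolkit for this step fixes the column definitions once, which is exactly the duplication and divergence the abstract says per-group implementations suffer from.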
Christoph Wasicki
(Deutsches Elektronen-Synchrotron (DE)),
Heather Gray
(CERN),
Simone Pagan Griso
(Lawrence Berkeley National Lab. (US))
24/05/2012, 13:30
Event Processing (track 2)
Poster
The track and vertex reconstruction algorithms of the ATLAS Inner Detector have demonstrated excellent performance in the early data from the LHC. However, the rapidly increasing number of interactions per bunch crossing introduces new challenges both in computational aspects and physics performance. We will discuss the strategy adopted by ATLAS in response to this increasing multiplicity by...
Anthony Morley
(CERN)
24/05/2012, 13:30
Event Processing (track 2)
Poster
The Large Hadron Collider (LHC) at CERN is the world's largest particle accelerator, which collides proton beams at an unprecedented centre of mass energy of 7 TeV.
ATLAS is a multipurpose experiment that records the products of the LHC collisions. In order to reconstruct the trajectories of charged particles produced in these collisions,
ATLAS is equipped with a tracking system (Inner...
Johannes Mattmann
(Johannes-Gutenberg-Universitaet Mainz (DE))
24/05/2012, 13:30
Event Processing (track 2)
Poster
The reconstruction and simulation of collision events is a major task in modern HEP experiments, involving several tens of thousands of standard CPUs. On the other hand, graphics processors (GPUs) have become much more powerful and by far outperform standard CPUs in floating-point operations due to their massively parallel approach. The usage of these GPUs could therefore...
Patrick Czodrowski
(Technische Universitaet Dresden (DE))
24/05/2012, 13:30
Online Computing (track 1)
Poster
Hadronic tau decays play a crucial role in taking Standard Model measurements as well as in the search for physics beyond the Standard Model. However, hadronic tau decays are difficult to identify and trigger on due to their resemblance to QCD jets. Given the large production cross section of QCD processes, designing and operating a trigger system with the capability to efficiently select...
Gordon Watts
(University of Washington (US))
24/05/2012, 13:30
Collaborative tools (track 6)
Poster
Particle physics conferences and experiments generate a huge number of plots and presentations. It is impossible to keep up. A typical conference (like CHEP) will have hundreds of plots. A single analysis result from a major experiment will have almost 50 plots. Scanning a conference or sorting out what plots are new is almost a full time job. The advent of multi-core computing and advanced video...
Prof.
Martin Erdmann
(Rheinisch-Westfaelische Tech. Hoch. (DE))
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The Visual Physics Analysis (VISPA) project addresses the typical development cycle of (re-)designing, executing, and verifying an analysis.
It presents an integrated graphical development environment for physics analyses, using the Physics eXtension Library (PXL) as underlying C++ analysis toolkit.
Basic guidance to the project is given by the paradigms of object oriented programming, data...
Maria Alandes Pradillo
(CERN)
24/05/2012, 13:30
Software Engineering, Data Stores and Databases (track 5)
Poster
The EMI project is based on the collaboration of four major middleware projects in Europe, all already developing middleware products and having their pre-existing strategies for developing, releasing and controlling their software artefacts. In total, the EMI project is made up of about thirty individual development teams, called “Product Teams” in EMI. A Product Team is responsible for the...
Dr
Torsten Antoni
(KIT - Karlsruhe Institute of Technology (DE))
24/05/2012, 13:30
Collaborative tools (track 6)
Poster
The xGUS helpdesk template is aimed at NGIs, DCIs and user communities wanting to structure their user support and integrate it with the EGI support.
xGUS contains all basic helpdesk functionalities. It is hosted and maintained at KIT in Germany. Portal administrators from the client DCI or user community can customize the portal to their specific needs. Via web, they can edit the support...