Dr
Gabriele Garzoglio
(FERMI NATIONAL ACCELERATOR LABORATORY)
23/03/2009, 08:00
In recent years, it has become more and more evident that software threat communities are taking an
increasing interest in Grid infrastructures. To mitigate the security risk associated with the increasing number of attacks, the Grid software development community needs to scale up its effort to reduce software vulnerabilities. This can be achieved by introducing security review processes as a...
Dr
David Lawrence
(Jefferson Lab)
23/03/2009, 08:00
A minimal XPath 1.0 parser has been implemented within the JANA framework that
allows easy access to attributes or tags in an XML document. The motivating
implementation was to access geometry information from XML files in
the HDDS specification (derived from ATLAS's AGDD). The system allows
components in the reconstruction package to pick out individual numbers
from a collection of XML...
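As an illustration only of the kind of XPath lookup described (JANA itself is C++), the following Python sketch pulls a single number out of an XML geometry fragment with one expression; the XML content, element names and attribute names are invented for the example.

import xml.etree.ElementTree as ET

# Invented HDDS-like geometry fragment, for illustration only.
HDDS_XML = """
<HDDS>
  <section name="ForwardDC">
    <tubs name="DCwire" rmin="0.0" rmax="0.001" length="1.7" unit_length="m"/>
  </section>
</HDDS>
"""

root = ET.fromstring(HDDS_XML)

# ElementTree supports a limited XPath subset: one expression picks
# an individual number out of the document, as described above.
wire = root.find("./section[@name='ForwardDC']/tubs[@name='DCwire']")
print(float(wire.get("length")))  # -> 1.7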
Daniel Colin Van Der Ster
(Conseil Europeen Recherche Nucl. (CERN))
23/03/2009, 08:00
Ganga provides a uniform interface for running ATLAS user analyses on a number of local, batch, and grid backends. PanDA is a pilot-based production and distributed analysis system developed and used extensively by ATLAS. This work presents the implementation and usage experiences of a PanDA backend for Ganga. Built upon reusable application libraries from GangaAtlas and PanDA, the Ganga PanDA...
Mr
Andrey TSYGANOV
(Moscow Physical Engineering Inst. (MePhI))
23/03/2009, 08:00
CERN, the European Laboratory for Particle Physics, located in Geneva - Switzerland, has recently started the Large Hadron Collider (LHC), a 27 km particle accelerator. The CERN Engineering and Equipment Data Management Service (EDMS) provides support for managing engineering and equipment information throughout the entire lifecycle of a project. Based on several both in-house developed and...
Dr
Suren Chilingaryan
(The Institute of Data Processing and Electronics, Forschungszentrum Karlsruhe)
23/03/2009, 08:00
During the operation of high energy physics experiments a large amount of slow control data is recorded. It is necessary to examine all collected data, checking the integrity and validity of the measurements. With the growing maturity of AJAX technologies it becomes possible to construct sophisticated interfaces using web technologies only.
Our solution for handling time series, generally slow control...
Dr
David Lawrence
(Jefferson Lab)
23/03/2009, 08:00
Factory models are often used in object oriented
programming to allow more complicated and controlled
instantiation than is easily done with a standard C++ constructor.
The alternative factory model implemented in the
JANA event processing framework addresses issues of
data integrity important to the type of reconstruction
software developed for experimental HENP. The data on...
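As a rough sketch of the general factory idea described here (on-demand creation, with the produced objects owned and cached by the factory rather than by the caller), one might picture the following Python pattern; all class, attribute and method names are invented, and this is not JANA's C++ API.

class TrackFactory:
    """Toy factory: builds track objects for an event on demand and caches
    them, so every consumer of the same event sees identical data."""

    def __init__(self):
        self._cache = {}  # event number -> list of reconstructed tracks

    def get(self, event):
        # Construct the objects only once per event; later calls reuse them,
        # which protects data integrity during reconstruction.
        if event.number not in self._cache:
            self._cache[event.number] = self._make_tracks(event)
        return self._cache[event.number]

    def _make_tracks(self, event):
        # Placeholder reconstruction step (invented for the sketch).
        return [{"id": i, "hits": hits} for i, hits in enumerate(event.raw_hits)]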
Ms
Gerhild Maier
(Johannes Kepler Universität Linz)
23/03/2009, 08:00
Grid computing is associated with a complex, large-scale, heterogeneous and distributed environment. The combination of different Grid infrastructures, middleware implementations, and job submission tools into one reliable production system is a challenging task. Given the impracticality of providing an absolutely fail-safe system, strong error reporting and handling is a crucial part of...
Dr
David Malon
(Argonne National Laboratory), Dr
Peter Van Gemmeren
(Argonne National Laboratory)
23/03/2009, 08:00
At a data rate of 200 hertz, event metadata records ("TAGs," in ATLAS parlance)
provide fertile grounds for development and evaluation of tools for scalable data mining.
It is easy, of course, to apply HEP-specific selection or classification rules to event records
and to label such an exercise "data mining," but our interest is different.
Advanced statistical methods and tools such as...
José Mejia
(Rechenzentrum Garching)
23/03/2009, 08:00
The ATLAS computing Grid consists of several hundred compute clusters distributed around the world as part of the Worldwide LHC Computing Grid (WLCG). The Grid middleware and the ATLAS software, which have to be installed on each site, often require a certain Linux distribution and sometimes even a specific version thereof.
On the other hand, mostly due to maintenance reasons, computer centres...
Dr
John Kennedy
(LMU Munich)
23/03/2009, 08:00
The organisation and operations model of the ATLAS T1-T2 federation/cloud associated with the GridKa T1
in Karlsruhe is described. Attention is paid to cloud level services and the experience gained during
the last years of operation.
The ATLAS GridKa Cloud is large and diverse, spanning 5 countries and 2 ROCs, and currently comprises 13
core sites. A well defined and tested operations...
Lassi Tuura
(Northeastern University)
23/03/2009, 08:00
The CMS experiment at the Large Hadron Collider has deployed numerous web-based services in order to serve the collaboration effectively. We present the two-phase authentication and authorisation system in use in the data quality and computing monitoring services, and in the data- and workload management services. We describe our techniques intended to provide a high level of security with...
Marco Clemencic
(European Organization for Nuclear Research (CERN))
23/03/2009, 08:00
An extensive test suite is the first step towards the delivery of robust software, but it is not always easy to implement, especially in projects with many developers. An easy-to-use and flexible infrastructure for writing and executing the tests reduces the work each developer has to do to instrument his packages with tests. At the same time, the infrastructure gives the same look and...
Mr
Ricardo Manuel Salgueiro Domingues da Silva
(CERN)
23/03/2009, 08:00
A frequent source of concern for resource providers is the efficient use of computing resources in their centres. This has a direct impact on requests for new resources.
There are two different but strongly correlated aspects to be
considered: while users are mostly interested in a good turn-around time for their jobs, resource providers are mostly interested in a high and efficient usage...
Alessandro De Salvo
(Istituto Nazionale di Fisica Nucleare Sezione di Roma 1)
23/03/2009, 08:00
The measurement of experiment software performance is a very important metric for choosing the most effective resources to use and for discovering bottlenecks in the code implementation.
In this work we present the benchmark techniques used to measure the ATLAS software performance through the ATLAS offline testing engine Kit Validation and the online portal Global Kit...
Dr
Florian Uhlig
(GSI Darmstadt)
23/03/2009, 08:00
One of the challenges of software development for large experiments is to
manage the contributions from globally distributed teams. In order to keep
the teams synchronized, strong quality control is important.
For a software project this means that it has to be tested on all
supported platforms: whether the project can be built from source,
whether it runs, and in the end whether the program delivers the...
Dr
Antonio Pierro
(INFN-BARI)
23/03/2009, 08:00
The web application service, as part of the conditions database system, serves applications and users outside of event processing. The application server is built upon the conditions Python API in the CMS offline software framework. It responds to HTTP requests on various conditions database instances. The main client of the application server is the conditions database web GUI, which currently...
Edward Karavakis
(Brunel University-CERN)
23/03/2009, 08:00
Dashboard is a monitoring system developed for the LHC experiments in order to provide the view of the Grid infrastructure from the perspective of the Virtual Organisation. The CMS Dashboard provides a reliable monitoring system that enables the transparent view of the experiment activities across different middleware implementations and combines the Grid monitoring data with information that...
Lassi Tuura
(Northeastern University)
23/03/2009, 08:00
A central component of the data quality monitoring system of the CMS experiment at the Large Hadron Collider is a web site for browsing data quality histograms. The production servers in data taking provide access to several hundred thousand histograms per run, both live during online data taking and as up to several terabytes of archived histograms from the online data taking, Tier-0 prompt...
Natalia Ratnikova
(Fermilab-ITEP(Moscow)-Karlsruhe University(Germany))
23/03/2009, 08:00
The CMS Software project CMSSW embraces more than a thousand packages organized in over a hundred subsystems covering the areas of analysis, event display, reconstruction, simulation, detector description, data formats, framework, utilities and tools. The release integration process is highly automated, using tools developed or adopted by CMS. Packaging in rpm format is a built-in step in the...
Mr
Shahzad Muzaffar
(NORTHEASTERN UNIVERSITY)
23/03/2009, 08:00
The CMS offline software consists of over two million lines of code actively developed by hundreds of developers from all around the world. Optimal builds and distribution of such a large scale system for production and analysis activities for hundreds of sites and multiple platforms are major challenges. Recent developments have not only optimized the whole process but also helped us identify...
Dr
Thomas Kress
(RWTH Aachen, III. Physikal. Institut B)
23/03/2009, 08:00
The Tier-2 centers in CMS are the only location, besides the specialized analysis facility at CERN, where users are able to obtain guaranteed access to CMS data samples. The Tier-1 centers are used primarily for organized processing and storage. The Tier-1s are specified with data export and network capacity to allow the Tier-2 centers to refresh the data in disk storage regularly for...
Dr
Ajit Kumar Mohapatra
(University of Wisconsin, Madison, USA)
23/03/2009, 08:00
The CMS experiment has been using the Open Science Grid, through its US Tier-2 computing centers, from its very beginning for production of Monte Carlo simulations. In this talk we will describe the evolution of the usage patterns indicating the best practices that have been identified. In addition to describing the production metrics and how they have been met, we will also present the...
Dr
Alessandra Fanfani
(on beahlf of CMS - INFN-BOLOGNA (ITALY))
23/03/2009, 08:00
CMS has identified the distributed Tier-2 sites as the primary location for physics analysis. There is a specialized analysis cluster at CERN, but it represents approximately 15% of the total computing available to analysis users. The more than 40 Tier-2s on 4 continents will provide analysis computing and user storage resources for the vast majority of physicists in CMS. The CMS estimate is...
Prof.
Kihyeon Cho
(KISTI)
23/03/2009, 08:00
KISTI (Korea Institute of Science and Technology Information) is the national headquarters for supercomputing, networking, Grid and e-Science in Korea. We have been working on cyberinfrastructure for high energy physics experiments, especially the CDF and ALICE experiments. We introduce the cyberinfrastructure, which includes resources, Grid and e-Science for these experiments. The goal of...
Cédric Serfon
(LMU München)
23/03/2009, 08:00
A set of tools has been developed to handle the Data Management operations (deletion, movement of data within a site and consistency checks) within the German cloud for ATLAS. These tools, which use local protocols that allow fast and efficient processing, are described hereafter and presented in the context of the operational procedures of the cloud. A particular emphasis is put on the...
Dr
Ashok Agarwal
(University of Victoria, Victoria, BC, Canada)
23/03/2009, 08:00
An interface between dCache and the local Tivoli Storage Manager (TSM) tape storage facility has been developed at the University of Victoria (UVic) for High Energy Physics (HEP) applications. The interface is responsible for transferring the data from disk pools to tape and retrieving data from tape to disk pools. It also checks the consistency between the PNFS filename space and the TSM...
Dirk Hufnagel
(Conseil Europeen Recherche Nucl. (CERN))
23/03/2009, 08:00
The CMS Tier 0 is responsible for handling the data in the first period of its life, from being written to a disk buffer at the CMS experiment site in Cessy by the DAQ system, to the time the transfer from CERN to one of the Tier1 computing centres completes. It contains all automatic data movement, archival and processing tasks run at CERN. This includes the bulk transfers of data from Cessy to...
Mr
Adrian Casajus Ramo
(Departament d' Estructura i Constituents de la Materia)
23/03/2009, 08:00
DIRAC, the LHCb community Grid solution, provides access to a vast amount of computing and storage resources to a large number of users. In DIRAC users are organized in groups with different needs and permissions. In order to ensure that only allowed users can access the resources and to enforce that there are no abuses, security is mandatory. All DIRAC services and clients use secure...
Galina Shabratova
(Joint Inst. for Nuclear Research (JINR))
23/03/2009, 08:00
A. Bogdanov3, L. Malinina2, V. Mitsyn2, Y. Lyublev9, Y. Kharlov8, A. Kiryanov4,
D. Peresounko5, E.Ryabinkin5, G. Shabratova2 , L. Stepanova1, V. Tikhomirov3,
W. Urazmetov8, A.Zarochentsev6, D. Utkin2, L. Yancurova2, S. Zotkin8
1 Institute for Nuclear Research of the Russian Academy of Sciences, Troitsk, Russia;
2 Joint Institute for Nuclear Research, Dubna, Russia;
3 Moscow Engineering Physics Institute,...
Paul Rossman
(Fermi National Accelerator Lab. (Fermilab))
23/03/2009, 08:00
CMS utilizes a distributed infrastructure of computing centers to custodially store data, to provide organized processing resources, and to provide analysis computing resources for users. Integrated over the whole system, even in the first year of data taking, the available disk storage approaches 10 petabytes of space. Maintaining consistency between the data bookkeeping, the data transfer...
Prof.
Roger Jones
(Lancaster University)
23/03/2009, 08:00
Despite the all too brief availability of beam-related data, much has been learned about the usage patterns and operational requirements of the ATLAS computing model since Autumn 2007. Bottom-up estimates are now more detailed, and cosmic ray running has exercised much of the model in both duration and volume. Significant revisions have been made in the resource estimates, and in the usage of...
Claudio Grandi
(INFN Bologna)
23/03/2009, 08:00
The CMS Collaboration relies on 7 globally distributed Tier-1 computing centers located at large universities and national laboratories for a second custodial copy of the CMS RAW data and primary copy of the simulated data, data serving capacity to Tier-2 centers for analysis, and the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging...
Dr
Tomasz Wlodek
(Brookhaven National Laboratory (BNL)), Dr
Yuri Smirnov
(Brookhaven National Laboratory (BNL))
23/03/2009, 08:00
The PanDA distributed production and analysis system has been in
production use for ATLAS data processing and analysis since late 2005
in the US, and globally throughout ATLAS since early 2008. Its core
architecture is based on a set of stateless web services served by
Apache and backed by a suite of MySQL databases that are the
repository for all Panda information: active and archival...
Mr
Michele De Gruttola
(INFN, Sezione di Napoli - Universita & INFN, Napoli/ CERN)
23/03/2009, 08:00
Reliable population of the condition database is critical for the correct operation of the online selection as well as of the offline reconstruction and analysis of data.
We will describe here the system put in place in the CMS experiment to populate the database and make condition data promptly available online for the high-level trigger and offline for reconstruction.
The system has been...
Loic Quertenmont
(Universite Catholique de Louvain)
23/03/2009, 08:00
FROG is a generic framework dedicated to visualizing events in a given geometry.
It has been written in C++ and uses OpenGL cross-platform libraries. It can be used for any particular physics experiment or detector design. The code is very light and very fast and can run on various operating systems. Moreover, FROG is self-consistent and does not require installation of ROOT or...
Victor Diez Gonzalez
(Univ. Rov. i Virg., Tech. Sch. Eng.-/CERN)
23/03/2009, 08:00
Geant4 is a toolkit to simulate the passage of particles through
matter, and is widely used in HEP, in medical physics and for space
applications. Ongoing developments and improvements require regular
integration testing for new or modified code.
The current system uses a customised version of the Bonsai Mozilla tool
to collect and select tags for testing, a set of shell and...
Mr
Laurent GARNIER
(LAL-IN2P3-CNRS)
23/03/2009, 08:00
Qt is a powerful cross-platform application framework, free (even on Windows) and used by many people and applications.
That is why the latest developments in the Geant4 visualization group come with a new driver based on the Qt toolkit. The Qt library has OpenGL available, so all 3D scenes can be moved with the mouse (as in the OpenInventor driver).
This driver tries to bring together all the features already...
Mr
Luiz Henrique Ramos De Azevedo Evora
(CERN)
23/03/2009, 08:00
During the operation, maintenance, and dismantling periods of the ATLAS Experiment, the traceability of all detector equipment must be guaranteed for logistic and safety matters. The running of the Large Hadron Collider will expose the ATLAS detector to radiation. Therefore, CERN shall follow specific regulation from French and Swiss authorities for equipment removal, transport, repair, and...
Dr
Jose Caballero
(Brookhaven National Laboratory (BNL))
23/03/2009, 08:00
Worker nodes on the grid exhibit great diversity, making it difficult to offer uniform processing resources. A pilot job architecture, which probes the environment on the remote worker node before pulling down a payload job, can help. Pilot jobs become smart wrappers, preparing an appropriate environment for job execution and providing logging and monitoring capabilities.
PanDA (Production...
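A minimal sketch of the pilot-job pattern described above (probe the worker node, then pull a payload) might look as follows in Python; this is not PanDA's actual protocol, and the server URL and message fields are hypothetical.

import json
import platform
import subprocess
import urllib.request

JOB_SERVER = "https://example.org/getjob"  # hypothetical dispatcher endpoint

def probe_environment():
    """Collect a few facts about the worker node before asking for work."""
    return {"os": platform.platform(), "python": platform.python_version()}

def fetch_payload(env):
    """Ask the (hypothetical) server for a payload matching this environment."""
    req = urllib.request.Request(JOB_SERVER,
                                 data=json.dumps(env).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # e.g. {"cmd": ["echo", "payload"]}

def run_pilot():
    payload = fetch_payload(probe_environment())
    # The pilot acts as a smart wrapper: it runs the payload and keeps the log.
    result = subprocess.run(payload["cmd"], capture_output=True, text=True)
    print(result.stdout)

if __name__ == "__main__":
    run_pilot()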
Dr
Bogdan Lobodzinski
(DESY, Hamburg,Germany)
23/03/2009, 08:00
The H1 Collaboration at HERA has entered the period of high precision analyses based on the final data sample. These analyses require a massive production of simulated Monte Carlo (MC) events.
The H1 MC framework is software for mass MC production on the LCG Grid infrastructure
and on a local batch system, created by the H1 Collaboration.
The aim of the tool is full automation of the...
Dr
Sebastian Böser
(University College London)
23/03/2009, 08:00
Within the last years, the HepMC data format has established itself as the
standard data format for simulation of high-energy physics interactions and is
commonly used by all four LHC experiments. At the energies of the
proton-proton collisions at the LHC, a full description of the generation of
these events and the subsequent interactions with the detector typically
involves several...
Dr
David Dykstra
(Fermilab)
23/03/2009, 08:00
The CMS experiment requires worldwide access to conditions data by nearly a hundred thousand processing jobs daily. This is accomplished using a software subsystem called Frontier. This system translates database queries into http, looks up the results in a central database at CERN, and caches the results in an industry-standard http proxy/caching server called Squid. One of the most...
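The essential idea (read-only queries encoded as HTTP requests so that an intermediate Squid can cache the replies) can be sketched as below; this is not the real Frontier client API, and the server URL, proxy address and query encoding are invented for illustration.

import base64
import urllib.parse
import urllib.request

SQUID_PROXY = "http://squid.example.org:3128"       # hypothetical site Squid
FRONTIER_URL = "http://frontier.example.org/query"  # hypothetical server

def cached_query(sql):
    """Send a read-only query as an HTTP GET so the proxy can cache the reply."""
    encoded = base64.urlsafe_b64encode(sql.encode()).decode()
    url = FRONTIER_URL + "?" + urllib.parse.urlencode({"q": encoded})
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": SQUID_PROXY}))
    with opener.open(url) as resp:
        return resp.read()  # identical queries are served from the Squid cache

print(cached_query("SELECT payload FROM conditions WHERE run = 12345"))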
Dr
Hartmut Stadie
(Universität Hamburg)
23/03/2009, 08:00
While the Grid infrastructure for the LHC experiments is well suited for batch-like analysis, it does not support the final steps of an analysis on a reduced data set, e.g. the optimization of cuts and derivation of the final plots. Usually this part is done interactively. However, for the LHC these steps might still require a large amount of data. The German "National Analysis Facility"(NAF)...
Dr
Vladimir Korenkov
(Joint Institute for Nuclear Research (JINR))
23/03/2009, 08:00
Different monitoring systems are now extensively used to keep an eye on
real time state of each service of distributed grid infrastructures and
jobs running on the Grid. Tracking current services’ state as well as
the history of state changes allows rapid error fixing, planning future
massive productions, revealing regularities of Grid operation and many
other things. Along with...
Marco Mambelli
(UNIVERSITY OF CHICAGO)
23/03/2009, 08:00
The ATLAS experiment is projected to collect over one billion events/year during the first few years of operation.
The efficient selection of events for various physics analyses across all appropriate samples presents a significant technical challenge.
ATLAS computing infrastructure...
Prof.
Marco Cattaneo
(CERN)
23/03/2009, 08:00
LHCb had been planning to commission its High Level Trigger software and Data Quality monitoring procedures using real collisions data from the LHC pilot run. Following the LHC incident on 19th September 2008, it was decided to commission the system using simulated data.
This “Full Experiment System Test” consists of:
- Injection of simulated minimum bias events into the full HLT farm,...
Luciano Piccoli
(Fermilab)
23/03/2009, 08:00
Large computing clusters used for scientific processing suffer from systemic failures when operated over long continuous periods for executing workflows. Diagnosing job problems and faults leading to eventual failures in this complex environment is difficult, specifically when the success of whole workflow might be affected by a single job failure.
In this paper, we introduce a model-based,...
Alexey Zhelezov
(Physikalisches Institut, Universitaet Heidelberg)
23/03/2009, 08:00
LHC experiments are producing very large volumes of data either accumulated from the detectors or generated via Monte-Carlo modeling. The data should be processed as quickly as possible to provide users with the input for their analysis. Processing multiple hundreds of terabytes of data necessitates generating, submitting and following up a huge number of grid jobs running all over the...
Noriza Satam
(Department of Mathematics, Faculty of Science,Universiti Teknologi Malaysia),
Norma Alias
(Institute of Ibnu Sina, Universiti Teknologi Malaysia,)
23/03/2009, 08:00
New Iterative Alternating Group Explicit (NAGE) is a powerful parallel numerical algorithm for multidimensional temperature prediction. The discretization is based on the finite difference method for partial differential equations (PDEs) of parabolic type. The 3-dimensional temperature visualization is critical since it involves large-scale computational complexity. The three fundamental...
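For orientation only, a standard explicit finite-difference update for the 3-D heat equation is sketched below; it is not the NAGE scheme itself, and the grid size, diffusivity and time step are arbitrary illustration values.

import numpy as np

# Arbitrary illustration values, not taken from the contribution.
n, alpha, dx, dt, steps = 32, 1.0e-4, 1.0e-2, 0.1, 100
T = np.zeros((n, n, n))
T[n // 2, n // 2, n // 2] = 100.0  # a hot spot in the middle of the grid

r = alpha * dt / dx**2  # explicit scheme is stable for r <= 1/6 in 3-D
for _ in range(steps):
    # 6-point Laplacian with periodic boundaries (np.roll), for brevity.
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0)
         + np.roll(T, 1, 1) + np.roll(T, -1, 1)
         + np.roll(T, 1, 2) + np.roll(T, -1, 2) - 6.0 * T)
    T = T + r * lap  # forward-Euler time step

print(T.max())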
Mr
Andrew Baranovski
(FNAL)
23/03/2009, 08:00
In a shared computing environment, activities orchestrated by workflow management systems often need to span organizational and ownership domains. In such a setting, common tasks, such as the collection and display of metrics and debugging information, are challenged by the informational entropy inherent to independently maintained and owned software sub-components. Because such information...
Dr
Xavier Espinal
(PIC/IFAE)
23/03/2009, 08:00
The ATLAS distributed computing activities involve about 200 computing centers distributed world-wide and need people on shift covering 24 hours per day. Data distribution, data reprocessing, user analysis and Monte Carlo event simulation runs continuously. Reliable performance of the whole ATLAS computing community is of crucial importance to meet the ambitious physics goals of the ATLAS...
Alexander Undrus
(BROOKHAVEN NATIONAL LABORATORY, USA)
23/03/2009, 08:00
The system of automated multi-platform software nightly builds is a major
component in ATLAS collaborative software organization and code approval
scheme. Code developers from more than 30 countries use about 25
branches of nightly releases for testing new packages, validation of patches to
existing software, and migration to new platforms and compilers. The successful
nightly releases...
Dr
Philippe Calfayan
(Ludwig-Maximilians-University Munich)
23/03/2009, 08:00
The PROOF (Parallel ROOT Facility) library is designed to perform parallelized
ROOT-based analyses with a heterogeneous cluster of computers.
The installation, configuration and monitoring of PROOF have been carried out
using the Grid-Computing environments dedicated to the ATLAS experiment.
A PROOF cluster hosted at the Leibniz Rechenzentrum (LRZ) and consisting of a
scalable amount of...
Dr
Alfio Lazzaro
(Universita and INFN, Milano / CERN)
23/03/2009, 08:00
MINUIT is the most common package used in high energy physics for numerical minimization of multi-dimensional functions. The major algorithm of this package, MIGRAD, searches for the minimum by using the gradient function. For each minimization iteration, MIGRAD requires the calculation of the first derivatives for each parameter of the function to be minimized.
Minimization is required for...
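The per-parameter derivative evaluations mentioned above are commonly finite-difference approximations; as a generic illustration rather than MINUIT's actual implementation, a central-difference gradient over all parameters can be written as follows (two function calls per parameter per iteration).

import numpy as np

def numerical_gradient(f, params, eps=1.0e-6):
    """Central-difference first derivatives of f at the given parameter point."""
    params = np.asarray(params, dtype=float)
    grad = np.zeros_like(params)
    for i in range(params.size):
        up, down = params.copy(), params.copy()
        up[i] += eps
        down[i] -= eps
        grad[i] = (f(up) - f(down)) / (2.0 * eps)
    return grad

# Toy function standing in for a negative log-likelihood (invented example).
nll = lambda p: (p[0] - 1.0) ** 2 + 3.0 * (p[1] + 2.0) ** 2
print(numerical_gradient(nll, [0.0, 0.0]))  # approximately [-2., 12.]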
Dr
Niklaus Berger
(Institute for High Energy Physics, Beijing)
23/03/2009, 08:00
Partial wave analysis is an important tool for determining resonance properties in hadron spectroscopy. For large data samples however, the un-binned likelihood fits employed are computationally very expensive. At the Beijing Spectrometer (BES) III experiment, an increase in statistics compared to earlier experiments of up to two orders of magnitude is expected. In order to allow for a timely...
Alexandre Vaniachine
(Argonne National Laboratory),
David Malon
(Argonne National Laboratory),
Jack Cranshaw
(Argonne National Laboratory),
Jérôme Lauret
(Brookhaven National Laboratory),
Paul Hamill
(Tech-X Corporation),
Valeri Fine
(Brookhaven National Laboratory)
23/03/2009, 08:00
High Energy and Nuclear Physics (HENP) experiments store petabytes of event data and terabytes of calibrations data in ROOT files. The Petaminer project develops a custom MySQL storage engine to enable the MySQL query processor to directly access experimental data stored in ROOT files.
Our project is addressing a problem of efficient navigation to petabytes of HENP experimental data...
Mr
Igor Sfiligoi
(Fermilab)
23/03/2009, 08:00
Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs.
Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In...
Marco Clemencic
(European Organization for Nuclear Research (CERN))
23/03/2009, 08:00
The LHCb software, from simulation to user analysis, is based on the framework Gaudi. The extreme flexibility that the framework provides, through its component model and the system of plug-ins, allows us to define a specific application by its behavior more than by its code. The application is then described by some configuration files read by the bootstrap executable (shared by all...
Ms
Elena Oliver
(Instituto de Fisica Corpuscular (IFIC) - Universidad de Valencia)
23/03/2009, 08:00
ATLAS data taking is due to start in Spring 2009. Given that expectation, this contribution presents a rigorous evaluation of the readiness parameters of the Spanish ATLAS
Distributed Tier-2.
Special attention will be paid to the readiness to perform Physics Analysis from different
points of view: Network Efficiency, Data Discovery, Data Management, Production of...
Mr
Olivier Couet
(CERN)
23/03/2009, 08:00
The ROOT framework provides many visualization techniques. Lately several new ones have been implemented. This poster will present all the visualization techniques ROOT provides, highlighting the best use that can be made of each of them.
Prof.
Gordon Watts
(UNIVERSITY OF WASHINGTON)
23/03/2009, 08:00
ROOT.NET provides an interface between Microsoft’s Common Language Runtime (CLR) and .NET technology and the ubiquitous particle physics analysis tool, ROOT. This tool automatically generates a series of efficient wrappers around the ROOT API. Unlike pyROOT, these wrappers are statically typed and so are highly efficient as compared to the Python wrappers. The connection to .NET means that one...
Mr
Jan KAPITAN
(Nuclear Physics Inst., Academy of Sciences, Praha)
23/03/2009, 08:00
High Energy Nuclear Physics (HENP) collaborations' experience shows that the computing resources available from a single site are often neither sufficient nor able to satisfy the needs of remote collaborators eager to carry out their analysis in the fastest and most convenient way. From latencies in the network connectivity to the lack of interactivity, having a fully functional software stack on local resources is...
Ms
Jaroslava Schovancova
(Institute of Physics, Prague), Dr
Jiri Chudoba
(Institute of Physics, Prague)
23/03/2009, 08:00
The Pierre Auger Observatory studies ultra-high energy cosmic rays.
Interactions of these particles with the nuclei of air gases at energies
many orders of magnitude above the current accelerator capabilities induce
unprecedented extensive air showers in the atmosphere. Different interaction
models are used to describe the first interactions in such showers and their
predictions are...
Dr
Simon Metson
(H.H. Wills Physics Laboratory)
23/03/2009, 08:00
In a collaboration the size of CMS (approx. 3000 users, and almost 100 computing centres of varying size) communication and accurate information about the sites it has access to is vital in co-ordinating the multitude of computing tasks required for smooth running. SiteDB is a tool developed by CMS to track sites available to the collaboration, the allocation to CMS of resources available at...
Dr
Ricardo Graciani Diaz
(Universidad de Barcelona)
23/03/2009, 08:00
The usage of CPU resources by LHCb on the Grid is dominated by two different applications: Gauss and Brunel. Gauss is the application performing the Monte Carlo simulation of proton-proton collisions. Brunel is the application responsible for the reconstruction of the signals recorded by the detector, converting them into objects that can be used for later physics analysis of the data (tracks,...
Dr
Dagmar Adamova
(Nuclear Physics Institute AS CR)
23/03/2009, 08:00
Czech Republic (CR) has been participating in the LHC Computing Grid
project (LCG) ever since 2003 and gradually, a middle-sized Tier2 center
has been built in Prague, delivering computing services for national HEP
experiments groups including the ALICE project at the LHC. We present a
brief overview of the computing activities and services being performed in
the CR for the ALICE...
Pier Paolo Ricci
(INFN CNAF)
23/03/2009, 08:00
In the framework of WLCG, the Tier-1 computing centres have very stringent requirements in the sector of the data storage, in terms of size, performance and reliability.
Since some years, at the INFN-CNAF Tier-1 we have been using two distinct storage systems: Castor as tape-based storage solution (also known as the
D0T1 storage class in the WLCG language) and the General Parallel File...
Mr
Matti Kortelainen
(Helsinki Institute of Physics)
23/03/2009, 08:00
We study the performance of different ways of running a physics analysis in preparation for the analysis of petabytes of data in the LHC era. Our test cases include running the analysis code in a Linux cluster with a single thread in ROOT, with the Parallel ROOT Facility (PROOF), and in parallel via the Grid interface with the ARC middleware. We use of the order of millions of Pythia8...
Dr
Monica Verducci
(INFN Roma)
23/03/2009, 08:00
The ATLAS Muon Spectrometer is the outer part of the ATLAS detector at the LHC. It has been designed to detect charged particles exiting the barrel and end-cap calorimeters and to measure their momentum in the pseudorapidity range |η| < 2.7. The challenging momentum-measurement performance requires accurate monitoring of detector and calibration parameters and a highly complex architecture to...
Dr
Vincent Garonne
(CERN)
23/03/2009, 08:00
The DQ2 Distributed Data Management system is the system developed and used by ATLAS for handling very large datasets. It encompasses data bookkeeping and the management of large-scale production transfers as well as end-user
data access requests.
In this paper, we will describe the design and implementation of the DQ2 accounting service. It collects various data usage information in order to show...
Dr
Solveig Albrand
(LPSC)
23/03/2009, 08:00
AMI is the main interface for searching for ATLAS datasets using physics metadata criteria.
AMI has been implemented as a generic database management framework which allows parallel searching over many catalogues, which may have differing schema, and may be distributed geographically, using different RDBMS.
The main features of the web interface will be described; in particular the powerful...
Dr
Daniele Bonacorsi
(Universita & INFN, Bologna)
23/03/2009, 08:00
The CMS Facilities and Infrastructure Operations group is responsible for providing and maintaining a working distributed computing fabric with a consistent working environment for Data operations and the physics user community. Its mandate is to maintain the core CMS computing services; ensure the coherent deployment of Grid or site specific components (such as workload management, file...
Dr
Lee Lueking
(FERMILAB)
23/03/2009, 08:00
The CMS experiment has implemented a flexible and powerful approach enabling users to find data within the CMS physics data catalog. The Dataset Bookkeeping Service (DBS) comprises a database and the services used to store and access metadata related to its physics data. In addition to the existing WEB based and programmatic API, a generalized query system has been designed and built. This...
Dr
Andrea Sartirana
(INFN-CNAF)
23/03/2009, 08:00
The CMS experiment is preparing for data taking in many computing activities, including the testing, deployment and operation of various storage solutions to support the computing workflows of the experiment. Some Tier-1 and Tier-2 centers supporting the collaboration are deploying and commissioning StoRM storage systems. That is, posix-based disk storage systems on top of which StoRM...
Zoltan Mathe
(UCD Dublin)
23/03/2009, 08:00
The LHCb Bookkeeping is a system for the storage and retrieval of metadata associated with LHCb datasets, e.g. whether a dataset contains real or simulated data, which running period it is associated with, how it was processed, and all the other relevant characteristics of the files.
The metadata are stored in an Oracle database which is interrogated using services provided by the LHCb DIRAC3...
Hubert Degaudenzi
(European Organization for Nuclear Research (CERN))
23/03/2009, 08:00
The installation of the LHCb software is handled by a single Python script: install_project.py. This bootstrap
script is unique in that it allows the installation of software projects on various operating systems (Linux, Windows,
MacOSX). It is designed for the LHCb software deployment for a single user or for multiple users, in a shared area or on the Grid. It retrieves the software packages and...
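Purely as a sketch of the bootstrap-installer pattern described (a single script fetching and unpacking project tarballs into a chosen area), and not of install_project.py's actual logic, repository layout or options:

import os
import tarfile
import urllib.request

REPO = "https://example.org/lhcb-dist"  # hypothetical package repository

def install(project, version, target_area):
    """Download <project>_<version>.tar.gz and unpack it under target_area."""
    os.makedirs(target_area, exist_ok=True)
    name = f"{project}_{version}.tar.gz"
    local = os.path.join(target_area, name)
    urllib.request.urlretrieve(f"{REPO}/{name}", local)  # fetch the tarball
    with tarfile.open(local) as tar:
        tar.extractall(target_area)  # same code serves a private or shared area
    os.remove(local)

# Example (hypothetical project and version):
# install("Brunel", "v34r5", os.path.expanduser("~/lhcb-software"))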
Bertrand Bellenot
(CERN)
23/03/2009, 08:00
Description of the new implementation of the ROOT browser
Dr
Hubert Degaudenzi
(CERN),
Karol Kruzelecki
(Cracow University of Technology-Unknown-Unknown)
23/03/2009, 08:00
The core software stack, both from the LCG Application Area and LHCb, consists of more than
25 C++/Fortran/Python projects built for about 20 different configurations on Linux, Windows
and MacOSX. To these projects, one can also add about 20 external software packages (Boost, Python, Qt,
CLHEP, ...) which also have to be built for the same configurations. In order to reduce the
time of...
Ilektra Christidi
(Physics Department - Aristotle Univ. of Thessaloniki)
23/03/2009, 08:00
The ATLAS detector has been designed to exploit the full discovery potential of the LHC proton-proton collider at CERN, at the c.m. energy of 14 TeV. Its Muon Spectrometer (MS) has been optimized to measure final state muons from those interactions with good momentum resolution (3-10% for momenta of 100 GeV/c to 1 TeV/c).
In order to ensure that the hardware, DAQ and reconstruction software of...
Dr
Mine Altunay
(FERMILAB)
23/03/2009, 08:00
Open Science Grid stakeholders invariably depend on multiple
infrastructures to build their community-based distributed systems.
To meet this need, OSG has built new gateways with TeraGrid, Campus
Grids, and Regional Grids (NYSGrid, BrazilGrid). This has brought new
security challenges for the OSG architecture and operations. The
impact of security incidents now has a larger scope and...
Bertrand Bellenot
(CERN)
23/03/2009, 08:00
Description of the ROOT event recorder, a GUI testing and validation tool.
David Chamont
(Laboratoire Leprince-Ringuet (LLR)-Ecole Polytechnique-Unknown)
23/03/2009, 08:00
Like many experiments, FERMI stores its data within ROOT trees. A very common activity of physicists is the tuning of selection criteria which define the events of interest, thus cutting and pruning the ROOT trees so as to extract all the data linked to those specific events. It is rather straightforward to write a ROOT script to skim a single kind of data, for example the...
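A minimal PyROOT sketch of the kind of single-tree skim mentioned here; the file names, tree name and selection string are placeholders, and ROOT with its Python bindings is assumed to be available.

import ROOT

# Placeholder names: adapt to the actual tree layout.
infile = ROOT.TFile.Open("input.root")
tree = infile.Get("Events")

outfile = ROOT.TFile("skim.root", "RECREATE")
# CopyTree keeps only the entries passing the selection string,
# i.e. the events of interest defined by the selection criteria.
skimmed = tree.CopyTree("energy > 100.0")
skimmed.Write()
outfile.Close()
infile.Close()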
Dr
Richard Wilkinson
(California Institute of Technology)
23/03/2009, 08:00
In 2008, the CMS experiment made the transition
from a custom-parsed language for job configuration
to using Python. The current CMS software release
has over 180,000 lines of Python configuration code.
We describe the new configuration system, the
motivation for the change, the transition
itself, and our experiences with the new
configuration language.
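A minimal example of a configuration in this language is shown below; it runs only inside a CMSSW environment, and the process name, module labels, plugin name and input file are placeholders.

import FWCore.ParameterSet.Config as cms

process = cms.Process("Demo")

# Placeholder input file.
process.source = cms.Source("PoolSource",
    fileNames = cms.untracked.vstring("file:input.root"))

process.maxEvents = cms.untracked.PSet(input = cms.untracked.int32(10))

# A hypothetical analyzer module; the plugin name is a placeholder.
process.demo = cms.EDAnalyzer("DemoAnalyzer",
    minPt = cms.double(5.0))

process.p = cms.Path(process.demo)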
Dr
Oliver Gutsche
(FERMILAB)
23/03/2009, 08:00
The CMS software stack currently consists of more than 2 million lines of code developed by over 250 authors with a new version being released every week. CMS has set up a release validation process for quality assurance which enables the developers to compare to previous releases and references.
This process provides the developers with reconstructed datasets of real data and MC samples....
Tatsiana Klimkovich
(RWTH Aachen University)
23/03/2009, 08:00
VISPA is a novel development environment for high energy physics analyses which enables physicists to combine graphical and textual work. A physics analysis cycle consists of prototyping, performing, and verifying the analysis. The main feature of VISPA is a multipurpose window for visual steering of analysis steps, creation of analysis templates, and browsing physics event data at different...
Prof.
Rodriguez Jorge Luis
(Florida Int'l University)
23/03/2009, 08:00
The CMS experiment will generate tens of petabytes of data per year, data that will be processed, moved and stored in large computing facilities at locations all over the globe. Each of these facilities deploys complex and sophisticated hardware and software components which require dedicated expertise lacking at many of the universities and institutions wanting access to the data as soon as it...
Jiri Drahos
(chair of the Academy of Sciences of the Czech Republic),
Vaclav Hampl
(rector of the Charles University in Prague),
Vaclav Havlicek
(rector of the Czech Technical University in Prague)
23/03/2009, 09:00
Plenary
Prof.
Sergio Bertolucci
(CERN)
23/03/2009, 09:30
The LHC Machine and Experiments: Status and Prospects
Dr
Neil Geddes
(RAL)
23/03/2009, 10:00
A personal review of WLCG and the readiness for first real LHC data, highlighting some particular successes, concerns and challenges that lie ahead.
Dr
Lucas Taylor
(Northeastern U., Boston)
23/03/2009, 14:00
The CMS Experiment at the LHC is establishing a global network of inter-connected "CMS Centres" for controls, operations and monitoring. These support: (1) CMS data quality monitoring, detector calibrations, and analysis; and (2) computing operations for the processing, storage and distribution of CMS data.
We describe the infrastructure, computing, software, and communications, systems...
Dr
Johannes Gutleber
(CERN)
23/03/2009, 14:00
The CMS data acquisition system is made of two major subsystems: event building and event filter.
The presented paper describes the architecture and design of the software that processes the data
flow in the currently operating experiment. The central DAQ system relies heavily on industry
standard networks and processing equipment. Adopting a single software infrastructure in
all...
Dr
Peter Elmer
(PRINCETON UNIVERSITY)
23/03/2009, 14:00
Performance of an experiment's simulation, reconstruction and analysis
software is of critical importance to physics competitiveness and making
optimum use of the available budget. In the last 18 months the performance
improvement program in the CMS experiment has produced more than a ten-fold
gain in reconstruction performance alone, a significant reduction in mass
storage system...
Dr
Zhen Xie
(Princeton University)
23/03/2009, 14:00
Non-event data describing detector conditions change with time and
come from different data sources. They are accessible by physicists
within the offline event-processing applications for precise calibration of reconstructed data as well as for data-quality control purposes.
Over the past three years CMS has developed and deployed a software
system managing such data. Object-relational...
Dr
Jeremy Coles
(University of Cambridge - GridPP)
23/03/2009, 14:00
During 2008 we have seen several notable changes in the way the LHC experiments have tried to tackle outstanding gaps in the implementation of their computing models. The development of space tokens and changes in job submission and data movement tools are key examples. The first section of this paper will review these changes and the technical/configuration impacts they have had at the site...
Dr
Jakub Moscicki
(CERN IT/GS), Dr
Patricia Mendez Lorenzo
(CERN IT/GS)
23/03/2009, 14:00
Recently a growing number of various applications have been quickly and successfully enabled on the Grid by the CERN Grid application support team. This allowed the applications to achieve and publish large-scale results in short time which otherwise would not be possible.
The examples of successful Grid applications include the medical and particle physics simulation (Geant4, Garfield),...
Alexandre Vaniachine
(Argonne),
Rodney Walker
(LMU Munich)
23/03/2009, 14:20
During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic workflow, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of required distributed database resources. In distributed...
Valentin Kuznetsov
(Cornell University)
23/03/2009, 14:20
The CMS experiment has a distributed computing model, supporting thousands of physicists at hundreds of sites around the world. While this is a suitable solution for "day to day" work in the LHC era, there are edge use-cases that Grid solutions do not satisfy. Occasionally it is desirable to have direct access to a file on a user's desktop or laptop; for code development, debugging or examining...
Mr
Adrian Casajus Ramo
(Departament d' Estructura i Constituents de la Materia)
23/03/2009, 14:20
Traditionally, interaction between users and the Grid is done with command line tools. However, these tools are difficult for a non-expert user to use, providing minimal help and generating output that is not always easy to understand, especially in case of errors. Graphical User Interfaces are typically limited to providing access to the monitoring or accounting information and concentrate on some...
Mr
Giulio Eulisse
(NORTHEASTERN UNIVERSITY OF BOSTON (MA) U.S.A.)
23/03/2009, 14:20
In 2007 the CMS experiment first reported some initial findings on the
impedance mismatch between HEP use of C++ and the current generation
of compilers and CPUs. Since then we have continued our analysis of
the CMS experiment code base, including the external packages we use.
We have found that large amounts of C++ code have been written largely
ignoring any physical reality of the...
Tobias Koenig
(Karlsruhe Institute of Technology (KIT))
23/03/2009, 14:20
Offering sustainable Grid services to users and other computing centres is the main aim of GridKa, the German Tier-1 centre of the WLCG infrastructure. The availability and reliability of IT services directly influence the customers' satisfaction as well as the reputation of the service provider and, not least, the economic aspects. It is thus important to concentrate on processes and...
Werner Wiedenmann
(University of Wisconsin)
23/03/2009, 14:20
Event selection in the ATLAS High Level Trigger is accomplished to a large extent by reusing software components and event selection algorithms developed and tested in an offline environment. Many of these offline software modules are not specifically designed to run in a heavily multi threaded online data flow environment. The ATLAS High Level Trigger (HLT) framework based on the GAUDI and...
Mr
SooHyung Lee
(Korea Univ.)
23/03/2009, 14:40
Real-time data analysis at next-generation experiments is a challenge because of their enormous data rate and size. The SuperKEKB experiment, the upgrade of the Belle experiment, needs to process data 100 times larger than the current one, taken at 10 kHz. Offline-level data analysis is necessary in the HLT farm for efficient data reduction.
The real-time processing of huge data volumes is also...
Ms
Maite Barroso
(CERN),
Nicholas Thackray
(CERN)
23/03/2009, 14:40
A review of the evolution of WLCG/EGEE grid operations
Authors: Maria BARROSO, Diana BOSIO, David COLLADOS, Maria DIMOU, Antonio RETICO, John SHADE, Nick THACKRAY, Steve TRAYLEN, Romain WARTEL
As the EGEE grid infrastructure continues to grow in size, complexity and usage, the task of ensuring the
continued, uninterrupted availability of the grid services to the ever increasing number...
Dr
Daniel van der Ster
(CERN)
23/03/2009, 14:40
Ganga has been widely used for several years in Atlas, LHCb and a handful of other communities in the context of the EGEE project. Ganga provides a simple yet powerful interface for submitting and managing jobs to a variety of computing backends. The tool helps users configuring applications and keeping track of their work. With the major release of version 5 in summer 2008, Ganga's main...
Mr
Jeremy Herr
(U. of Michigan)
23/03/2009, 14:40
The ATLAS Collaboratory Project at the University of Michigan has been a leader in the area of collaborative tools since 1999. Its activities include the development of standards, software and hardware tools for lecture archiving, and making recommendations for videoconferencing and remote teaching facilities. Starting in 2006 our group became involved in classroom recordings, and in early...
Zachary Marshall
(Caltech, USA & Columbia University, USA)
23/03/2009, 14:40
The ATLAS Simulation validation project is done in two distinct phases. The first one is the computing validation, the second being the physics performance that must be tested and compared to available data. Infrastructure needed at each stage of validation is here described. In ATLAS software development is controlled by nightly builds to check stability and performance. The complete...
Laura Perini
(INFN Milano),
Tiziana Ferrari
(INFN CNAF)
23/03/2009, 15:00
International research collaborations increasingly require secure sharing of resources owned by the partner organizations and distributed among different administration domains. Examples of resources include data, computing facilities (commodity computer clusters, HPC systems, etc.), storage space, metadata from remote archives, scientific instruments, sensors, etc. Sharing is made possible...
Dr
Douglas Smith
(STANFORD LINEAR ACCELERATOR CENTER)
23/03/2009, 15:00
The Babar experiment produced one of the largest datasets in high
energy physics. To provide for many different concurrent analyses
the data is skimmed into many data streams before analysis can begin,
multiplying the size of the dataset both in terms of bytes and number
of files. As a large scale problem of job management and data
control, the Babar Task Manager system was...
Dr
Thomas Kittelmann
(University of Pittsburgh)
23/03/2009, 15:00
We present an event display for the ATLAS Experiment, called Virtual Point
1 (VP1), designed initially for deployment at point 1 of the LHC, the
location of the ATLAS detector. The Qt/OpenGL based application provides
truthful and interactive 3D representations of both event and non-event
data, and now serves a general-purpose role within the experiment. Thus,
VP1 is used both online (in...
Dr
Dimitri BOURILKOV
(University of Floria)
23/03/2009, 15:00
A key feature of collaboration in large-scale scientific projects is
keeping a log of what is being done and how - for private use and
reuse and for sharing selected parts with collaborators and peers,
often distributed geographically on an increasingly global scale.
Even better if this log is automatic, created on the fly while
a scientist or software developer is working in a habitual...
Mrs
Ruth Pordes
(FERMILAB)
23/03/2009, 15:20
The Open Science Grid usage has ramped up more than 25% in the past twelve months due to both the increase in throughput of the core stakeholders – US LHC, LIGO and Run II – and increase in usage by non-physics communities. We present and analyze this ramp up together with the issues encountered and implications for the future.
It is important to understand the value of collaborative...
Philippe Galvez
(California Institute of Technology (CALTECH))
23/03/2009, 15:20
The EVO (Enabling Virtual Organizations) system is based on a new distributed and unique architecture, leveraging 10+ years of unique experience in developing and operating large distributed production-based collaboration systems. The primary objective is to provide the High Energy and Nuclear Physics experiments a system/service that meets their unique requirements of usability,...
Kovalskyi Dmytro
(University of California, Santa Barbara)
23/03/2009, 15:20
Fireworks is a CMS event display which is specialized for the physics
studies case. This specialization allows the use of a stylized rather
than 3D-accurate representation when appropriate. Data handling
is greatly simplified by using only reconstructed information and
ideal geometry. Fireworks provides an easy-to-use interface which
allows a physicist to concentrate only on the data to...
Dr
Fabrizio Furano
(Conseil Europeen Recherche Nucl. (CERN))
23/03/2009, 15:20
The Scalla/Xrootd software suite is a set of tools and suggested methods useful to build scalable, fault tolerant and high performance storage systems for POSIX-like data access. One of the most important recent development efforts is to implement technologies able to deal with the characteristics of Wide Area Networks, and find solutions in order to allow data analysis applications to...
Ms
Chiara Zampolli
(CERN)
23/03/2009, 15:20
The ALICE experiment is the dedicated heavy-ion experiment at the CERN LHC and will take data with a bandwidth of up to 1.25 GB/s. It consists of 18 subdetectors that interact with five online systems (DAQ, DCS, ECS, HLT and Trigger). Data recorded are read out by DAQ in a raw data stream produced by the subdetectors. In addition the subdetectors produce conditions data derived from the raw...
Dr
David Malon
(Argonne National Laboratory), Dr
Elizabeth Gallas
(University of Oxford)
23/03/2009, 15:40
Metadata--data about data--arise in many contexts, from many diverse sources,
and at many levels in ATLAS.
Familiar examples include run-level, luminosity-block-level, and event-level metadata, and,
related to processing and organization, dataset-level and file-level metadata,
but these categories are neither exhaustive nor orthogonal.
Some metadata are known a priori, in advance of...
Dr
Donatella Lucchesi
(University and INFN Padova)
23/03/2009, 15:40
The CDF II experiment has been taking data at FNAL since 2001. The CDF computing architecture has evolved from initially using dedicated computing farms to using decentralized Grid-based resources on the EGEE grid, Open Science Grid and FNAL Campus grid.
In order to deliver high quality physics results in a timely manner to a running experiment,
CDF has had to adapt to Grid with minimum...
Dr
Erik Gottschalk
(Fermi National Accelerator Laboratory (FNAL))
23/03/2009, 15:40
We describe the use of professional-quality high-definition (HD) videoconferencing systems for daily HEP experiment operations and large-scale media events.
For CMS operations at the Large Hadron Collider, we use such systems for permanently running "telepresence" communications between the CMS Control Room in France and major offline CMS Centres at CERN, DESY, and Fermilab, and with a...
Dr
Alexei Klimentov
(BNL)
23/03/2009, 15:40
We present our experience with distributed reprocessing of the LHC beam
and cosmic ray data taken with the ATLAS detector during 2008/2009.
Raw data were distributed from CERN to ATLAS Tier-1 centers, reprocessed
and validated. The reconstructed data were consolidated at CERN and ten WLCG
ATLAS Tier-1 centers and made available for physics analysis.
The reprocessing was done...
Prof.
Gordon Watts
(UNIVERSITY OF WASHINGTON)
23/03/2009, 15:40
The DZERO Level 3 Trigger and data acquisition system has been successfully running since March of 2001, taking data for the DZERO experiment located at the Tevatron at Fermi National Laboratory. Based on commodity parts, it reads out 65 VME front-end crates and delivers the 250 MB of data to one of 1200 processing cores for a high level trigger decision at a rate of 1 kHz. Accepted...
Oliver Gutsche
(FERMILAB)
23/03/2009, 15:40
The CMS software stack currently consists of more than 2 million lines of
code developed by over 250 authors with a new version being released every
week. CMS has set up a central release validation process for quality
assurance which enables the developers to compare the performance to
previous releases and references.
This process provides the developers with reconstructed datasets of...
Michele Michelotto
(INFN + Hepix)
23/03/2009, 16:30
The SPEC INT benchmark has been used as a performance reference for computing in the HEP community for the past 20 years. The SPEC CPU INT 2000 (SI2K) unit of performance has been used by the major HEP experiments both in the Computing Technical Design Report for the LHC experiments and in the evaluation of the Computing Centres. At recent HEPiX meetings several HEP sites have reported...
Dr
Jack Cranshaw
(Argonne National Laboratory), Dr
Qizhi Zhang
(Argonne National Laboratory)
23/03/2009, 16:30
ATLAS has developed and deployed event-level selection services based upon event metadata records ("tags")
and supporting file and database technology.
These services allow physicists to extract events that satisfy their selection predicates from any stage
of data processing and use them as input to later analyses.
One component of these services is a web-based Event-Level Selection...
Mr
Gilles Mathieu
(STFC, Didcot, UK)
23/03/2009, 16:30
All grid projects have to deal with topology and operational information like resource distribution, contact lists and downtime declarations. Storing, maintaining and publishing this information properly is one of the key elements to successful grid operations. The solution adopted by EGEE and WLCG projects is a central repository that hosts this information and makes it available to users and...
Daniel Sonnick
(University of Applied Sciences Kaiserslautern)
23/03/2009, 16:30
In LHCb raw data files are created on a high-performance storage
system using a custom, speed-optimized file-writing software. The
file-writing is orchestrated by a data-base, which represents the
life-cycle of a file and is the entry point for all operations related
to files such as run-start, run-stop, file-migration, file-pinning
and ultimately file-deletion.
File copying to the...
Dr
Jose Hernandez
(CIEMAT)
23/03/2009, 16:50
Establishing efficient and scalable operations of the CMS distributed
computing system critically relies on the proper integration,
commissioning and scale testing of the data and workload management
tools, the various computing workflows and the underlying computing
infrastructure located at more than 50 computing centres worldwide
interconnected by the Worldwide LHC Computing...
Mr
Matteo Marone
(Universita degli Studi di Torino - Universita & INFN, Torino)
23/03/2009, 16:50
The CMS detector at LHC is equipped with a high precision lead tungstate
crystal electromagnetic calorimeter (ECAL).
The front-end boards and the photodetectors are monitored using a network
of DCU (Detector Control Unit) chips located on the detector electronics.
The DCU data are accessible through token rings controlled by an XDAQ
based software component.
Relevant parameters are...
Dr
Christopher Jones
(Fermi National Accelerator Laboratory)
23/03/2009, 16:50
The CMS Offline framework stores provenance information within CMS's standard ROOT event data files. The provenance information is used to track how every data product was constructed including what other data products were read in order to do the construction. We will present how the framework gathers the provenance information, the efforts necessary to minimize the space used to store the...
David Lawrence
(Jefferson Lab)
23/03/2009, 16:50
Calibrations and conditions databases can be accessed from within the JANA Event Processing framework through the API defined in its JCalibration base class. This system allows constants to be retrieved through a single line
of C++ code with most of the context implied by the run currently being analyzed. The API is designed to support everything from databases, to web
services, to flat files...
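To make the design concrete, here is a minimal self-contained C++ sketch of the kind of interface described above: a calibration back end hidden behind an abstract base class, with the run context supplied once so that user code needs only a single call. The names (CalibBackend, DummyBackend, GetCalib, the name-path string) are invented for illustration and are not the actual JANA JCalibration API.

    // Sketch only: these names stand in for the JCalibration-style design
    // described above; they are not the real JANA API.
    #include <iostream>
    #include <string>
    #include <vector>

    class CalibBackend {                         // stand-in for JCalibration
    public:
        explicit CalibBackend(int run) : run_(run) {}
        virtual ~CalibBackend() {}
        // Fill 'vals' with the constants stored under 'namepath' for the
        // run this back end was created for (DB, web service, flat file...).
        virtual bool GetCalib(const std::string &namepath,
                              std::vector<double> &vals) = 0;
    protected:
        int run_;
    };

    // Trivial in-memory back end used here in place of a real database.
    class DummyBackend : public CalibBackend {
    public:
        explicit DummyBackend(int run) : CalibBackend(run) {}
        virtual bool GetCalib(const std::string &namepath,
                              std::vector<double> &vals) {
            vals.clear();
            vals.push_back(1.01); vals.push_back(0.98); vals.push_back(1.00);
            std::cout << "run " << run_ << ": " << namepath << "\n";
            return true;
        }
    };

    int main() {
        DummyBackend calib(12345);               // run context given once
        std::vector<double> gains;
        calib.GetCalib("CDC/wire_gains", gains); // the "single line" access
        return gains.empty();
    }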
Mr
Levente HAJDU
(BROOKHAVEN NATIONAL LABORATORY)
23/03/2009, 16:50
Processing datasets on the order of tens of terabytes is an onerous task, faced by production coordinators everywhere. Users request data productions and, especially for simulation data, the large number of parameters (and sometimes incomplete requests) points to the need for tracking, controlling and archiving all requests made, so that the production team can handle them in a coordinated manner.
With...
Mr
Xin Zhao
(Brookhaven National Laboratory,USA)
23/03/2009, 17:10
ATLAS Grid production, like many other VO applications, requires the
software packages to be installed on remote sites in advance. Therefore,
a dynamic and reliable system for installing the ATLAS software releases
on Grid sites is crucial to guarantee the timely and smooth start of
ATLAS production and reduce its failure rate.
In this talk, we discuss the issues encountered in the...
Norbert Neumeister
(Purdue University)
23/03/2009, 17:10
We present a Web portal for CMS Grid submission and management. Grid portals can deliver complex grid solutions to users without the need to download, install and maintain specialized software, or to worry about setting up site-specific components. The goal is to reduce the complexity of the user grid experience and to bring the full power of the grid to physicists engaged in LHC analysis...
Giovanni Petrucciani
(SNS & INFN Pisa, CERN)
23/03/2009, 17:10
The CMS Physics Analysis Toolkit (PAT) is presented. The PAT is a high-level analysis layer enabling the development of common analysis efforts across and within Physics Analysis Groups. It aims at fulfilling the needs of most CMS analyses, providing both ease-of-use for the beginner and flexibility for the advanced user. The main PAT concepts are described in detail and some examples from...
Mr
Barthélémy von Haller
(CERN)
23/03/2009, 17:10
ALICE is one of the four experiments installed at the CERN Large Hadron Collider (LHC), especially designed for the study of heavy-ion collisions.
The online Data Quality Monitoring (DQM) is an important part of the data acquisition (DAQ) software. It involves the online gathering, the analysis by user-defined algorithms and the visualization of monitored data.
This paper presents the final...
Dr
Ilse Koenig
(GSI Darmstadt)
23/03/2009, 17:10
Since 2002 the HADES experiment at GSI has employed an Oracle database for storing all parameters relevant for simulation and data analysis. The implementation features flexible, multi-dimensional and easy-to-use version management. Direct interfaces to the ROOT-based analysis and simulation framework HYDRA allow for an automated initialization based on current or historic data which is needed...
Dr
Hannes Sakulin
(European Organization for Nuclear Research (CERN))
23/03/2009, 17:30
The CMS Data Acquisition cluster, which runs around 10000 applications, is configured dynamically at run time. XML configuration documents determine what applications are executed on each node and over what networks these applications communicate. Through this mechanism the DAQ System may be adapted to the required performance, partitioned in order to perform (test-) runs in parallel, or...
Dr
Graeme Andrew Stewart
(University of Glasgow)
23/03/2009, 17:30
The ATLAS Production and Distributed Analysis System (PanDA) is a key
component of the ATLAS distributed computing infrastructure. All ATLAS
production jobs, and a substantial amount of user and group analysis
jobs, pass through the PanDA system which manages their execution on
the grid. PanDA also plays a key role in production task definition
and the dataset replication request system....
Mr
Philippe Canal
(Fermilab)
23/03/2009, 17:30
One of the main strengths of ROOT I/O is its inherent support for schema evolution. Two distinct modes are supported: one manual, via a hand-coded Streamer function, and one fully automatic, via the ROOT StreamerInfo. One drawback of Streamer functions is that they are not usable by TTrees in split mode. Until now, the automatic schema evolution mechanism could not be customized by the...
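For readers unfamiliar with the manual mode, the sketch below shows what a hand-coded Streamer typically looks like; MyClass and its members are hypothetical, while ReadClassBuffer/WriteClassBuffer and the trailing-minus LinkDef pragma are the usual ROOT conventions for classes with a custom Streamer.

    // Minimal sketch of the manual schema-evolution mode (hypothetical class).
    #include "TBuffer.h"
    #include "TObject.h"

    class MyClass : public TObject {
    public:
        Double_t fEnergy;
        Int_t    fNHits;
        MyClass() : fEnergy(0), fNHits(0) {}
        ClassDef(MyClass, 2)   // ClassDef declares the Streamer method
    };

    // The dictionary is told not to generate a Streamer, e.g. with
    //   #pragma link C++ class MyClass-;   // note the trailing '-'
    // so this hand-coded version is used instead.
    void MyClass::Streamer(TBuffer &b)
    {
        if (b.IsReading()) {
            b.ReadClassBuffer(MyClass::Class(), this);
            // Manual migration from older class versions would go here.
        } else {
            b.WriteClassBuffer(MyClass::Class(), this);
        }
    }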
Dr
James Letts
(Department of Physics-Univ. of California at San Diego (UCSD))
23/03/2009, 17:50
During normal data taking CMS expects to support potentially as many as 2000 analysis users. In 2008 there were more than 800 individuals who submitted a remote analysis job to the CMS computing infrastructure. The bulk of these users will be supported at the over 40 CMS Tier-2 centers. Supporting a globally distributed community of users on a globally distributed set of computing clusters is...
Mr
Pavel JAKL
(Nuclear Physics Inst., Academy of Sciences, Praha)
23/03/2009, 17:50
Any experiment facing petabyte-scale problems needs a highly scalable mass storage system (MSS) to keep a permanent copy of its valuable data. But beyond the permanent storage aspects, the sheer amount of data makes complete dataset availability on “live storage” (centralized or aggregated space such as that provided by Scalla/Xrootd) cost prohibitive, implying that a dynamic...
Giovanni Polese
(Lappeenranta Univ. of Technology)
23/03/2009, 17:50
The Resistive Plate Chamber system is composed
of 912 double-gap chambers equipped with about 10^4 front-end
boards. The correct and safe operation of the RPC system
requires a sophisticated and complex online Detector Control
System, able to monitor and control 10^4 hardware devices
distributed on an area of about 5000 m^2. The RPC DCS acquires,
monitors and stores about 10^5 parameters...
Dr
Frank Gaede
(DESY IT)
23/03/2009, 17:50
The International Linear Collider is the next large accelerator project in
High Energy Physics.
The ILD detector concept is being developed by one of three international working groups
that are working on detector concepts for the ILC. It was created by merging the two
concept studies LDC and GLD in 2007.
ILD uses a modular C++ application framework (Marlin) that is
based on the international...
Dr
Douglas Smith
(STANFORD LINEAR ACCELERATOR CENTER)
23/03/2009, 18:10
The Babar experiment has been running at the SLAC National Accelerator
Laboratory for the past nine years, and has collected 500 fb-1 of data.
The final data run for the experiment finished in April 2008. Once
data taking finished, the final processing of all Babar data was started.
This was the largest computing production effort in the history of
Babar, including a reprocessing of...
Alina Corso-Radu
(University of California, Irvine)
23/03/2009, 18:10
ATLAS is one of the four experiments at the Large Hadron Collider (LHC) at CERN, which was put into operation this year. The challenging experimental environment and the extreme detector complexity required the development of a highly scalable distributed monitoring framework, which is currently being used to monitor the quality of the data being taken as well as the operational conditions of the...
Prof.
Harvey Newman
(Caltech)
23/03/2009, 18:10
I will review the status, outlook, recent technology trends and
state-of-the-art developments in the major networks serving the
high energy physics community in the LHC era.
I will also cover the progress in reducing or closing the Digital Divide
separating scientists in several world regions from the mainstream,
from the perspective of the ICFA Standing Committee on
Inter-regional Connectivity.
Andressa Sivolella Gomes
(Universidade Federal do Rio de Janeiro (UFRJ))
23/03/2009, 18:10
The ATLAS detector consists of four major components: inner tracker, calorimeter, muon
spectrometer and magnet system. In the Tile Calorimeter (TileCal), there are 4 partitions, each partition
has 64 modules and each module has up to 48 channels. During the ATLAS commissioning phase, a
group of physicists needs to analyze the Tile Calorimeter data quality, generate reports and update...
Mr
Aatos Heikkinen
(Helsinki Institute of Physics, HIP)
24/03/2009, 08:00
We present a new Geant4 physics list prepared for nuclear physics applications
in the domain dominated by spallation.
We discuss new Geant4 models based on the translation of the
INCL intra-nuclear cascade and ABLA de-excitation codes into C++,
which are used in the physics list.
The INCL model is well established for targets heavier than Aluminium
and projectile energies from ~ 150 MeV up to 2.5...
Dimosthenis Sokaras
(N.C.S.R. Demokritos, Institute of Nuclear Physics)
24/03/2009, 08:00
Well established values for the X-ray fundamental parameters (fluorescence yields, characteristic lines branching ratios, mass absorption coefficients, etc.) are very important but not adequate for an accurate reference-free quantitative X-Ray Fluorescence (XRF) analysis. Secondary ionization processes following photon induced primary ionizations in matter may contribute significantly to the...
Karsten Koeneke
(Deutsches Elektronen-Synchrotron (DESY))
24/03/2009, 08:00
In the commissioning phase of the ATLAS experiment, low-level Event Summary Data (ESD) are analyzed to evaluate the performance of the individual subdetectors, the performance of the reconstruction and particle identification algorithms, and obtain calibration coefficients. In the GRID model of distributed analysis, these data must be transferred to Tier-1 and Tier-2 sites before they can be...
Dr
Rudi Frühwirth
(Institut fuer Hochenergiephysik (HEPHY)-Oesterreichische Akademi)
24/03/2009, 08:00
Reconstruction of interaction vertices is an essential step in the reconstruction chain of a modern collider experiment such as CMS; the primary ("collision") vertex is reconstructed in every
event within the CMS reconstruction program, CMSSW.
However, the task of finding and fitting secondary ("decay") vertices also plays an important role in several physics cases such as the reconstruction...
Jan Amoraal
(NIKHEF),
Wouter Hulsbergen
(NIKHEF)
24/03/2009, 08:00
We report on an implementation of a global chi-square algorithm
for the simultaneous alignment of all tracking systems in the
LHCb detector. Our algorithm uses hit residuals from the
standard LHCb track fit which is based on a Kalman filter. The
algorithm is implemented in the LHCb reconstruction framework
and exploits the fact that all sensitive detector elements have
the same geometry...
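Schematically, and in generic notation rather than that of the talk, a global chi-square alignment of this kind minimizes the summed residual contributions of all tracks and solves the resulting linear system for the alignment corrections:

  \begin{align}
    \chi^2(\alpha) &= \sum_{\mathrm{tracks}} r^{T}(\alpha)\, V^{-1} r(\alpha), \\
    r(\alpha) &\simeq r_0 + A\,\delta\alpha, \qquad A \equiv \partial r / \partial \alpha, \\
    \frac{\partial \chi^2}{\partial \alpha} = 0 \;&\Rightarrow\;
      \Bigl(\sum_{\mathrm{tracks}} A^{T} V^{-1} A\Bigr)\,\delta\alpha
      = -\sum_{\mathrm{tracks}} A^{T} V^{-1} r_0 ,
  \end{align}

where $r$ are the track-hit residuals delivered by the Kalman-filter track fit, $V$ their covariance matrix, and $\alpha$ the alignment parameters.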
Dr
Edmund Widl
(Institut für Hochenergiephysik (HEPHY Vienna))
24/03/2009, 08:00
One of the main components of the CMS experiment is the Inner Tracker. This device, designed to measure the trajectories of charged particles, is composed of approximately 16,000 planar silicon detector modules, which makes it the biggest of its kind. However, systematic measurement errors, caused by unavoidable inaccuracies in the construction and assembly phase, reduce the precision of the...
Stefan Kluth
(Max-Planck-Institut für Physik)
24/03/2009, 08:00
We show how the ATLAS offline software is ported to the Amazon Elastic Compute Cloud (EC2). We prepare an Amazon Machine Image (AMI) on the basis of the standard ATLAS platform Scientific Linux 4 (SL4). Then an instance of the SLC4 AMI is started on EC2 and we install and validate a recent release of the ATLAS offline software distribution kit. The installed software is archived as an image...
Dr
David Lawrence
(Jefferson Lab)
24/03/2009, 08:00
Automatic ROOT tree creation is achieved in the JANA
Event Processing Framework through a special plugin.
The janaroot plugin can automatically define a TTree
from the data objects passed through the framework
without using a ROOT dictionary. Details on how this
is achieved as well as possible applications will be
presented.
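The underlying ROOT mechanism that makes dictionary-free trees possible is branch creation with leaf-list descriptors for basic types. The sketch below is a self-contained illustration of that mechanism only, not the janaroot plugin itself; the file, tree and branch names are invented for the example.

    // Branches of basic types declared at run time with leaf-list strings:
    // no class dictionary is needed.
    #include "TFile.h"
    #include "TTree.h"

    int main()
    {
        TFile f("example.root", "RECREATE");
        TTree tree("T", "dictionary-free tree");

        // Suppose the framework hands us a data object with two numeric
        // fields; we expose them branch by branch.
        double energy = 0;
        int    nhits  = 0;
        tree.Branch("energy", &energy, "energy/D");
        tree.Branch("nhits",  &nhits,  "nhits/I");

        for (int i = 0; i < 100; ++i) {   // stand-in for the event loop
            energy = 0.5 * i;
            nhits  = i % 7;
            tree.Fill();
        }
        tree.Write();
        f.Close();
        return 0;
    }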
Robert Petkus
(Brookhaven National Laboratory)
24/03/2009, 08:00
Gluster, a free cluster file-system scalable to several peta-bytes, is under evaluation at the RHIC/USATLAS Computing Facility. Several production SunFire x4500 (Thumper) NFS servers were dual-purposed as storage bricks and aggregated into a single parallel file-system using TCP/IP as an interconnect. Armed with a paucity of new hardware, the objective was to simultaneously allow traditional...
Dr
Peter Kreuzer
(RWTH Aachen IIIA)
24/03/2009, 08:00
The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analysis requiring fast-turnaround. In addition to the low latency requirement on the batch farm, another mandatory condition is the efficient access to the RAW...
Andrea Di Simone
(INFN Roma2)
24/03/2009, 08:00
Resistive Plate Chambers (RPC) are used in ATLAS to provide the first
level muon trigger in the barrel region. The total size of the system is
about 16000 m2, read out by about 350000 electronic channels.
In order to reach the needed trigger performance, a precise knowledge of
the detector working point is necessary, and the high number of readout
channels calls for severe requirements on...
Dr
Silvia Maselli
(INFN Torino)
24/03/2009, 08:00
The calibration process of the Barrel Muon DT System of CMS as developed and tuned during the recent cosmic data run is presented. The calibration data reduction method, the full work flow of the procedure and final results are presented for real and simulated data.
Mr
James Jackson
(H.H. Wills Physics Laboratory - University of Bristol)
24/03/2009, 08:00
The UK LCG Tier-1 computing centre located at the Rutherford Appleton Laboratory is responsible for the custodial storage and processing of the raw data from all four LHC experiments; CMS, ATLAS, LHCb and ALICE. The demands of data import, processing, export and custodial tape archival place unique requirements on the mass storage system used. The UK Tier-1 uses CASTOR as the storage...
Rodrigo Sierra Moral
(CERN)
24/03/2009, 08:00
Scientists all over the world collaborate with the CERN laboratory day by day. They must be able to communicate effectively on their joint projects at any time, so telephone conferences become indispensable and widely used. The traditional conference system, managed by 6 switchboard operators, was hosting more than 20000 hours and 5500 conferences per year. However, the system needed to be...
Mr
Carlos Ghabrous
(CERN)
24/03/2009, 08:00
As a result of the tremendous development of GSM services over the last years, the number of related services used by organizations has drastically increased. Therefore, monitoring GSM services is becoming a business critical issue in order to be able to react appropriately in case of incident.
In order to provide GSM coverage in all the CERN underground facilities, more than 50 km of...
Dr
Lucas Taylor
(Northeastern U., Boston)
24/03/2009, 08:00
The CMS Experiment at the LHC is establishing a global network of inter-connected "CMS Centres" for controls, operations and monitoring at CERN, Fermilab, DESY and a number of other sites in Asia, Europe, Russia, South America, and the USA.
"ci2i" ("see eye to eye") is a generic Web tool, using Java and Tomcat, for managing: hundreds of displays screens in many locations; monitoring...
Mr
Stuart Wakefield
(Imperial College)
24/03/2009, 08:00
ProdAgent is a set of tools to assist in producing various data products such as Monte Carlo simulation, prompt reconstruction, re-reconstruction and skimming.
In this paper we briefly discuss the ProdAgent architecture, and focus on the experience in using this system in recent computing challenges, feedback from these challenges, and future work. The computing challenges have proven...
Johanna Fleckner
(CERN / University of Mainz)
24/03/2009, 08:00
T Cornelissen on behalf of the ATLAS inner detector software group
Several million cosmic tracks were recorded during the combined ATLAS runs in Autumn of 2008. Using these cosmic ray events as well as first beam events, the software infrastructure of the inner detector of the ATLAS experiment (pixel and microstrip silicon detectors as well as straw tubes with additional transition...
David Futyan
(Imperial College, University of London)
24/03/2009, 08:00
The CMS experiment has developed a powerful framework to ensure the
precise and prompt alignment and calibration of its components, which is a major prerequisite to achieve the optimal performance for physics analysis. The prompt alignment and calibration strategy harnesses computing resources both at the Tier-0 site and the CERN Analysis Facility (CAF) to ensure fast turnaround for updating...
Mr
Gheni Abla
(General Atomics)
24/03/2009, 08:00
Increasing utilization of the Internet and convenient web technologies has made the web-portal a major application interface for remote participation and control of scientific instruments. While web-portals have provided a centralized gateway for multiple computational services, the amount of visual output often is overwhelming due to the high volume of data generated by complex scientific...
Sunanda Banerjee
(Fermilab, USA)
24/03/2009, 08:00
CMS is looking forward to tune detector simulation using the forthcoming collision data from LHC. CMS established a task force in February 2008 in order to understand and reconcile the discrepancies observed between the CMS calorimetry simulation and the test beam data recorded during 2004 and 2006. Within this framework, significant effort has been made to develop a strategy of tuning fast...
Robert Petkus
(Brookhaven National Laboratory)
24/03/2009, 08:00
Over the last two years, the USATLAS Computing Facility at BNL has managed a highly performant, reliable, and cost-effective dCache storage cluster using SunFire x4500/4540 (Thumper/Thor) storage servers. The design of a discrete storage cluster signaled a departure from a model where storage resides locally on a disk-heavy compute farm. The consequent alteration of data flow mandated a...
Prof.
Gordon Watts
(UNIVERSITY OF WASHINGTON)
24/03/2009, 08:00
Particle physics conferences lasting a week (like CHEP) can have hundreds of talks and posters presented. Current conference web interfaces (like Indico) are well suited to finding a talk by author or by time-slot. However, browsing the complete material in a modern large conference is not user-friendly. Browsing involves continually making the expensive transition between HTML viewing and...
Dr
Martin Aleksa (for the LAr conference committee)
(CERN)
24/03/2009, 08:00
The Liquid Argon (LAr) calorimeter is a key detector component in the ATLAS experiment at the LHC, designed to provide precision measurements of electrons, photons, jets and missing transverse energy. A critical element in the precision measurement is the electronic calibration.
The LAr calorimeter has been installed in the ATLAS cavern and filled with liquid argon since 2006. The...
Mrs
Elisabetta Ronchieri
(INFN CNAF)
24/03/2009, 08:00
Many High Energy Physics experiments must share and transfer large volumes of data. Therefore, the maximization of data throughput is a key issue, requiring detailed analysis and setup optimization of the underlying infrastructure and services. In Grid computing, the data transfer protocol called GridFTP is widely used for efficiently transferring data in conjunction with various types of file...
Marc Deissenroth
(Universität Heidelberg)
24/03/2009, 08:00
We report results obtained with different track-based
algorithms for the alignment of the LHCb detector with first
data. The large-area Muon Detector and Outer Tracker have been
aligned with a large sample of tracks from cosmic rays. The
three silicon detectors --- VELO, TT-station and Inner Tracker
--- have been aligned with beam-induced events from the LHC
injection line. We compare...
Dr
Pablo Cirrone
(INFN-LNS)
24/03/2009, 08:00
Geant4 is a Monte Carlo toolkit describing transport and interaction of particles with matter. Geant4 covers all particles and materials, and its geometry description allows for complex geometries.
Initially focused on high energy applications, the use of Geant4 is also growing in other fields, such as radioprotection, dosimetry, space radiation and external radiotherapy with proton and carbon...
Luca Lista
(INFN Sezione di Napoli)
24/03/2009, 08:00
We present a parser to evaluate expressions and boolean selections that is applied to CMS event data for event filtering and analysis purposes. The parser is based on a Boost Spirit grammar definition and uses the Reflex dictionary for class introspection. The parser allows a natural definition of expressions and cuts in user configurations, and provides good run-time performance compared to...
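To give a flavour of a Spirit-based grammar, the self-contained toy below parses a simple cut such as "pt > 20.5" into a variable name and a threshold. It is only an illustration of the parsing technique; the real CMS grammar is far richer and additionally resolves object members through the Reflex dictionary.

    #include <boost/spirit/include/qi.hpp>
    #include <iostream>
    #include <string>

    namespace qi = boost::spirit::qi;

    int main()
    {
        const std::string cut = "pt > 20.5";
        std::string variable;
        double threshold = 0;

        std::string::const_iterator it = cut.begin(), end = cut.end();
        // Grammar: a name, a '>' (discarded), and a number, skipping spaces.
        const bool ok = qi::phrase_parse(
            it, end,
            qi::lexeme[+qi::alpha] >> qi::omit[qi::char_('>')] >> qi::double_,
            qi::space,
            variable, threshold);

        if (ok && it == end)
            std::cout << "cut on '" << variable
                      << "' with threshold " << threshold << "\n";
        return ok ? 0 : 1;
    }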
Douglas Orbaker
(University of Rochester)
24/03/2009, 08:00
The experiments at the Large Hadron Collider (LHC) will start their search for answers to some of the remaining puzzles of particle physics in 2008. All of these experiments rely on a very precise Monte Carlo Simulation of the physical and technical processes in the detectors.
A fast simulation has been developed within the CMS experiment, which is between 100-1000 times faster than its...
Lorenzo Moneta
(CERN), Prof.
Nikolai GAGUNASHVILI
(University of Akureyri, Iceland)
24/03/2009, 08:00
Weighted histograms are often used for the estimation of probability density functions in High Energy Physics. The bin contents of a weighted histogram can be considered as a sum of random variables with a random number of terms. A generalization of the Pearson chi-square statistic for weighted histograms and for weighted histograms with unknown normalization has recently been proposed...
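For orientation, the standard ingredients that such a generalization starts from are the ordinary Pearson statistic for an unweighted histogram and the usual estimates of content and variance for a weighted bin (the talk's generalized statistic itself is not reproduced here):

  \begin{align}
    X^2 &= \sum_{i=1}^{m} \frac{(n_i - n p_i)^2}{n p_i}
           \;\sim\; \chi^2_{m-1}
           \quad \text{(unweighted histogram, $m$ bins, $n$ entries)}, \\
    W_i &= \sum_{k \in \text{bin } i} w_k ,
    \qquad
    \widehat{\mathrm{Var}}(W_i) = \sum_{k \in \text{bin } i} w_k^{2} .
  \end{align}

It is the weighted contents $W_i$ and their estimated variances that enter the generalized test statistic.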
Prof.
Vladimir Ivantchenko
(CERN, ESA)
24/03/2009, 08:00
The process of multiple scattering of charged particles is an important component of Monte Carlo transport. At high energy it determines the deviation of particles from ideal tracks and limits the spatial resolution. Multiple scattering of low-energy electrons determines the energy response and resolution of electromagnetic calorimeters. Recent progress in the development of multiple scattering models within...
Ian Gable
(University of Victoria)
24/03/2009, 08:00
Virtualization technologies such as Xen can be used in order to satisfy the disparate and often incompatible system requirements of different user groups in shared-use computing facilities. This capability is particularly important for HEP applications, which often have restrictive requirements. The use of virtualization adds flexibility; however, it is essential that the virtualization...
Cano Ay
(University of Goettingen)
24/03/2009, 08:00
HepMCAnalyser is a tool for generator validation and comparisons.
It is a stable, easy-to-use and extendable framework
allowing easy access to and integration of generator-level analyses.
It comprises a class library with benchmark physics processes to analyse
HepMC generator output and to fill ROOT histograms. A web interface is
provided to display all or selected histograms, compare...
Dr
Federico Calzolari
(Scuola Normale Superiore - INFN Pisa)
24/03/2009, 08:00
High availability has always been one of the main problems for a data center. Until now, high availability has been achieved by per-host redundancy, a highly expensive method in terms of hardware and human costs. A new approach to the problem can be offered by virtualization.
Using virtualization, it is possible to achieve a redundancy system for all the services running on a data center. This...
Simon Taylor
(Jefferson Lab)
24/03/2009, 08:00
The future GlueX detector in Hall D at Jefferson Lab is a large acceptance (almost 4pi) spectrometer
designed to facilitate the study of the excitation of the gluonic field
binding quark--anti-quark pairs into mesons.
A large solenoidal magnet will provide a 2.2-Tesla field that will be used
to momentum-analyze the charged particles emerging from a liquid hydrogen
target. The...
Kati Lassila-Perini
(Helsinki Institute of Physics HIP)
24/03/2009, 08:00
Complete and up-to-date documentation is essential for efficient data analysis in a large and complex collaboration like CMS. Good documentation reduces the time spent in problem solving
for users and software developers.
The scientists in our research environment do not necessarily have the interests or skills of professional technical writers. This results in inconsistencies in the...
Mrs
Ianna Osborne
(NORTHEASTERN UNIVERSITY)
24/03/2009, 08:00
Geneva, 10 September 2008. The first beam in the Large Hadron Collider at CERN was successfully steered around the full 27 kilometers of the world's most powerful particle accelerator at 10h28 this morning. This historic event marks a key moment in the transition from over two decades of preparation to a new era of scientific discovery. (http://www.interactions.org/cms/?pid=1026796)
From...
Dr
Monica Verducci
(INFN RomaI)
24/03/2009, 08:00
ATLAS is a large multipurpose detector, presently in the final phase of construction at LHC, the CERN Large Hadron Collider accelerator. In ATLAS the muon detection is performed by a huge magnetic spectrometer, built with the Monitored Drift Tube (MDT) technology. It consists of more than 1,000 chambers and 350,000 drift tubes, which have to be controlled to a spatial accuracy better than 10...
Mitja Majerle
(Nuclear Physics institute AS CR, Rez)
24/03/2009, 08:00
Monte Carlo codes MCNPX and FLUKA are used to analyze the experiments on
simplified Accelerator Driven Systems, which are performed at the Joint
Institute for Nuclear Research Dubna. At the experiments, protons or
deuterons with the energy in the GeV range are directed to thick, lead
targets surrounded by different moderators and neutron multipliers. Monte
Carlo simulations of these...
Dr
David Lawrence
(Jefferson Lab)
24/03/2009, 08:00
Multi-threading is a tool that is not only well suited to high statistics
event analysis, but is particularly useful for taking advantage of the
next generation many-core CPUs. The JANA event processing framework has
been designed to implement multi-threading through the use of POSIX
threads. Thoughtful implementation allows reconstruction packages to be
developed that are thread enabled...
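A minimal self-contained illustration of the general pattern, event-level parallelism with POSIX threads in which each worker pulls events from a shared source under a mutex, is sketched below; it is not JANA code and all names are invented for the example.

    #include <pthread.h>
    #include <cstdio>

    static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
    static int next_event = 0;
    static const int N_EVENTS = 1000;

    static void *worker(void *arg)
    {
        long id = (long)arg;
        int processed = 0;
        for (;;) {
            // Fetch the next event index under the lock.
            pthread_mutex_lock(&mtx);
            int evt = (next_event < N_EVENTS) ? next_event++ : -1;
            pthread_mutex_unlock(&mtx);
            if (evt < 0) break;
            // ... run an independent reconstruction chain for event 'evt' ...
            ++processed;
        }
        std::printf("thread %ld processed %d events\n", id, processed);
        return 0;
    }

    int main()
    {
        const int nthreads = 4;
        pthread_t t[nthreads];
        for (long i = 0; i < nthreads; ++i)
            pthread_create(&t[i], 0, worker, (void *)i);
        for (int i = 0; i < nthreads; ++i)
            pthread_join(t[i], 0);
        return 0;
    }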
Dr
Rosy Nikolaidou
(CEA Saclay)
24/03/2009, 08:00
ATLAS is one of the four experiments at the Large Hadron Collider (LHC) at CERN. This experiment has been designed to study a large range of physics including searches for previously unobserved phenomena such as the Higgs Boson and super-symmetry. The ATLAS Muon Spectrometer (MS) is optimized to measure final state muons in a large momentum range, from a few GeV up to TeV. Its momentum...
Mr
Igor Mandrichenko
(FNAL)
24/03/2009, 08:00
Fermilab is a high energy physics research lab that maintains a highly dynamic
network which typically supports around 15,000 active nodes.
Due to the open nature of the scientific research conducted at FNAL,
the portion of the network used to support open scientific research
requires high bandwidth connectivity to numerous collaborating institutions
around the world, and must...
Dr
Yaodong CHENG
(Institute of High Energy Physics,Chinese Academy of Sciences)
24/03/2009, 08:00
Some large experiments at IHEP will generate more than 5 petabytes of data in the next few years, which brings great challenges for data analysis and storage. CERN CASTOR version 1 was first deployed at IHEP in 2003, but now it is difficult to meet the new requirements. Taking into account the issues of management, commercial software etc., we decided not to upgrade CASTOR from version 1 to version 2....
Dr
Peter Van Gemmeren
(Argonne National Laboratory)
24/03/2009, 08:00
In ATLAS software, TAGs are event metadata records that can be stored in various technologies, including ROOT files and relational databases. TAGs are used to identify and extract events that satisfy certain selection predicates, which can be coded as SQL-style queries.
Several new developments in file-based TAG infrastructure are presented.
TAG collection files support in-file metadata...
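To make the selection model concrete, the self-contained ROOT sketch below applies an SQL-style predicate to a file-based TAG tree and obtains the list of matching events; the file name, tree name and attribute names are purely illustrative and do not reproduce the actual ATLAS TAG schema.

    #include "TDirectory.h"
    #include "TEventList.h"
    #include "TFile.h"
    #include "TTree.h"
    #include <iostream>

    int main()
    {
        TFile f("tags.root");                    // hypothetical TAG file
        TTree *tags = 0;
        f.GetObject("CollectionTree", tags);     // hypothetical tree name
        if (!tags) return 1;

        // SQL-style predicate expressed as a ROOT selection string; the
        // matching entries are collected into a named TEventList.
        tags->Draw(">>selected", "NLooseMuon > 0 && MissingET > 20000");

        TEventList *list = (TEventList *)gDirectory->Get("selected");
        std::cout << list->GetN() << " events satisfy the predicate\n";
        return 0;
    }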
Andreu Pacheco
(IFAE Barcelona),
Davide Costanzo
(University of Sheffield),
Iacopo Vivarelli
(INFN and University of Pisa),
Manuel Gallas
(CERN)
24/03/2009, 08:00
The ATLAS experiment recently entered the data taking phase, with the
focus shifting from software development to validation.
The ATLAS software has to be robust enough to process large datasets and
to produce the high-quality output needed for the experiment's scientific
exploitation. The validation process is discussed in this talk,
starting from the validation of the nightly builds and...
Keith Rose
(Dept. of Physics and Astronomy-Rutgers, State Univ. of New Jerse)
24/03/2009, 08:00
The silicon pixel detector in CMS contains approximately 66 million
channels, and will provide extremely high tracking resolution for the experiment. To ensure the data collected is valid, it must be monitored continuously at all levels of acquisition and reconstruction. The Pixel Data Quality Monitoring process ensures that the detector, as well as the data acquisition and reconstruction...
Dr
Alessandra Doria
(INFN Napoli)
24/03/2009, 08:00
The large potential storage and computing power available in the modern grid and data centre infrastructures enable the development of the next generation grid-based computing paradigm, in which a large number of clusters are interconnected through high speed networks. Each cluster is composed of several or often hundreds of computers and devices each with its own specific role in the grid. In...
Dr
Maria Grazia Pia
(INFN GENOVA)
24/03/2009, 08:00
An R&D project, named NANO5, has recently been launched at INFN to address fundamental methods in radiation transport simulation and revisit the Geant4 kernel design to cope with new experimental requirements.
The project, which gathers an international collaborating team, focuses on simulation at different scales in the same environment. This issue requires novel methodological approaches to...
Mr
Danilo Piparo
(Universitaet Karlsruhe)
24/03/2009, 08:00
RSC is a software framework based on the RooFit technology, developed for the CMS experiment community, whose scope is to allow the modelling and combination of multiple analysis channels together with the performance of statistical studies. This is achieved through a variety of methods described in the literature, implemented as classes. The design of these classes is oriented to the...
Luca Dell'Agnello
(INFN)
24/03/2009, 08:00
In the framework of WLCG, the Tier-1 computing centres have
very stringent requirements in the sector of the data storage,
in terms of size, performance and reliability.
For some years, at the INFN-CNAF Tier-1 we have been using
two distinct storage systems: Castor as tape-based storage
solution (also known as the D0T1 storage class in the WLCG language) and the General Parallel...
Dr
Szymon Gadomski
(DPNC, University of Geneva)
24/03/2009, 08:00
Computing for ATLAS in Switzerland has two Tier-3 sites with several years of experience, owned by the Universities of Berne and Geneva. They have been used for ATLAS Monte Carlo production, centrally controlled via the NorduGrid, since 2005. The Tier-3 sites are under continuous development.
In the case of Geneva, the proximity of CERN leads to additional use cases, related to commissioning of...
Prof.
Gordon Watts
(UNIVERSITY OF WASHINGTON), Dr
Laurent Vacavant
(CPPM)
24/03/2009, 08:00
The ATLAS detector, one of the two collider experiments at the Large Hadron Collider, will take high energy collision data for the first time in 2009. A general purpose detector, its physics program encompasses everything from Standard Model physics to specific searches for beyond-the-standard-model signatures. One important aspect of separating the signal from large Standard Model backgrounds...
John Chapman
(Dept. of Physics, Cavendish Lab.)
24/03/2009, 08:00
The ATLAS digitization project is steered by a top-level PYTHON digitization package which ensures uniform and consistent configuration across the subdetectors. The properties of the digitization algorithms were tuned to reproduce the detector response seen in lab tests, test beam data and cosmic ray running. Dead channels and noise rates are read from database tables to reproduce conditions...
Simone Frosali
(Dipartimento di Fisica - Universita di Firenze)
24/03/2009, 08:00
The CMS Silicon Strip Tracker (SST) consists of 25000 silicon microstrip sensors covering an area of 210 m2, with 10 million readout channels. Starting from December 2007 the SST has been inserted and connected inside the CMS experiment, and since summer 2008 it has been commissioned using cosmic muons with and without magnetic field. During this data taking the performance of the SST has been...
Dr
Gabriele Benelli
(CERN PH Dept (for the CMS collaboration))
24/03/2009, 08:00
The demanding computing needs of the CMS experiment require thoughtful planning and management of its computing infrastructure. A key factor in this process is the use of realistic benchmarks when assessing the computing power of the different architectures available. In recent years a discrepancy has been observed between the cpu performance estimates given by the reference benchmark for HEP...
Roberto Valerio
(Cinvestav Unidad Guadalajara)
24/03/2009, 08:00
Decision tree learning constitutes a suitable approach to classification due to its ability to partition the input (variable) space into regions of class-uniform events, while providing a structure amenable to interpretation (as opposed to other methods such as neural networks). But an inherent limitation of decision tree learning is the progressive lessening of the statistical support of the...
Dr
Ma Xiang
(Institute of High energy Physics, Chinese Academy of Sciences)
24/03/2009, 08:00
The BEPCII/BESIII (Beijing Electron Positron Collider / Beijing Spectrometer) was installed and operated successfully in July 2008 and has been in commissioning since September 2008. The luminosity has now reached 1.3*10^32 cm-2 s-1 at 489 mA * 530 mA with 90 bunches. About 13M psi(2S) physics events have been collected by BESIII.
The offline data analysis system of BESIII has been tested and operated to handle...
Rodrigues Figueiredo Eduardo
(University Glasgow)
24/03/2009, 08:00
The reconstruction of charged particles in the LHCb tracking
systems consists of two parts. The pattern recognition links
the signals belonging to the same particle. The track fitter
running after the pattern recognition extracts the best
parameter estimate out of the reconstructed tracks. A dedicated
Kalman-Fitter is used for this purpose. The track model
employed in the fit is based on...
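For reference, the Kalman-filter recursion used in such track fits alternates prediction and update steps of the standard form, written here in generic notation: $x$ is the track state vector, $C$ its covariance, $F$ the transport (propagation) matrix, $Q$ the process noise from material effects, and $m$ a measurement with projection matrix $H$ and covariance $V$.

  \begin{align}
    x_{k|k-1} &= F_k\, x_{k-1|k-1}, &
    C_{k|k-1} &= F_k\, C_{k-1|k-1}\, F_k^{T} + Q_k, \\
    K_k &= C_{k|k-1} H_k^{T}\bigl(V_k + H_k C_{k|k-1} H_k^{T}\bigr)^{-1}, \\
    x_{k|k} &= x_{k|k-1} + K_k\bigl(m_k - H_k\, x_{k|k-1}\bigr), &
    C_{k|k} &= (I - K_k H_k)\, C_{k|k-1}.
  \end{align}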
Xie Yuguang
(Institute of High energy Physics, Chinese Academy of Sciences)
24/03/2009, 08:00
The new spectrometer for the challenging physics in the tau-charm energy region, BESIII, has been constructed and has entered the commissioning phase at BEPCII, the upgraded e+e- collider with peak luminosity up to 10^33 cm^-2 s^-1 in Beijing, China. The BESIII muon detector will mainly contribute to distinguishing muons from hadrons, especially pions. The Resistive Plate Chambers (RPCs)...
Andrea Dotti
(INFN and Università Pisa)
24/03/2009, 08:00
The challenging experimental environment and the extreme complexity of modern high-energy physics experiments make online monitoring an essential tool to assess the quality of the acquired data.
The Online Histogram Presenter (OHP) is the ATLAS tool to display histograms produced by the online monitoring system. In spite of the name, the Online Histogram Presenter is much more than just a...
Mr
Gilbert Grosdidier
(LAL/IN2P3/CNRS)
24/03/2009, 08:00
The study and design of a very ambitious petaflop cluster exclusively dedicated to Lattice QCD simulations started in early ’08 among a consortium of 7 laboratories (IN2P3, CNRS, INRIA, CEA) and 2 SMEs. This consortium received a grant from the French ANR agency in July, and the PetaQCD project kickoff is expected to take place in January ’09. Building upon several years of fruitful...
Zachary Marshall
(Caltech, USA & Columbia University, USA)
24/03/2009, 08:00
The Simulation suite for ATLAS is in a mature phase ready to cope with the challenge of the 2009 data. The simulation framework already integrated in the ATLAS framework (Athena) offers a set of pre-configured applications for full ATLAS simulation, combined test beam setups, cosmic ray setups and old standalone test-beams. Each detector component was carefully described in all details and...
Fred Luehring
(Indiana University)
24/03/2009, 08:00
The ATLAS Experiment, with over 2000 collaborators, needs efficient and effective means of communicating information. The Collaboration has been using the TWiki Web at CERN for over three years and now has more than 7000 web pages, some of which are protected. This number greatly exceeds the number of “static” HTML pages, and in the last year, there has been a significant migration to the...
Dr
Peter Speckmayer
(CERN)
24/03/2009, 08:00
The toolkit for multivariate analysis, TMVA, provides a large set of advanced multivariate analysis techniques for signal/background classification. In addition, TMVA now also contains regression analysis, all embedded in a framework capable of handling the pre-processing of the data and the evaluation of the output, thus allowing a simple and convenient use of multivariate techniques. The...
Mr
Andrey Lebedev
(GSI, Darmstadt / JINR, Dubna)
24/03/2009, 08:00
The Compressed Baryonic Matter (CBM) experiment at the future FAIR accelerator at Darmstadt is being designed for a comprehensive measurement of hadron and lepton production in heavy-ion collisions from 8-45 AGeV beam energy, producing events with large track multiplicity and high hit density. The setup consists of several detectors including as tracking detectors the silicon tracking system...
Mr
Bruno Lenzi
(CEA - Saclay)
24/03/2009, 08:00
Muons in the ATLAS detector are reconstructed by combining the information from the Inner Detector and the Muon Spectrometer (MS), located in the outermost part of the experiment. Until they reach the MS, muons typically traverse 100 radiation lengths (X0) of material, most of it instrumented by the electromagnetic and hadronic calorimeters.
The proper account for multiple scattering and...
Dr
Ingo Fröhlich
(Goethe-University)
24/03/2009, 08:00
Since experimental setups are usually not able to cover the
complete solid angle, event generators are very important tools for
experiments. Here, theoretical calculations provide valuable input as they
can describe specific distributions for parts of the kinematic variables very
precisely. The caveat is that an event has several degrees of freedom
which can be...
Prof.
Vladimir Ivantchenko
(CERN, ESA)
24/03/2009, 08:00
The standard electromagnetic physics packages of Geant4 are used for simulation of particle transport and HEP detector response. The requirements on the precision and stability of the computations are strong; for example, the calorimeter response for ATLAS and CMS should be reproduced to well within 1%. To maintain and control the long-term quality of the package, software suites for validation and...
Dr
Tomasz Szumlak
(Glasgow)
24/03/2009, 08:00
The LHCb experiment is dedicated to studying CP violation and rare decays phenomena.
In order to achieve these physics goals precise tracking and vertexing around
the interaction point is crucial. This is provided by the VELO (VErtex LOcator)
silicon detector. After digitization, large FPGAs are employed to run several
algorithms to suppress noise and reconstruct clusters. This is...
Christian Helft
(LAL/IN2P3/CNRS)
24/03/2009, 08:00
IN2P3, the institute bringing together HEP laboratories in France alongside CEA's IRFU, opened a videoconferencing service in 2002 based on an H.323 MCU. This service has grown steadily since then, serving French communities other than the HEP one, and now reaches an average of about 30 different conferences a day. The relatively small amount of manpower that has been devoted to this project can be...
Prof.
Martin Sevior
(University of Melbourne)
24/03/2009, 10:00
The SuperBelle project to increase the Luminosity of the KEKB collider
by a factor 50 will search for Physics beyond the Standard Model through
precision measurements and the investigation of rare processes in
Flavour Physics. The data rate expected from the experiment is
comparable to a current era LHC experiment with commensurate Computing
needs. Incorporating commercial cloud...
Dr
Steve Pawlowski
(Intel)
24/03/2009, 12:00
Today’s processor designs face some significant challenges in the coming years. Compute demands are projected to continue to grow at a compound aggregate
growth rate of 45% per year, with seemingly no end in sight. Energy as well as property, plant and equipment costs also continue to increase. Processor designers can no longer afford to trade off increasing power for increasing...
Mr
Jose Benito Gonzalez Lopez
(CERN)
24/03/2009, 14:00
While the remote collaboration services at CERN slowly aggregate around the Indico event management software, its new version, the result of a careful maturation process, includes improvements that will set a new reference in its domain. The presentation will focus on the description of the new features of the tool and the user feedback process, which resulted in a new record of usability....
Dr
Georg Weidenspointner
(MPE and MPI-HLL , Munich, Germany)
24/03/2009, 14:00
The production of particle induced X-ray emission (PIXE) resulting from the de-excitation of an ionized atom is an important physical effect that is not yet accurately modelled in Geant4, nor in other general-purpose Monte Carlo systems. Its simulation concerns use cases in various physics domains – from precision evaluation of spatial energy deposit patterns to material analysis, low...
Mr
Andrew Hanushevsky
(SLAC National Accelerator Laboratory)
24/03/2009, 14:00
Scalla (also known as xrootd) is quickly becoming a significant part of LHC data analysis as a stand-alone clustered data server (US Atlas T2 and CERN Analysis Farm), globally clustered data sharing framework (ALICE), and an integral part of PROOF-base analysis (multiple experiments). Until recently, xrootd did not fit well in the LHC Grid infrastructure as a Storage Element (SE) largely...
Dr
Philippe Trautmann
(Sun Microsystems)
24/03/2009, 14:00
Dr
Steven Goldfarb
(University of Michigan)
24/03/2009, 14:20
I report major progress in the field of Collaborative Tools, concerning the organization, design and deployment of facilities at CERN, in support of the LHC. This presentation discusses important steps made during the past year and a half, including the identification of resources for equipment and manpower, the development of a competent team of experts, tightening of the user-feedback loop,...
Dr
Sergey Panitkin
(Department of Physics - Brookhaven National Laboratory (BNL))
24/03/2009, 14:20
The Parallel ROOT Facility - PROOF - is a distributed analysis system which allows one to exploit the inherent event-level parallelism of high energy physics data.
PROOF can be configured to work with centralized storage systems, but it is especially effective together with distributed local storage systems - like Xrootd, when data are distributed over computing nodes.
It works efficiently on...
Oliver Oberst
(Karlsruhe Institute of Technology)
24/03/2009, 14:20
Today's HEP experiments use only a limited number of operating system flavours. Their software might only be validated on a single OS platform. Resource providers might have other operating systems of choice for the installation of the batch infrastructure. This is especially the case if a cluster is shared with other communities, or with communities that have stricter security requirements....
Sunanda Banerjee
(Fermilab)
24/03/2009, 14:20
Geant4 provides a number of physics models at intermediate energies (corresponding to incident momenta in the range 1-20 GeV/c). Recently, these models have been validated with existing data from a number of experiments: (a) inclusive proton and neutron production with a variety of beams (pi^-, pi^+, p) at different energies between 1 and 9 GeV/c on a number of nuclear targets (from beryllium...
Andreas Hinzmann
(RWTH Aachen University)
24/03/2009, 14:20
The job configuration system of the CMS experiment is based on the Python programming language. Software modules and their order of execution are both represented by Python objects. In order to investigate and verify configuration parameters and dependencies naturally appearing in modular software, CMS employs a graphical tool. This tool visualizes the configuration objects, their...
Lassi Tuura
(Northeastern University)
24/03/2009, 14:40
In the last two years the CMS experiment has commissioned a full end
to end data quality monitoring system in tandem with progress in the
detector commissioning. We present the data quality monitoring and
certification systems in place, from online data taking to delivering
certified data sets for physics analyses, release validation and offline
re-reconstruction activities at Tier-1s. We...
Prof.
Dean Nelson
(SUN)
24/03/2009, 14:40
Albert Puig Navarro
(Universidad de Barcelona),
Markus Frank
(CERN)
24/03/2009, 14:40
The LHCb experiment at the LHC accelerator at CERN will collide particle bunches at 40 MHz. After a first level of hardware trigger with output at 1 MHz, the physically interesting collisions will be selected by running dedicated trigger algorithms in the High Level Trigger (HLT) computing farm. It consists of up to roughly 16000 CPU cores and 44TB of storage space. Although limited by...
Prof.
Leo Piilonen
(Virginia Tech)
24/03/2009, 14:40
We report on the use of GEANT4E, the track extrapolation feature written
by Pedro Arce, in the analysis of data from the Belle experiment: (1) to project
charged tracks from the tracking devices outward to the particle identification
devices, thereby assisting in the identification of the particle type of each
charged track, and (2) to project charged tracks from the tracking...
Ricardo SALGUEIRO DOMINGUES DA SILVA
(CERN)
24/03/2009, 14:40
The ramping up of available resources for LHC data analysis
at the different sites continues. Most sites are currently
running on SL(C)4. However, this operating system is already
rather old, and it is becoming difficult to get the required
hardware drivers, to get the best out of recent hardware.
A possible way out is the migration to SL(C)5 based systems
where possible, in...
Dr
Andreas Salzburger
(DESY & CERN)
24/03/2009, 15:00
With the completion of installation of the ATLAS detector in 2008 and the first days of data taking, the ATLAS collaboration is increasingly focusing on the future upgrade of the ATLAS tracking devices. Radiation damage will make it necessary to replace
the innermost silicon layer (b-layer) after about five years of operation. In addition, with future luminosity upgrades of the LHC machine...
Dr
Alessandro Di Mattia
(MSU)
24/03/2009, 15:00
ATLAS is one of the two general-purpose detectors at the Large Hadron Collider (LHC). The trigger system is responsible for making the online selection of interesting collision events. At the LHC design luminosity of 10^34 cm-2s-1 it will need to achieve a rejection factor of the order of 10^-7 against random proton-proton interactions, while selecting with high efficiency events that are...
Dr
Stuart Paterson
(CERN)
24/03/2009, 15:00
DIRAC, the LHCb community Grid solution, uses generic pilot jobs to obtain a virtual pool of resources for the VO community. In this way agents can request the highest priority user or production jobs from a central task queue and VO policies can be applied with full knowledge of current and previous activities. In this paper the performance of the DIRAC WMS will be presented with emphasis...
Andreas Haupt
(DESY),
Yves Kemp
(DESY)
24/03/2009, 15:00
In the framework of a broad collaboration among German particle physicists - the strategic Helmholtz Alliance "Physics at the Terascale" - an analysis facility has been set up at DESY. The facility is intended to provide the best possible analysis infrastructure for researchers of the ATLAS, CMS, LHCb and ILC experiments and also for theory researchers.
In the first part of the contribution, we...
Mr
Andrei Gheata
(CERN/ISS)
24/03/2009, 15:20
The ALICE offline group has developed a set of tools that formalize data access patterns and impose certain rules on how individual data analysis modules have to be structured in order to maximize the data processing efficiency at the scale of the whole collaboration. The ALICE analysis framework was developed and extensively tested on MC reconstructed data during the last 2 years in the ALICE...
Dr
Sebastien Binet
(LBNL)
24/03/2009, 15:20
Computers are no longer getting faster: instead, they are growing more and more
CPUs, each of which is no faster than the previous generation.
This increase in the number of cores evidently calls for more parallelism in
HENP software.
While end-users' stand-alone analysis applications are relatively easy to modify,
LHC experiment frameworks, being mostly written with a single 'thread'...
Dr
Isidro Gonzalez Caballero
(Instituto de Fisica de Cantabria, Grupo de Altas Energias)
24/03/2009, 15:20
In the CMS computing model, about one third of the computing resources are located at Tier-2 sites, which are distributed across the countries in the collaboration. These sites are the primary platform for user analyses; they host datasets that are created at Tier-1 sites, and users from all CMS institutes submit analysis jobs that run on those data through grid interfaces. They are also the...
Dr
Silvia Amerio
(University of Padova & INFN Padova)
24/03/2009, 15:20
The Silicon Vertex Trigger (SVT) is a processor developed at the CDF experiment to perform fast and precise online track reconstruction. SVT is made of two pipelined processors: the Associative Memory, finding low-precision tracks, and the Track Fitter, refining the track quality with high-precision fits. We will describe the architecture and performance of a next-generation track fitter,...
Dr
Fabio Cossutti
(INFN Trieste)
24/03/2009, 15:20
The CMS simulation has been operational within the new CMS software
framework for more than 3 years. While the description of the
detector, in particular in the forward region, is being completed,
during the last year the emphasis of the work has been put on fine
tuning of the physics output. The existing test beam data for the
different components of the calorimetric system have been...
Olivier Martin
(Ictconsulting)
24/03/2009, 15:20
Despite many coordinated efforts to promote the use of IPv6, the migration from IPv4 is far from being up to the expectations of most Internet experts. However, time is running fast and unallocated IPv4 address space should run out within the next 3 years or so. The speaker will attempt to explain the reasons behind the lack of enthusiasm for IPv6, in particular, the lack of suitable migration...
Mr
Alexander Zaytsev
(Budker Institute of Nuclear Physics (BINP))
24/03/2009, 15:40
Hierarchy Software Development Framework provides a lightweight tool for building portable modular applications for performing automated data analysis tasks in a batch mode.
Design and development activities devoted to the project began in March 2005, and from the very beginning the framework targeted the case of building experimental data processing applications for the CMD-3...
Vasco Chibante Barroso
(CERN)
24/03/2009, 15:40
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). Some specific calibration tasks are performed regularly for each of the 18 ALICE sub-detectors in order to achieve most accurate physics measurements. These procedures involve events analysis in a wide...
Dr
Graeme Andrew Stewart
(University of Glasgow), Dr
Michael John Kenyon
(University of Glasgow), Dr
Samuel Skipsey
(University of Glasgow)
24/03/2009, 15:40
ScotGrid is a distributed Tier-2 centre in the UK with sites in
Durham, Edinburgh and Glasgow. ScotGrid has undergone a huge expansion
in hardware in anticipation of the LHC and now provides more than
4MSI2K and 500TB to the LHC VOs.
Scaling up to this level of provision has brought many challenges to
the Tier-2 and we show in this paper how we have adopted new methods
of organising...
Gabriele Garzoglio
(FERMI NATIONAL ACCELERATOR LABORATORY)
24/03/2009, 15:40
Grids enable uniform access to resources by implementing standard interfaces to resource gateways. Gateways control access privileges to resources using the user's identity and personal attributes, which are available through Grid credentials. Typically, gateways implement access control by mapping Grid credentials to local privileges.
In the Open Science Grid (OSG), privileges are granted on...
Dr
Giacinto Donvito
(INFN-Bari)
24/03/2009, 16:30
The Job Submitting Tool provides a solution for the submission of a large number of jobs to the grid in an unattended way. Indeed, the tool is able to manage the grid submission, bookkeeping and resubmission of failed jobs.
It also allows real-time monitoring of the status of each job using the same framework.
The key elements of this tool are:
A Relational Db that contains all the...
Marco Clemencic
(European Organization for Nuclear Research (CERN))
24/03/2009, 16:30
Ten years after its first version, the Gaudi software framework has undergone many changes and improvements, with a consequent increase of the code base. Those changes were almost always introduced preserving backward compatibility and reducing as much as possible changes in the framework itself; obsolete code has been removed only rarely. After a release of Gaudi targeted to the...
Dr
Sergey Panitkin
(Department of Physics - Brookhaven National Laboratory (BNL))
24/03/2009, 16:30
Solid state drives (SSDs) are a very promising storage technology for High Energy Physics parallel analysis farms.
Their combination of low random access time and relatively high read speed is very well suited to situations where multiple jobs concurrently access data located on the same drive. They also have lower energy consumption and higher vibration tolerance than hard disk drives (HDDs), which...
Andrea Ceccanti
(INFN CNAF, Bologna, Italy),
Tanya Levshina
(FERMI NATIONAL ACCELERATOR LABORATORY)
24/03/2009, 16:30
The Grid community uses two well-established registration services, which allow users to be authenticated under the auspices of Virtual Organizations (VOs).
The Virtual Organization Membership Service (VOMS), developed in the context of the Enabling Grid for E-sciencE (EGEE) project, is an Attribute Authority service that issues attributes expressing membership information of a subject...
Dr
Mark Sutton
(University of Sheffield)
24/03/2009, 16:50
The ATLAS experiment is one of two general-purpose experiments at the Large Hadron Collider (LHC). It has a three-level trigger, designed to reduce the 40 MHz bunch-crossing rate to about 200 Hz for recording. Online track
reconstruction, an essential ingredient to achieve this design goal, is performed at the software-based second (L2) and third levels (Event Filter, EF), running on farms of...
Mr
Frank van Lingen
(California Institute of Technology), Mr
Stuart Wakefield
(Imperial College)
24/03/2009, 16:50
Three different projects within CMS produce various workflow related data products: CRAB (analysis centric), ProdAgent (simulation production centric), T0 (real time sorting and reconstruction of real events). Although their data products and workflows are different, they all deal with job life cycle management (creation, submission, tracking, and cleanup of jobs). WMCore provides a set of...
Mr
Rune Sjoen
(Bergen University College)
24/03/2009, 16:50
The ATLAS data network interconnects up to 2000 processors using up to
200 edge switches and five multi-blade chassis devices. Classical,
SNMP-based, network monitoring provides statistics on aggregate traffic,
but more is needed to quantify individual traffic flows.
sFlow is an industry standard which enables an Ethernet switch to take a
sample of the packets...
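As a rough illustration of how sampled packets scale to traffic estimates, the following is a minimal sketch (not the monitoring system described above; the decoded-sample format and the 1-in-N sampling rate are assumed for the example):

# Minimal sketch (assumptions only): scale sampled packet sizes by the
# 1-in-N sampling rate to estimate per-flow traffic volumes.
from collections import defaultdict

def estimate_flow_bytes(samples, sampling_rate):
    """samples: iterable of (src, dst, packet_bytes) decoded from sFlow datagrams."""
    totals = defaultdict(int)
    for src, dst, nbytes in samples:
        totals[(src, dst)] += nbytes * sampling_rate
    return dict(totals)

# Example with 1-in-512 packet sampling
samples = [("10.0.0.1", "10.0.0.2", 1500),
           ("10.0.0.1", "10.0.0.2", 1500),
           ("10.0.0.3", "10.0.0.2", 64)]
print(estimate_flow_bytes(samples, 512))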
Eduardo Rodrigues Figueiredo
(University of Glasgow),
Manuel Schiller
(Universität Heidelberg)
24/03/2009, 16:50
The LHCb Tracking system consists of four major sub-detectors
and a dedicated magnet. A sequence of algorithms has been
developed to optimally exploit the capabilities of all tracking
sub-detectors. Different configurations of the same algorithms
are used to reconstruct tracks at various stages of the trigger
system, in the standard offline pattern recognition and under
initial conditions...
Andrea Ceccanti
(CNAF - INFN),
John White White
(Helsinki Institute of Physics HIP)
24/03/2009, 16:50
The new authorization service of the gLite middleware stack is presented.
In the EGEE-II project, the overall authorization study and review gave
recommendations that the authorization should be rationalized throughout
the middleware stack. As per the accepted recommendations, the new
authorization service is designed to focus on EGEE gLite computational
components: WMS, CREAM, and...
Belmiro Pinto
(Universidade de Lisboa)
24/03/2009, 17:10
The ATLAS experiment uses a complex trigger strategy in order to achieve the necessary Event Filter output rate, making it possible to optimize the storage and processing needs of these data. These needs are described in the ATLAS Computing Model, which embraces Grid concepts. The output coming from the Event Filter will consist of three main streams: a primary stream, the express stream and...
Peter Onyisi
(University of Chicago)
24/03/2009, 17:10
The ATLAS experiment at the Large Hadron Collider reads out 100 million
electronic channels at a rate of 200 Hz.
Before the data are shipped to storage and analysis centres across the
world, they have to be checked to be free from irregularities which
render them scientifically useless. Data quality offline monitoring
provides prompt feedback from full first-pass event reconstruction...
Maria Assunta Borgia
(Unknown)
24/03/2009, 17:10
The CMS Silicon Strip Tracker (SST), consisting of more than 10 million channels, is organized in about 16,000 detector modules and is the largest silicon strip tracker ever built for high energy physics experiments. The Data Quality Monitoring system for the Tracker has been developed within the CMS Software framework. More than 100,000 monitorable quantities need to be managed by the...
Dr
Jason Smith
(Brookhaven National Laboratory), Ms
Mizuki Karasawa
(Brookhaven National Laboratory)
24/03/2009, 17:30
The RACF provides computing support to a broad spectrum of scientific
programs at Brookhaven. The continuing growth of the facility, the diverse
needs of the scientific programs and the increasingly prominent role of
distributed computing require the RACF to move from a system-based to a
service-based SLA with our user communities.
A service-based SLA allows the RACF to coordinate more...
Dr
Dantong Yu
(BROOKHAVEN NATIONAL LABORATORY)
24/03/2009, 17:30
PanDA, the ATLAS Production and Distributed Analysis framework, has been identified as one of the most important services provided by the ATLAS Tier 1 facility at Brookhaven National Laboratory (BNL), and enhanced to what is now a 24x7x365 production system. During this period, PanDA has remained under active development for additional functionality and bug fixes, and processing requirements have...
Dr
Simone Pagan Griso
(University and INFN Padova)
24/03/2009, 17:30
Large international collaborations that use de-centralized computing
models are becoming the norm rather than the exception in High Energy Physics.
A good computing model for such large and geographically spread collaborations has to
deal with the distribution of the experiment-specific software around the world.
When the CDF experiment developed its software infrastructure,
most computing was done on...
Martin Woudstra
(University of Massachusetts)
24/03/2009, 17:30
The Muon Spectrometer for the ATLAS experiment at the LHC is
designed to identify muons with transverse momentum greater
than 3 GeV/c and measure muon momenta with high precision up
to the highest momenta expected at the LHC. The 50-micron sagitta
resolution translates into a transverse momentum resolution of 10%
for muon transverse momenta of 1 TeV/c.
The design resolution requires an...
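For illustration only (the abstract quotes the 50-micron and 10% figures; the effective bending field B ~ 0.5 T and lever arm L ~ 5 m are assumed here), the standard sagitta relation links these numbers, with B in tesla, L in metres and p_T in GeV/c:

\[
s \simeq \frac{0.3\,B\,L^2}{8\,p_T}
  \approx \frac{0.3 \times 0.5 \times 5^2}{8 \times 1000}\ \mathrm{m}
  \approx 0.5\ \mathrm{mm},
\qquad
\frac{\delta p_T}{p_T} \simeq \frac{\delta s}{s}
  \approx \frac{50\ \mu\mathrm{m}}{500\ \mu\mathrm{m}} \approx 10\%.
\]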
Zachary Miller
(University of Wisconsin)
24/03/2009, 17:30
Many secure communication libraries used by distributed systems, such as SSL,
TLS, and Kerberos, fail to make a clear distinction between the authentication,
session, and communication layers. In this paper we introduce CEDAR, the secure
communication library used by the Condor High Throughput Computing software,
and present the advantages to a distributed computing system resulting...
Mr
Pablo Martinez Ruiz Del Arbol
(Instituto de Física de Cantabria)
24/03/2009, 17:30
The alignment of the Muon System of CMS is performed using different techniques: photogrammetry measurements, optical alignment and alignment with tracks. For track-based alignment, several methods are employed, ranging from a hit-impact point (HIP) algorithm and a procedure exploiting chamber overlaps to a global fit method based on the Millepede approach. For start-up alignment, cosmic muon...
Dr
Arno Straessner
(IKTP, TU Dresden), Dr
Matthias Schott
(CERN)
24/03/2009, 17:50
The determination of the ATLAS detector performance in data is
essential for all physics analyses and even more important to
understand the detector during the first data taking period. Hence a
common framework for the performance determination provides a useful
and important tool for various applications.
We report on the implementation of a performance tool with common
software...
Johannes Elmsheuser
(Ludwig-Maximilians-Universität München)
24/03/2009, 17:50
The distributed data analysis using Grid resources is one of the
fundamental applications in high energy physics to be addressed
and realized before the start of LHC data taking. The demands on
resource management are very high. In every experiment up to a
thousand physicists will be submitting analysis jobs into the Grid.
Appropriate user interfaces and helper applications have to be...
Jean-Christophe Garnier
(Conseil Europeen Recherche Nucl. (CERN)-Unknown-Unknown)
24/03/2009, 17:50
The High Level Trigger and Data Acquisition system selects about 2 kHz of events out of the 40 MHz of beam crossings. The selected events are consolidated into files on an onsite storage system and then sent to permanent storage for subsequent analysis on the Grid. For local and full-chain tests, a method is needed to exercise the data flow through the High Level Trigger when no actual data are available....
Mr
Christopher Hollowell
(Brookhaven National Laboratory), Mr
Robert Petkus
(Brookhaven National Laboratory)
24/03/2009, 17:50
The RHIC/ATLAS Computing Facility (RACF) processor farm at Brookhaven
National Laboratory currently provides over 7200 cpu cores (over 13 million
SpecInt2000 of processing power) for computation. Our ability to supply this
level of computational capacity in a data-center limited by physical space,
cooling and electrical power is primarily due to the availability of increasingly
dense...
Dr
Josva Kleist
(Nordic Data Grid Facility)
24/03/2009, 18:10
The Tier-1 facility operated by the Nordic DataGrid Facility (NDGF)
differs significantly from other Tier-1s in several aspects: it is not
located at one or a few sites but is instead distributed throughout the
Nordic countries, and it is not under the governance of a single organisation
but is instead built from resources under the control of
a number of different national organisations.
Being...
David Gonzalez Maline
(CERN)
24/03/2009, 18:10
ROOT, as a scientific data analysis framework, provides extensive capabilities
via graphical user interfaces (GUIs) for performing interactive analysis and
visualizing data objects such as histograms and graphs. A new interface for fitting
has been developed for performing, exploring and comparing fits on data point
sets such as histograms, multi-dimensional graphs or trees.
With this new...
Bjoern Hallvard Samset
(Fysisk institutt - University of Oslo)
24/03/2009, 18:10
A significant amount of the computing resources available to the ATLAS experiment at the LHC are connected via the ARC grid middleware. ATLAS ARC-enabled resources, which consist of both major computing centers at Tier-1 level and lesser, local clusters at Tier-2 and 3 level, have shown excellent performance running heavy Monte Carlo (MC) production for the experiment. However, with the...
Mr
Bjorn (on behalf of the ATLAS Tile
Calorimeter system) Nordkvist
(Stockholm University)
24/03/2009, 18:10
The ATLAS Tile Calorimeter is ready for data taking during the
proton-proton collisions provided by the Large Hadron Collider (LHC). The
Tile Calorimeter is a sampling calorimeter with iron absorbers and
scintillators as active medium. The scintillators are read out by
wavelength-shifting fibers and PMTs. The LHC provides collisions every 25 ns,
putting very stringent requirements on the...
Mine Altunay
(FERMI NATIONAL ACCELERATOR LABORATORY)
25/03/2009, 09:30
Grid Security and Identity Management
Prof.
Markus Elsing
(CERN)
25/03/2009, 11:30
After more than a decade of software development the LHC experiments have
successfully released their offline software for the commissioning with
data. Sophisticated detector description models are necessary to match the
physics requirements on the simulation, while fast geometries are in use to
speed up the high level trigger and offline track reconstruction. The
experiments explore...
Dr
Dantong Yu
(BROOKHAVEN NATIONAL LABORATORY)
26/03/2009, 08:00
The TeraPaths, Lambda Station, and Phoebus projects were funded by the Department Of Energy's (DOE) network research program to support efficient, predictable, prioritized petascale data replication in modern high-speed networks, directly address the "last-mile" problem between local computing resources and WAN paths, and provide interfaces to modern, high performance hybrid networks with low...
Dr
Iosif Legrand
(CALTECH)
26/03/2009, 08:00
To satisfy the demands of data intensive applications it is necessary to move to far more synergetic relationships between data transfer applications and the network infrastructure. The main objective of the High Performance Data Transfer Service we present is to effectively use the available network infrastructure capacity and to coordinate, manage and control large data transfer tasks...
Dr
Wenji Wu
(Fermi National Accelerator Laboratory)
26/03/2009, 08:00
Distributed petascale computing involves analysis of massive data sets in a large-scale cluster computing environment. Its major concern is to efficiently and rapidly move the data sets to the computation and send results back to users or storage. However, the needed efficiency of data movement has hardly been achieved in practice. Present cluster operating systems usually are general-purpose...
Dr
Gabriele Compostella
(CNAF INFN), Dr
Manoj Kumar Jha
(INFN Bologna)
26/03/2009, 08:00
Being a large international collaboration established well before the
full development of the Grid as the main computing tool for High
Energy Physics, CDF has recently changed and improved its computing model, decentralizing some parts of it in order to be able to exploit the rising number of distributed resources available nowadays.
Despite those efforts, while the large majority of CDF...
Stefano Bagnasco
(INFN Torino)
26/03/2009, 08:00
Current Grid deployments for LHC computing (namely the WLCG infrastructure) do not allow efficient parallel interactive processing of data. In order to allow physicists to interactively access subsets of data (e.g. for algorithm tuning and debugging before running over a full dataset) parallel Analysis Facilities based on PROOF have been deployed by the ALICE experiment at CERN and elsewhere....
Mr
Roland Moser
(CERN and Technical University of Vienna)
26/03/2009, 08:00
The CMS Data Acquisition System consists of O(1000) interdependent services. A monitoring system providing exception and application-specific data is essential for the operation of this cluster.
Due to the number of involved services the amount of monitoring data is higher than a human operator can handle efficiently. Thus moving the expert-knowledge for error analysis from the operator to...
Mr
Mario Lassnig
(CERN & University of Innsbruck)
26/03/2009, 08:00
Unrestricted user behaviour is becoming one of the most critical properties in data intensive supercomputing. While policies can help to maintain a usable environment in clearly directed cases, it is important to know how users interact with the system so that it can be adapted dynamically, automatically and timely.
We present a statistical and generative model that can replicate and simulate...
Dr
Andrew Stephen McGough
(Imperial College London)
26/03/2009, 08:00
The Grid as an environment for large-scale job execution is now moving beyond the prototyping phase to real deployments on national and international scales, providing real computational cycles to application scientists. As the Grid moves into production, characteristics of how users are exploiting the resources and how the resources are coping with production load are essential in...
Dr
Wainer Vandelli
(Conseil Europeen Recherche Nucl. (CERN))
26/03/2009, 08:00
The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to the mass storage. Several optimized and multi-threaded applications fulfill this purpose operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to efficiently...
Raquel Pezoa Rivera
(Univ. Tecnica Federico Santa Maria (UTFSM))
26/03/2009, 08:00
The ATLAS Distributed Computing system provides a set of tools and libraries enabling data movement, processing and analysis in a grid environment. While it was reaching a state of maturity high enough for real data taking, it became clear that one component was missing: one exposing consistent site topology, service and resource information from all three distinct ATLAS grids (EGEE,...
Denis Oliveira Damazio
(Brookhaven National Laboratory)
26/03/2009, 08:00
The ATLAS detector is undergoing an intense commissioning effort with
cosmic rays, preparing for the first LHC collisions next spring. Combined
runs with all of the ATLAS subsystems are being taken in order to evaluate
the detector performance. This is also a unique opportunity for the trigger
system to be studied with different detector operation modes, such as
different event rates and...
Dr
Luca Fiorini
(IFAE Barcelona)
26/03/2009, 08:00
TileCal is the barrel hadronic calorimeter of the ATLAS experiment presently in an advanced state of commissioning with cosmic and single beam data at the LHC accelerator.
The complexity of the experiment, the number of electronics channels and the high rate of acquired events require a systematic strategy for preparing the system for data taking.
This is done through a precise...
Mr
Costin Grigoras
(CERN)
26/03/2009, 08:00
A complex software environment such as the ALICE Computing Grid infrastructure requires permanent control and management for the large set of services involved. Automating control procedures reduces the human interaction with the various components of the system and yields better availability of the overall system. In this paper we will present how we used the MonALISA framework to gather,...
Hongyu ZHANG
(Experimental Physics Center, Experimental Physics Center, Chinese Academy of Sciences, Beijing, China)
26/03/2009, 08:00
BEPCII is designed with a peak luminosity of 10^33 cm^-2 s^-1. After the Level 1 trigger, the event rate is estimated to be around 4000 Hz at the J/ψ peak. A pipelined front-end electronics system has been designed and developed, and the BESIII DAQ system has been built to satisfy the requirements of event readout and processing at such a high event rate.
The BESIII DAQ system consists of about 100 high...
Riccardo Zappi
(INFN-CNAF)
26/03/2009, 08:00
In the storage model adopted by WLCG, the quality of service for a storage capacity provided by an SRM-based service is described by the concept of Storage Class. In this context, two parameters are relevant: the Retention Policy and the Access Latency. With the advent of cloud-based resources, virtualized storage capabilities are available like the Amazon Simple Storage Service (Amazon S3)....
Dr
Daniele Bonacorsi
(CMS experiment / INFN-CNAF, Bologna, Italy)
26/03/2009, 08:00
During February and May 2008, CMS participated in the Combined Computing Readiness Challenge (CCRC'08) together with all the other LHC experiments. The purpose of this world-wide exercise was to check the readiness of the computing infrastructure for LHC data taking. Another set of major CMS tests called the Computing, Software and Analysis challenge (CSA'08) - as well as CMS cosmic runs - were also...
Dr
Timm Steinbeck
(Institute of Physics)
26/03/2009, 08:00
For the ALICE heavy-ion experiment a large cluster will be used to
perform the last triggering stages in the High Level Trigger. For the
first year of operation the cluster consists of about 100 SMP nodes
with 4 or 8 CPU cores each, to be increased to more than 1000 nodes
for the later years of operation. During the commissioning phases of
the detector, the preparations for first LHC...
Dr
Volker Friese
(GSI Darmstadt)
26/03/2009, 08:00
The Compressed Baryonic Matter experiment (CBM) is one of the core experiments to be operated at the future FAIR accelerator complex in Darmstadt, Germany, from 2014 on. It will investigate heavy-ion collisions at moderate beam energies but extreme interaction rates, which give access to extremely rare probes such as open charm or charmonium decays near the production threshold.
The high...
Daniel Charles Bradley
(High Energy Physics)
26/03/2009, 08:00
A number of recent enhancements to the Condor batch system have been stimulated by the challenges of LHC computing. The result is a more robust, scalable, and flexible computing platform. One product of this effort is the Condor JobRouter, which serves as a high-throughput scheduler for feeding multiple (e.g. grid) queues from a single input job queue. We describe its principles and how it...
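The routing idea can be sketched as follows (an illustrative toy in Python, not the JobRouter implementation; the route names and the job representation are invented for the example):

# Toy sketch of feeding several grid queues from a single input queue by
# routing each job to the destination with the fewest jobs already in flight.
from collections import Counter

routes = ["grid_site_A", "grid_site_B", "grid_site_C"]
in_flight = Counter()                 # jobs currently routed per destination

def route_job(job):
    """Pick the least-loaded route and record the job as routed there."""
    target = min(routes, key=lambda r: in_flight[r])
    in_flight[target] += 1
    return {"original": job, "routed_to": target}

input_queue = [{"id": i} for i in range(5)]
for job in input_queue:
    print(route_job(job))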
Vardan Gyurjyan
(JEFFERSON LAB)
26/03/2009, 08:00
The ever growing heterogeneity of physics experiment control systems presents a real challenge to uniformly describe control system components and their operational details. Control Oriented Ontology Language (COOL) is an experiment control meta-data modeling language that provides a generic means for concise and uniform representation of physics experiment control processes and components,...
Xavier Mol
(Forschungszentrum Karlsruhe)
26/03/2009, 08:00
D-Grid is the German initiative for building a national computing grid. When its customers want to work within the German grid, they need dedicated software, called ‘middleware’. As D-Grid site administrators are free to choose their middleware according to the needs of their users, the project ‘DGI (D-Grid Integration) reference installation’ was launched. Its purpose is to assist the site...
Mr
Antonio Delgado Peris
(CIEMAT)
26/03/2009, 08:00
Grid infrastructures constitute nowadays the core of the computing facilities of the biggest LHC experiments. These experiments produce and manage petabytes of data per year and run thousands of computing jobs every day to process that data. It is the duty of metaschedulers to allocate the tasks to the most appropriate resources at the proper time.
Our work reviews the policies that have...
Peter Onyisi
(University of Chicago)
26/03/2009, 08:00
At the ATLAS experiment, the Detector Control System (DCS) is used to
oversee detector conditions and supervise the running of equipment.
It is essential that information from the DCS about the status of
individual sub-detectors be extracted and taken into account when
determining the quality of data taken and its suitability for different
analyses.
DCS information is written online to...
Dr
Hiroyuki Matsunaga
(ICEPP, University of Tokyo)
26/03/2009, 08:00
A Tier-2 regional center is running at the University of Tokyo in Japan.
This center receives a large amount of data of the ATLAS experiment
from the Tier-1 center in France. Although the link between the two centers
has 10 Gbps bandwidth, it is not a dedicated link but is shared with
other traffic, and the round trip time is 280 ms. It is not easy
to exploit the available bandwidth...
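A back-of-envelope bandwidth-delay product, computed from the figures quoted above, shows why a single TCP stream struggles here (illustrative arithmetic only):

\[
\mathrm{BDP} = 10\ \mathrm{Gb/s} \times 0.28\ \mathrm{s} = 2.8\ \mathrm{Gb} \approx 350\ \mathrm{MB},
\]

i.e. a single stream would need a TCP window of order 350 MB, or many parallel streams would be needed, to fill the link.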
Mr
Vladlen Timciuc
(California Institute of Technology)
26/03/2009, 08:00
The CMS detector at the LHC is equipped with a high precision electromagnetic crystal calorimeter (ECAL). The crystals experience a transparency change when exposed to radiation during LHC operation, which recovers in the absence of irradiation on a time scale of hours. This change of the crystal response is monitored with a laser system which performs a transparency measurement of each crystal of...
Dr
Silke Halstenberg
(Karlsruhe Institute of Technology)
26/03/2009, 08:00
The dCache installation at GridKa, the German Tier-1 center, is ready for LHC data taking. After years of tuning and dry runs, several software and operational bottlenecks have been identified.
This contribution describes several procedures to improve stability and reliability of the Tier-1 storage setup. These range from redundant hardware and disaster planning over fine grained monitoring...
Mr
Tigran Mkrtchyan Mkrtchyan
(Deutsches Elektronen-Synchrotron DESY)
26/03/2009, 08:00
Starting in spring 2009, all WLCG data management services have to be ready and prepared to move terabytes of data from CERN to the Tier 1 centers worldwide, and from the Tier 1s to their corresponding Tier 2s. Reliable file transfer services, like FTS, on top of the SRM v2.2 protocol are playing a major role in this game. Nevertheless, moving large chunks of data is only part of the challenge....
Dr
James Letts
(Department of Physics-Univ. of California at San Diego (UCSD))
26/03/2009, 08:00
The CMS experiment at CERN is preparing for LHC data taking in several computing preparation activities. In early 2007 a traffic load generator infrastructure for distributed data transfer tests was designed and deployed to equip the WLCG Tiers which support the CMS Virtual Organization with a means for debugging, load-testing and commissioning data transfer routes among CMS Computing Centres....
Dr
Sergio Andreozzi
(INFN-CNAF)
26/03/2009, 08:00
The GLUE 2.0 specification is an upcoming OGF specification for standard-based Grid resource characterization to support functionalities such as discovery, selection and monitoring.
An XML Schema realization of GLUE 2.0 is available; nevertheless, Grids still lack a standard information service interface. Therefore, there is no uniformly agreed solution for exposing resource descriptions.
On...
Dr
Vincenzo Spinoso
(INFN, Bari)
26/03/2009, 08:00
Together with the start of LHC, high-energy physics researchers will start massive usage of LHC Tier2s. It is essential to supply physics user groups with a simple and intuitive “user-level” summary of their associated T2 services’ status, showing for example available, busy and unavailable resources. At the same time, site administrators need “technical level” monitoring, namely a view of...
Gabriel Caillat
(LAL, Univ. Paris Sud, IN2P3/CNRS)
26/03/2009, 08:00
Desktop grids, such as XtremWeb and BOINC, and service grids, such as EGEE, are two different approaches for science communities to gather computing power from a large number of computing resources. Nevertheless, little work has been done to combine these two Grid technologies in order to establish a seamless and vast grid resource pool. In this paper we present the EGEE service grid, the...
Mr
Michal ZEROLA
(Nuclear Physics Inst., Academy of Sciences)
26/03/2009, 08:00
For the past decade, HENP experiments have been heading towards a distributed computing model in an effort to concurrently process tasks over enormous data sets that have been increasing in size as a function of time. In order to optimize all available resources (geographically spread) and minimize the processing time, it is also necessary to address the question of efficient data transfers and...
Dr
Simone Campana
(CERN/IT/GS)
26/03/2009, 08:00
The ATLAS Experiment at CERN developed an automated system for data distribution of simulated and detector data. This system, which partially consists of various ATLAS-specific services, strongly relies on the WLCG service infrastructure, at the level of middleware components, service deployment and operations. Because of the complexity of the system and its highly distributed nature, a...
Dr
Chadwick Keith
(Fermilab)
26/03/2009, 08:00
Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The architecture of FermiGrid facilitates seamless interoperation of the multiple heterogeneous Fermilab resources with the resources of the other...
Dr
Armin Scheurer
(Karlsruhe Institute of Technology)
26/03/2009, 08:00
The CMS computing model anticipates various hierarchically linked tier centres to counter the challenges provided by the enormous amounts of data which will be collected by the CMS detector at the Large Hadron Collider, LHC, at CERN. During the past years, various computing exercises were performed to test the readiness of the computing infrastructure, the Grid middleware and the experiment's...
Mr
Philippe Canal
(Fermilab)
26/03/2009, 08:00
The Open Science Grid's usage accounting solution is a system known as "Gratia". Now that it has been deployed successfully, the Open Science Grid's next accounting challenge is to correctly interpret and make the best possible use of the information collected. One such issue is: "Did we use, and/or get credit for, the resource we think we used?" Another example is the problem of ensuring that...
Mr
David Collados Polidura
(CERN)
26/03/2009, 08:00
The Worldwide LHC Computing Grid (WLCG) is based on a four-tiered model that comprises collaborating resources from different grid infrastructures such as EGEE and OSG. While grid middleware provides core services on a variety of platforms, monitoring tools like Gridview, SAM, Dashboards and GStat are being used for monitoring, visualization and evaluation of the WLCG infrastructure.
The...
Gyoergy Vesztergombi
(Res. Inst. Particle & Nucl. Phys. - Hungarian Academy of Science)
26/03/2009, 08:00
An unusually high intensity beam (10^11 protons/sec) is planned to be extracted to fixed targets at the FAIR accelerator at energies up to 90 GeV. Using this beam, the FAIR-CBM experiment provides a unique high luminosity facility to measure high pT phenomena with unprecedented sensitivity, exceeding by orders of magnitude that of previous experiments.
With a 1% target, the expected minimum bias event...
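Reading the 1% target as the interaction probability per beam proton, the implied interaction rate is (illustrative arithmetic only, not a figure from the abstract):

\[
R_{\mathrm{int}} \approx 10^{11}\ \mathrm{s^{-1}} \times 0.01 = 10^{9}\ \mathrm{interactions/s}.
\]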
Dr
Christopher Jung
(Forschungszentrum Karlsruhe)
26/03/2009, 08:00
Most Tier-1 centers of the LHC Computing Grid are using dCache as their storage system. dCache uses a cost model incorporating CPU and space costs for the distribution of data on its disk pools.
Storage resources at Tier-1 centers are usually upgraded once or twice a year according to given milestones. One of the effects of this procedure is the accumulation of heterogeneous hardware resources....
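A cost model of this kind can be sketched as follows (a toy example with assumed weights and formula, not dCache's actual implementation):

# Toy pool-selection cost: fuller and busier pools cost more; data is
# written to the cheapest pool. Weights and formula are assumptions.
def pool_cost(space_used, space_total, active_movers, max_movers,
              space_weight=1.0, cpu_weight=1.0):
    space_cost = space_used / space_total
    cpu_cost = active_movers / max_movers
    return space_weight * space_cost + cpu_weight * cpu_cost

pools = {"pool_old": pool_cost(9e12, 10e12, 40, 100),
         "pool_new": pool_cost(2e13, 10e13, 10, 200)}
print(min(pools, key=pools.get))     # -> the cheapest pool receives the data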
Timur Perelmutov
(FERMI NATIONAL ACCELERATOR LABORATORY)
26/03/2009, 08:00
The dCache disk caching file system has been chosen by a majority of LHC Experiments' Tier 1 centers for their data storage needs. It is also deployed at many Tier 2 centers. In preparation for the LHC startup, very large installations of dCache - up to 3 Petabytes of disk - have already been deployed, and the systems have operated at transfer rates exceeding 2000 MB/s over the WAN. As the LHC...
Ms
Giulia Taurelli
(CERN)
26/03/2009, 08:00
HSM systems such as CERN’s Advanced STORage manager (CASTOR) [1] are responsible for storing petabytes of data, which are first cached on disk and then persistently stored on tape media.
The contents of these tapes are regularly repacked from older, lower-density media to new-generation, higher-density media in order to free up physical space and ensure long term data integrity and...
Mr
Laurence Field
(CERN)
26/03/2009, 08:00
Author: Laurence Field, Markus Schulz, Felix Ehm, Tim Dyce
Grid Information Systems are mission-critical components in today's production grid infrastructures. They enable users, applications and services to discover which services exist in the infrastructure and to obtain further information about the service structure and state.
As the Grid Information System is pervasive throughout the...
Dr
Tony Wildish
(PRINCETON)
26/03/2009, 08:00
PhEDEx, the CMS data-placement system, uses the FTS service to transfer files. Towards the end of 2007, PhEDEx was beginning to show some serious scaling issues, with excessive numbers of processes on the site VOBOX running PhEDEx, poor efficiency in the use of FTS job slots, high latency for failure retries, and other problems. The core PhEDEx architecture was changed in May 2008 to eliminate...
Dr
Sergey Linev
(GSI Darmstadt)
26/03/2009, 08:00
New experiments at FAIR like CBM require new concepts for data acquisition systems, in which self-triggered electronics with time-stamped readout are used instead of a central trigger. A first prototype of such a system was implemented in the form of a CBM readout controller (ROC) board, which is designed to read time-stamped data from a front-end board equipped with nXYTER chips and transfer that...
Daniel Bradley
(University of Wisconsin)
26/03/2009, 08:00
Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on...
Sergey Kalinin
(Universite Catholique de Louvain)
26/03/2009, 08:00
As the Large Hadron Collider (LHC) at CERN, Geneva, has begun operation in
September, the large scale computing grid LCG (LHC Computing Grid) is meant
to process and store the large amount of data created in simulating,
measuring and analyzing particle physics experimental data. Data acquired
by ATLAS, one of the four big experiments at the LHC, are analyzed using
compute jobs running...
Lev Shamardin
(Scobeltsyn Institute of Nuclear Physics, Moscow State University (SINP MSU))
26/03/2009, 08:00
Grid systems are used for calculations and data processing in various applied
areas such as biomedicine, nanotechnology and materials science, cosmophysics
and high energy physics as well as in a number of industrial and commercial
areas. The traditional method of executing jobs on the grid is to run them directly
on the cluster nodes. This limits the choice of the operational environment...
Somogyi Peter
(Technical University of Budapest)
26/03/2009, 08:00
LHCb is one of the four major experiments under completion at the Large Hadron Collider (LHC). Monitoring the quality of the acquired data is important, because it allows the verification of the detector performance. Anomalies, such as missing values or unexpected distributions can be indicators of a malfunctioning detector, resulting in poor data quality.
Spotting faulty components can be...
Dr
Andrea Chierici
(INFN-CNAF)
26/03/2009, 08:00
Quattor is a system administration toolkit providing a powerful, portable, and modular set of tools for the automated installation, configuration, and management of clusters and farms. It is developed as a community effort and provided as open-source software. Today, quattor is being used to manage at least 10 separate infrastructures spread across Europe. These range from massive single-site...
Mr
Adolfo Vazquez
(Universidad Complutense de Madrid)
26/03/2009, 08:00
The MAGIC telescope, a 17-meter Cherenkov telescope located on La Palma (Canary Islands), is dedicated to the study of the universe in Very High Energy gamma-rays. These particles arrive at the Earth's atmosphere producing atmospheric showers of secondary particles that can be detected on the ground through their Cherenkov radiation. MAGIC relies on a large number of Monte Carlo simulations for the...
Jeremiah Jet Goodson
(Department of Physics - State University of New York (SUNY))
26/03/2009, 08:00
The ATLAS detector at the Large Hadron Collider is expected to collect an unprecedented wealth of new data at a completely new energy scale. In particular its Liquid Argon electromagnetic and hadronic calorimeters will play an essential role in measuring final states with electrons and photons and in contributing to the measurement of jets and missing transverse energy. Efficient monitoring...
Dr
Raja Nandakumar
(Rutherford Appleton Laboratory)
26/03/2009, 08:00
DIRAC, the LHCb community Grid solution, is intended to reliably run large data mining activities. The DIRAC system consists of various services (which wait to be contacted to perform actions) and agents (which carry out periodic activities) to direct jobs as required. An important part of ensuring the reliability of the infrastructure is the monitoring and logging of these DIRAC distributed...
Mr
Daniel Filipe Rocha Da Cunha Rodrigues
(CERN)
26/03/2009, 08:00
The MSG (Messaging System for the Grid) is a set of tools that make a Message Oriented platform available for communication between grid monitoring components. It has been designed specifically to work with the EGEE operational tools and acts as an integration platform to improve the reliability and scalability of the existing operational services. MSG is a core component as WLCG monitoring...
Mr
Andrey Bobyshev
(FERMILAB)
26/03/2009, 08:00
There are a number of active projects to design and develop a data control plane capability that steers traffic onto alternate network paths, instead of the default path provided through standard IP connectivity. Lambda Station, developed by Fermilab and Caltech, is one example of such a solution, and is currently deployed at the US CMS Tier1 facility at Fermilab and various Tier2 sites.
When the...
Vakhtang Tsiskaridze
(Tbilisi State University, Georgia)
26/03/2009, 08:00
At present, at a 100 kHz rate, the Amplitude, Time and Quality Factor (QF) parameters are calculated in the Tile Calorimeter ROD DSPs using the Optimal Filtering reconstruction method. If the QF is good enough, only the Amplitude, Time and QF are stored; otherwise the data quality is considered bad and it is proposed to store the raw data for further studies. Without any compression, bandwidth limitation...
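The optimal-filtering quantities referred to above usually take the following form (a generic formulation; the exact TileCal DSP conventions may differ):

\[
A = \sum_{i=1}^{n} a_i S_i,
\qquad
A\,\tau = \sum_{i=1}^{n} b_i S_i,
\qquad
\mathrm{QF} \propto \sum_{i=1}^{n} \left(S_i - A\,g_i - p\right)^2,
\]

where the S_i are the digitized samples, a_i and b_i are precomputed weights, g_i is the normalized pulse shape and p the pedestal.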
Daniele Cesini
(INFN CNAF)
26/03/2009, 08:00
The Workload Management System is the gLite service supporting the distributed production and analysis activities of various HEP experiments. It is responsible for dispatching computing jobs to remote computing facilities by matching job requirements with the resource status information collected from the Grid information services. Given the distributed and heterogeneous nature of the Grid, the...
Chendong FU
(IHEP, Beijing)
26/03/2009, 08:00
BEPCII is the electron-positron collider with the highest
luminosity in the tau-charm energy region, and BESIII is the corresponding
detector with greatly improved detection capability. For this accelerator and
detector, the event trigger rate is rather high. In order to reduce the background
level and the recording burden on the computers, an online event filtering
algorithm has been established. Such an...
Dr
Greig Cowan
(University of Edinburgh)
26/03/2009, 08:00
The ScotGrid distributed Tier-2 now provides more than 4 MSI2K and 500 TB for LHC computing, which is spread across three sites at Durham, Edinburgh and Glasgow.
Tier-2 sites have a dual role to play in the computing models of the LHC VOs. Firstly, their CPU resources are used for the generation of Monte Carlo event data. Secondly, the end user analysis object data is distributed to the site...
Dr
Jose Antonio Coarasa Perez
(Department of Physics - Univ. of California at San Diego (UCSD))
26/03/2009, 08:00
The Open Science Grid middleware stack has seen intensive development over the past years and has become more and more mature, as increasing numbers of sites have been successfully added to the infrastructure. Considerable effort has been put into consolidating this infrastructure and enabling it to provide a high degree of scalability, reliability and usability. A thorough evaluation of its...
Dr
Max Böhm
(EDS / CERN openlab)
26/03/2009, 08:00
GridMap (http://gridmap.cern.ch) has been introduced to the community at the EGEE'07 conference as a new monitoring tool that provides better visualization and insight to the state of the Grid than previous tools. Since then it has become quite popular in the grid community. Its 2 dimensional graphical visualization technique based on treemaps, coupled with a simple responsive AJAX based rich...
Dr
Maxim Potekhin
(BROOKHAVEN NATIONAL LABORATORY)
26/03/2009, 08:00
The Panda Workload Management System is designed around the concept of the Pilot Job - a "smart wrapper" for the payload executable that can probe the
environment on the remote worker node before pulling down the payload
from the server and executing it. Such a design allows for improved logging
and monitoring capabilities as well as flexibility in Workload Management.
In the Grid...
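The pilot pattern described above can be sketched as follows (illustrative Python only; the server URL, endpoint and message fields are invented, and this is not PanDA code):

# Toy pilot loop: probe the worker node, ask a (hypothetical) central server
# for a matching payload, then execute it and return its exit code.
import json, os, shutil, subprocess, urllib.request

def probe_environment():
    # Collect a few local facts the server could use to select a suitable payload.
    return {"free_gb": shutil.disk_usage("/tmp").free // 2**30,
            "cpus": os.cpu_count()}

def fetch_payload(server, env):
    req = urllib.request.Request(server + "/getjob",
                                 data=json.dumps(env).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())      # e.g. {"cmd": ["python3", "run.py"]}

def run_pilot(server="https://example.org/wms"):
    job = fetch_payload(server, probe_environment())
    if job:
        return subprocess.call(job["cmd"])  # execute the payload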
Dr
Ricardo Graciani Diaz
(Universitat de Barcelona)
26/03/2009, 08:00
DIRAC, the LHCb community Grid solution, has pioneered the use of pilot jobs in the Grid. Pilot jobs provide a homogeneous interface to a heterogeneous set of computing resources. At the same time, pilot jobs allow the scheduling decision to be delayed to the last moment, thus taking into account the precise running conditions at the resource and last-moment requests to the system.
The DIRAC...
Dr
Marie-Christine Sawley
(ETHZ)
26/03/2009, 08:00
Resource tracking, like usage monitoring, relies on fine granularity information communicated by each site on the Grid. Data are later aggregated to be analysed under different perspectives to yield global figures which will be used for decision making. The dynamic information collected from distributed sites must therefore be comprehensive, pertinent and coherent with upstream (planning) and...
Mr
Antonio Ceseracciu
(SLAC)
26/03/2009, 08:00
The Network Engineering team at the SLAC National Accelerator Laboratory is required to manage an increasing number and variety of network devices with a fixed amount of human resources. At the same time, networking equipment has acquired more intelligence to gain introspection and visibility onto the network.
Making such information readily available for network engineers and user support...
Mr
Andrey Bobyshev
(FERMILAB)
26/03/2009, 08:00
Emerging dynamic circuit services are being developed and deployed to facilitate high impact data movement within the research and education communities. These services normally require network awareness in the applications, in order to establish an end-to-end path on-demand programmatically. This approach has significant difficulties because user applications need to be modified to support...
Mr
Parag Mhashilkar
(Fermi National Accelerator Laboratory)
26/03/2009, 08:00
The Open Science Grid (OSG) offers access to hundreds of Compute elements (CE) and storage elements (SE) via standard Grid interfaces. The Resource Selection Service (ReSS) is a push-based workload management system that is integrated with the OSG information systems and resources. ReSS integrates standard Grid tools such as Condor, as a brokering service and the gLite CEMon, for gathering and...
Mr
Volker Buege
(Inst. fuer Experimentelle Kernphysik - Universitaet Karlsruhe)
26/03/2009, 08:00
An efficient administration of computing centres requires sophisticated tools for monitoring the local infrastructure. Sharing such resources in a grid infrastructure, like the Worldwide LHC Computing Grid (WLCG), brings with it a large number of external monitoring systems offering information on the status of the services of a grid site. This huge flood of information from many...
Dr
Bohumil Franek
(Rutherford Appleton Laboratory)
26/03/2009, 08:00
In the SMI++ framework, the real world is viewed as a collection of objects
behaving as finite state machines. These objects can represent real entities,
such as hardware devices or software tasks, or they can represent abstract
subsystems. A special language (SML) is provided for the object description.
The SML description is then interpreted by a Logic Engine (coded in C++)
to drive the...
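The object/state/action idea can be illustrated with a generic finite-state machine in Python (this is neither SML nor the SMI++ Logic Engine; the names and transitions are invented for the example):

# Generic illustration of "objects behaving as finite state machines".
class FsmObject:
    def __init__(self, name, initial, transitions):
        # transitions: {(state, action): next_state}
        self.name, self.state, self.transitions = name, initial, transitions

    def do(self, action):
        key = (self.state, action)
        if key in self.transitions:
            self.state = self.transitions[key]
        return self.state

# A toy "hardware device" object
hv_channel = FsmObject("HV_CHANNEL", "OFF",
                       {("OFF", "SWITCH_ON"): "RAMPING",
                        ("RAMPING", "READY"): "ON",
                        ("ON", "SWITCH_OFF"): "OFF"})
print(hv_channel.do("SWITCH_ON"))   # -> RAMPING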
Mr
Ales Krenek
(CESNET, CZECH REPUBLIC), Mr
Jiri Sitera
(CESNET, CZECH REPUBLIC), Mr
Ludek Matyska
(CESNET, CZECH REPUBLIC), Mr
Miroslav Ruda
(CESNET, CZECH REPUBLIC), Mr
Zdenek Sustr
(CESNET, CZECH REPUBLIC)
26/03/2009, 08:00
Logging and Bookkeeping (L&B) is a gLite subsystem responsible for
tracking jobs on the grid. Normally the user interacts with it via
glite-wms-job-status and glite-wms-job-logging-info commands.
Here we present other, less generally known but still useful L&B usage
patterns which are available with recently developed L&B features.
L&B exposes a HTML interface; pointing a web browser...
Dr
Jens Jensen
(STFC-RAL)
26/03/2009, 08:00
We show how to achieve interoperation between SDSC's Storage Resource Broker (SRB) and the Storage Resource Manager (SRM) implementations used in the Large Hadron Collider Computing Grid. Interoperation is achieved using gLite tools, to demonstrate file transfers between two different grids.
This presentation is different from the work demonstrated by the authors and collaborators at SC2007...
Dr
Andreas Gellrich
(DESY)
26/03/2009, 08:00
DESY is one of the world-wide leading centers for research with particle
accelerators and synchrotron light. In HEP DESY participates in LHC as a
Tier-2 center, supports on-going analyses of HERA data, is a leading
partner for ILC, and runs the National Analysis Facility (NAF) for LHC and
ILC. For the research with synchrotron light major new facilities are
operated and built (FLASH,...
Mr
Alexander Zaytsev
(Budker Institute of Nuclear Physics (BINP))
26/03/2009, 08:00
This contribution gives a thorough overview of the ATLAS TDAQ SysAdmin group activities which deals with administration of the TDAQ computing environment supporting High Level Trigger, Event Filter and other subsystems of the ATLAS detector operating at the LHC machine at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers,...
Vasco Chibante Barroso
(CERN)
26/03/2009, 08:00
All major experiments need tools that provide a way to keep a record of the events and activities, both during commissioning and operations. In ALICE (A Large Ion Collider Experiment) at CERN, this task is performed by the Alice Electronic Logbook (eLogbook), a custom-made application developed and maintained by the Data-Acquisition group (DAQ). Started as a statistics repository, the eLogbook...
Christian Ohm
(Department of Physics, Stockholm University)
26/03/2009, 08:00
The ATLAS BPTX stations are comprised of electrostatic button pick-up detectors, located 175 m away along the beam pipe on both sides of ATLAS. The pick-ups are installed as a part of the LHC beam instrumentation and used by ATLAS for timing purposes.
The usage of the BPTX signals in ATLAS is twofold; they are used both in the trigger system and for LHC beam monitoring. The ATLAS Trigger...
Alessandro De Salvo
(Istituto Nazionale di Fisica Nucleare Sezione di Roma 1)
26/03/2009, 08:00
The calibration of the ATLAS MDT chambers will be performed at remote sites,
called Remote Calibration Centers. Each center will process the calibration
data for the assigned part of the detector and send the results back to CERN
for general use in the reconstruction and analysis within 24h from the
calibration data taking.
In this work we present the data extraction mechanism, the data...
Remigius K Mommsen
(FNAL, Chicago, Illinois, USA)
26/03/2009, 08:00
The CMS event builder assembles events accepted by the first level trigger
and makes them available to the high-level trigger. The event builder needs
to handle a maximum input rate of 100 kHz and an aggregated throughput of
100 GBytes/s originating from approximately 500 sources. This paper presents
the chosen hardware and software architecture. The system consists of 2
stages: an...
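The quoted figures imply the following average sizes (illustrative arithmetic only):

\[
\langle\mathrm{event}\rangle \approx \frac{100\ \mathrm{GB/s}}{100\ \mathrm{kHz}} = 1\ \mathrm{MB},
\qquad
\langle\mathrm{fragment}\rangle \approx \frac{1\ \mathrm{MB}}{500} \approx 2\ \mathrm{kB},
\qquad
\approx 200\ \mathrm{MB/s}\ \mathrm{per\ source}.
\]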
Dr
Jose Flix Molina
(Port d'Informació Científica, PIC (CIEMAT - IFAE - UAB), Bellaterra, Spain)
26/03/2009, 08:00
The computing system of the CMS experiment works using distributed resources from more than 60 computing centres worldwide. These centres, located in Europe, America and Asia are interconnected by the Worldwide LHC Computing Grid. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established a procedure to extensively test all...
Dr
Alessandro Di Girolamo
(CERN)
26/03/2009, 08:00
This contribution describes how part of the monitoring of the services used in the computing systems of the LHC experiments has been integrated with the Service Level Status (SLS) framework.
The LHC experiments are using an increasing number of complex and heterogeneous services:
the SLS allows all these different services to be grouped and their status and availability to be reported by...
Dr
Doris Ressmann
(Karlsruher Institut of Technology)
26/03/2009, 08:00
All four LHC experiments are served by GridKa, the German WLCG Tier-1 at the Steinbuch Centre for Computing of the Karlsruhe Institute of Technology (KIT). Each of the experiments requires a significantly different setup of the dCache data management system. Therefore the use of a single dCache instance for all experiments can have negative effects at different levels, e.g. SRM, space manager...
Mr
Fernando Guimaraes Ferreira
(Univ. Federal do Rio de Janeiro (UFRJ))
26/03/2009, 08:00
The web system described here provides functionalities to monitor the Detector Control System (DCS) acquired data. The DCS is responsible for overseeing the coherent and safe operation of the ATLAS experiment hardware. In the context of the Hadronic Tile Calorimeter Detector, it controls the power supplies of the readout electronics acquiring voltages, currents, temperatures and coolant...
211.
Tools for offline access and visualization of ATLAS online control and data quality databases.
Mr
Lourenço Vaz
(LIP - Coimbra)
26/03/2009, 08:00
Data describing the conditions of the ATLAS detector and the Trigger and Data Acquisition system are stored in the Conditions DataBases (CDB), and may include from simple values to complex objects like online system messages or monitoring histograms. The CDB are deployed on COOL, a common infrastructure for reading and writing conditions data. Conditions data produced online are saved to an...
Dr
Josva Kleist
(Nordic Data Grid Facility)
26/03/2009, 08:00
Interoperability of grid infrastructures is becoming increasingly important in the emergence of large scale grid infrastructures based on national and regional initiatives.
To achieve interoperability of grid infrastructures, adaptations and bridging of many different systems and services need to be tackled. A grid infrastructure offers services for authentication, authorization, accounting,...
Prof.
Jorge Rodiguez
(Florida International University), Dr
Yujun Wu
(University of Florida)
26/03/2009, 08:00
The CMS experiment is expected to produce a few petabytes of data a
year and distribute them globally. Within the CMS computing infrastructure,
most user analyses and the production of Monte Carlo events will be
carried out at some 50 CMS Tier-2 sites. How to store the data and
allow physicists to access them efficiently has been a challenge, especially
for Tier-2...
Torsten Antoni
(GGUS, KIT-SCC)
26/03/2009, 08:00
The user and operations support of the EGEE series of projects can be captioned "regional support with central coordination". Its central building block is the GGUS portal, which acts as an entry point for users and support staff. It also acts as an integration platform for the distributed support effort. As WLCG relies heavily on the EGEE infrastructure, it is important that the support...
Prof.
Vincenzo Innocente
(CERN)
26/03/2009, 09:00
Computing in these "years zero" has been characterized by the advent of "multicore CPUs". Effective exploitation of this new kind of computing architecture requires
the adaptation of legacy software and
eventually a shift of the programming paradigms towards massive parallelism.
In this talk we will introduce the reasons that led to the introduction
of "multicore" hardware and the consequences ...
Dr
Cristinel Diaconu
(CPPM IN2P3)
26/03/2009, 10:00
High energy physics experiments collect data over long periods of time and exploit these data to produce physics publications. The scientific potential of an experiment is in principle defined and exhausted during the collaboration lifetime. However, the continuous improvement of the scientific
grounds, like the theory, experiment, simulation, new ideas or unexpected discoveries, may lead to...
Dr
Harry Renshall
(CERN), Dr
Jamie Shiers
(CERN)
26/03/2009, 11:30
This talk will summarize the main points that were discussed - and where possible agreed - at the WLCG Collaboration workshop held in Prague during the weekend immediately preceding CHEP.
The list of topics for the workshop includes:
* An analysis of the experience with WLCG services from 2008 data taking and processing;
* Requirements and schedule(s) for 2009;
* Readiness for 2009
Sasaki Takashi
(KEK)
26/03/2009, 12:30
Mr
Ricky Egeland
(Minnesota)
26/03/2009, 14:00
The PhEDEx Data Service provides access to information from the central PhEDEx database, as well as certificate-authenticated managerial operations such as requesting the transfer or deletion of data. The Data Service is integrated with the 'SiteDB' service for fine-grained access control, providing a safe and secure environment for operations. A plugin architecture allows server-side modules...
117.
Ring Recognition and Electron Identification in the RICH detector of the CBM Experiment at FAIR
Semen Lebedev
(GSI, Darmstadt / JINR, Dubna)
26/03/2009, 14:00
The Compressed Baryonic Matter (CBM) experiment at the future FAIR facility at Darmstadt will measure dileptons emitted from the hot and dense phase in heavy-ion collisions. In case of an electron measurement, a high purity of identified electrons is required in order to suppress the background. Electron identification in CBM will be performed by a Ring Imaging Cherenkov (RICH) detector and...
Dr
Andrea Chierici
(INFN-CNAF)
26/03/2009, 14:20
Virtualization is a proven software technology that is rapidly transforming the IT landscape and fundamentally changing the way that people compute. Recently all major software producers (e.g. Microsoft and RedHat) developed or acquired virtualization technologies.
Our institute is a Tier1 for LHC experiments and is experiencing lots of benefits from virtualization technologies, like...
Fabrizio Furano
(Conseil Europeen Recherche Nucl. (CERN))
26/03/2009, 14:20
Performance, reliability and scalability in data access are key issues in the context of Grid computing and High Energy Physics (HEP) data analysis. We present the technical details and the results of a large scale validation and performance measurement achieved at the INFN Tier1, the central computing facility of the Italian National Institute for Nuclear Research (INFN). The aim of this work...
Dr
Johan Messchendorp (for the PANDA collaboration)
(University of Groningen)
26/03/2009, 14:20
The Panda experiment at the future facility FAIR will provide valuable data for our
present understanding of the strong interaction. In preparation for the experiments,
large-scale simulations for design and feasibility studies are performed exploiting a new
software framework, Fair/PandaROOT, which is based on ROOT and the Virtual Monte Carlo
(VMC) interface. In this paper, the various...
Dr
Janusz Martyniak
(Imperial College London)
26/03/2009, 14:20
In this paper we describe the architecture and operation of the Real Time Monitor (RTM), developed by the Grid team in the HEP group at Imperial College London. This is arguably the most popular dissemination tool within the EGEE Grid, having been used on many occasions, including the GridFest and LHC inauguration events held at CERN in October 2008.
The RTM gathers information from EGEE sites...
Mrs
Rosi REED
(University of California, Davis)
26/03/2009, 14:20
Vertex finding is an important part of accurately reconstructing events at STAR since many physics parameters, such as the transverse momentum of primary particles, depend on the vertex location. Many analyses depend on trigger selection and require an accurate determination of where the interaction that fired the trigger occurred. Here we present two vertex finding methods, the Pile-Up Proof...
Dr
Kirill Prokofiev
(CERN)
26/03/2009, 14:40
In anticipation of the First LHC data to come, a considerable effort has been devoted to ensure the efficient reconstruction of vertices in the ATLAS detector. This includes the reconstruction of photon conversions, long lived particles, secondary vertices in jets as well as finding and fitting of primary vertices. The implementation of the corresponding algorithms requires a modular design...
Dr
Maria Grazia Pia
(INFN GENOVA)
26/03/2009, 14:40
Geant4 is nowadays a mature Monte Carlo system; new functionality has been extensively added to the toolkit since its first public release in 1998, nevertheless, its architectural design and software technology features have remained substantially unchanged since their original conception in the RD44 phase of the mid ‘90s.
A R&D project has been recently launched at INFN to revisit Geant4...
Dr
Andrea Bocci
(Università and INFN, Pisa)
26/03/2009, 15:00
The CMS offline software contains a widespread set of algorithms to identify jets originating from the weak decay of b-quarks. Different physical properties of b-hadron decays like lifetime information, secondary vertices and soft leptons are exploited. The variety of selection algorithms range from simple and robust ones, suitable for early data-taking and online environments such as the...
Dr
Patrick Fuhrmann
(DESY)
26/03/2009, 15:00
At the time of CHEP'09, the LHC Computing Grid approach and implementation is rapidly approaching the moment it finally has to prove its feasibility. The same is true for dCache, the grid middle-ware storage component, meant to store and manage the largest share of LHC data outside of the LHC Tier 0.
This presentation will report on the impact of recently deployed dCache sub-components,...
Dr
Mohammad Al-Turany
(GSI DARMSTADT)
26/03/2009, 15:00
FairRoot is the simulation and analysis framework used by CBM and PANDA experiments at FAIR/GSI.
The use of GPUs for event reconstruction in FairRoot will be presented. The fact that CUDA (Nvidia's Compute Unified Device Architecture) development tools work alongside the conventional C/C++ compiler makes it possible to mix GPU code with general-purpose code for the host CPU, based on...
Ramiro Voicu
(California Institute of Technology)
26/03/2009, 15:00
USLHCNet provides transatlantic connections of the Tier1 computing facilities at Fermilab and Brookhaven with the Tier0 and Tier1 facilities at CERN as well as Tier1s elsewhere in Europe and Asia. Together with ESnet, Internet2 and the GEANT, USLHCNet also supports connections between the Tier2 centers. The USLHCNet core infrastructure is using the Ciena Core Director devices that provide...
Wolfgang Ehrenfeld
(DESY)
26/03/2009, 15:00
The ATLAS trigger system is responsible for selecting the interesting collision events delivered by the Large Hadron Collider (LHC). The ATLAS trigger will need to achieve a ~10^-7 rejection factor against random proton-proton collisions, and still be able to efficiently select interesting events. After a first processing level based on hardware, the final event selection is based on custom...
Daniel Colin Van Der Ster
(Conseil Europeen Recherche Nucl. (CERN))
26/03/2009, 15:20
Effective distributed user analysis requires a system which meets the demands of running arbitrary user applications on sites with varied configurations and availabilities. Tracking such a system requires a tool not only to monitor the functional status of each grid site, but also to perform large-scale analysis challenges on the ATLAS grids. This work presents one such...
Mr
Aatos Heikkinen
(Helsinki Institute of Physics)
26/03/2009, 15:20
We report our experience of using the ROOT package TMVA for multivariate data analysis, applied to the problem of tau tagging in the framework of heavy charged MSSM Higgs boson searches at the LHC.
With a generator-level analysis, we investigate how, in the ideal case, tau tagging could be performed and hadronic tau decays separated from the hadronic jets of the QCD multi-jet background present in...
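For readers unfamiliar with TMVA, the sketch below shows the generic classification workflow (declare variables, register signal and background trees, book a method, then train, test and evaluate), following the Factory interface of TMVA as shipped with ROOT 5; in recent ROOT versions the data-handling calls have moved to TMVA::DataLoader. The input file, tree names and variables are placeholders, not those of the tau-tagging analysis described in the abstract.

    // Generic TMVA classification macro (ROOT 5 era Factory interface).
    // File name, tree names and variables below are hypothetical.
    #include "TCut.h"
    #include "TFile.h"
    #include "TTree.h"
    #include "TMVA/Factory.h"
    #include "TMVA/Types.h"

    void trainTauTag() {
      TFile* input   = TFile::Open("tautag_input.root");   // hypothetical input
      TTree* sigTree = (TTree*)input->Get("signalTaus");    // hypothetical tree
      TTree* bkgTree = (TTree*)input->Get("qcdJets");       // hypothetical tree

      TFile* output = TFile::Open("TMVA_tautag.root", "RECREATE");
      TMVA::Factory factory("TauTag", output, "!V:AnalysisType=Classification");

      // Example discriminating variables (placeholders).
      factory.AddVariable("rtau",      'F');  // leading-track momentum fraction
      factory.AddVariable("isolation", 'F');  // calorimeter isolation

      factory.AddSignalTree(sigTree, 1.0);
      factory.AddBackgroundTree(bkgTree, 1.0);
      factory.PrepareTrainingAndTestTree("", "SplitMode=Random:NormMode=NumEvents");

      // Book one multivariate method, e.g. a boosted decision tree.
      factory.BookMethod(TMVA::Types::kBDT, "BDT", "NTrees=400:MaxDepth=3");

      factory.TrainAllMethods();
      factory.TestAllMethods();
      factory.EvaluateAllMethods();
      output->Close();
    }

The trained weights are written alongside the evaluation output, so the same discriminant can later be applied to data with a TMVA::Reader.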
Dr
Hans wenzel
(Fermilab), Dr
Marian Zvada
(Fermilab)
26/03/2009, 15:20
We will present the monitoring system for the analysis farm of the CDF experiment at the Tevatron (CAF). All monitoring data are collected in a relational database (PostgreSQL), with SQL providing a common interface to the monitoring data.
These monitoring data are displayed by a web application in the form of Java Server Pages served by the Apache Tomcat server.
For the database...
Robert Quick
(Indiana University)
26/03/2009, 15:20
The Open Science Grid (OSG) Resource and Service Validation (RSV) project seeks to provide solutions for several grid fabric monitoring problems, while at the same time providing a bridge between the OSG operations and monitoring infrastructure and the WLCG (Worldwide LHC Computing Grid) infrastructure. The RSV-based OSG fabric monitoring begins with local resource fabric monitoring, which...
Dr
Brinick Simmons
(Department of Physics and Astronomy - University College London)
26/03/2009, 15:20
The ATLAS experiment's RunTimeTester (RTT) is a software testing
framework into which software package developers can plug their tests,
have them run automatically, and obtain feedback via email and the web.
The RTT processes the ATLAS nightly build releases, using acron to launch runs
on a dedicated cluster at CERN, and submitting user jobs to private LSF
batch queues. Running higher...
Heidi Schellman
(Northwestern University)
26/03/2009, 15:40
The Minerva experiment is a small, fully active neutrino experiment which will run in 2010 in the NuMI beamline at Fermilab. The offline software is based on the GAUDI framework. The small Minerva software development team has used the GAUDI code base to produce a functional software environment for the simulation of neutrino interactions generated by the GENIE generator and the analysis...
Dr
Stephen Burke
(RUTHERFORD APPLETON LABORATORY)
26/03/2009, 15:40
The GLUE information schema has been in use in the LCG/EGEE production Grid since the first version was defined in 2002. In 2007 a major redesign of GLUE, version 2.0, was started in the context of the Open Grid Forum following the creation of the GLUE Working Group. This process has taken input from a number of Grid projects, but as a major user of the version 1 schema, LCG/EGEE has had a...
Mr
Alexander Zaytsev
(Budker Institute of Nuclear Physics (BINP))
26/03/2009, 16:30
CMD-3 is the general-purpose cryogenic magnetic detector for the VEPP-2000 electron-positron collider, which is being commissioned at the Budker Institute of Nuclear Physics (BINP, Novosibirsk, Russia). The main aspects of the physics program of the experiment are precision measurements of hadronic cross sections, the study of known vector mesons and the search for new ones, and the study of ppbar and nnbar production...
Mr
Pierre VANDE VYVRE
(CERN)
26/03/2009, 16:30
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). A large-bandwidth, flexible Data Acquisition System (DAQ) has been designed and deployed to collect sufficient statistics in the short running time available per year for heavy ions and to...
Dr
Jukka Klem
(Helsinki Institute of Physics HIP)
26/03/2009, 16:30
The Compact Muon Solenoid (CMS) is one of the LHC (Large Hadron Collider) experiments at CERN. CMS computing relies on different grid infrastructures to provide calculation and storage resources. The major grid middleware stacks used for CMS computing are gLite, OSG and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) builds one of the Tier-2 centers for CMS computing....
Ulrich Schwickerath
(CERN)
26/03/2009, 16:30
Instrumenting jobs throughout their lifecycle is not obvious, as they are quite independent after being submitted, crossing multiple environments and locations before landing on a worker node. In order to measure correctly the resources used at each step, and to compare them with the view from the fabric infrastructure, we propose a solution using the Messaging System for the Grids (MSG)...
Dr
Alessandro di Girolamo
(CERN IT/GS), Dr
Andrea Sciaba
(CERN IT/GS), Dr
Elisa Lanciotti
(CERN IT/GS), Dr
Nicolo Magini
(CERN IT/GS), Dr
Patricia Mendez Lorenzo
(CERN IT/GS), Dr
Roberto Santinelli
(CERN IT/GS), Dr
Simone Campana
(CERN IT/GS), Dr
Vincenzo Miccio
(CERN IT/GS)
26/03/2009, 16:30
In a few months, the four LHC detectors will collect data at a significant rate that is expected to ramp up to around 15 PB per year. To process such a large quantity of data, the experiments have developed over the last few years distributed computing models that build on the overall WLCG service. These integrate the different services provided by the gLite middleware into the computing models of...
Dr
Oxana Smirnova
(Lund University / NDGF)
26/03/2009, 16:50
The Advanced Resource Connector (ARC) middleware introduced by
NorduGrid is one of the leading Grid solutions used by scientists
worldwide. Its simplicity, reliability and portability, matched by
unparalleled efficiency, make it attractive for large-scale facilities
like the Nordic DataGrid Facility (NDGF) and its Tier1 center, and
also for smaller scale projects. Being well-proven in...
Prof.
joel snow
(Langston University)
26/03/2009, 16:50
DZero uses a variety of resources on four continents to pursue a
strategy of flexibility and automation in the generation of simulation
data. This strategy provides a resilient and opportunistic system
which ensures an adequate and timely supply of simulation data to
support DZero's physics analyses. A mixture of facilities, dedicated
and opportunistic, specialized and generic, large...
Fred Luehring
(Indiana University)
26/03/2009, 16:50
We update our CHEP06 presentation on the ATLAS experiment software infrastructure used to build, validate, distribute, and document the ATLAS offline software. The ATLAS collaboration's computational resources and software developers are distributed around the globe in more than 30 countries. The ATLAS offline code base is currently over 5 MSLOC in 10000+ C++ classes organized into about...
Lorenzo Moneta
(on behalf of the ROOT, TMVA, RooFit and RooStats teams)
26/03/2009, 16:50
ROOT, a data analysis framework, provides advanced mathematical and statistical methods needed by the LHC experiments for analyzing their data. In addition, the ROOT distribution includes packages such as TMVA, which provides advanced multivariate analysis tools for both classification and regression, and RooFit for performing data modeling and complex fitting.
Recently a large effort is...
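To give a flavour of the RooFit modelling interface mentioned above, the sketch below builds a Gaussian PDF, generates a toy dataset and fits the parameters back, following the classic RooFit tutorial style of the ROOT 5 era (recent ROOT versions return smart pointers from generate()). The observable range, parameter ranges and event count are arbitrary illustrative choices.

    // Minimal RooFit example: define a Gaussian PDF, generate a toy
    // dataset and fit the parameters back. Ranges and event counts are
    // arbitrary; run as a ROOT macro, e.g. root -l fitToyGaussian.C
    #include "RooRealVar.h"
    #include "RooGaussian.h"
    #include "RooDataSet.h"
    #include "RooPlot.h"
    #include "TCanvas.h"

    void fitToyGaussian() {
      RooRealVar x("x", "observable", -10, 10);
      RooRealVar mean("mean", "mean of gaussian", 0.0, -2.0, 2.0);
      RooRealVar sigma("sigma", "width of gaussian", 1.5, 0.1, 5.0);
      RooGaussian gauss("gauss", "gaussian PDF", x, mean, sigma);

      // Generate a toy dataset and fit the PDF parameters to it.
      RooDataSet* data = gauss.generate(x, 2000);
      gauss.fitTo(*data);

      // Plot the data with the fitted PDF superimposed.
      RooPlot* frame = x.frame();
      data->plotOn(frame);
      gauss.plotOn(frame);
      TCanvas c("c", "toy fit");
      frame->Draw();
      c.SaveAs("toy_gaussian_fit.png");
    }

The same model objects can be handed to RooStats for interval estimation or hypothesis tests, which is the main point of coupling the fitting and statistics packages inside one distribution.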
Dr
Jose Antonio Coarasa Perez
(Department of Physics - Univ. of California at San Diego (UCSD) and CERN, Geneva, Switzerland)
26/03/2009, 16:50
The CMS online cluster consists of more than 2000 computers, mostly running Scientific Linux CERN, hosting the 10000 application instances responsible for data acquisition and experiment control on a 24/7 basis.
The challenging size of the cluster constrained the design and implementation of the infrastructure:
- The critical nature of the control applications demands a tight...
Giuseppe Codispoti
(Dipartimento di Fisica)
26/03/2009, 16:50
The CMS experiment at the LHC started using the Resource Broker (from the EDG and LCG projects) to submit production and analysis jobs to the distributed computing resources of the WLCG infrastructure over 6 years ago. In 2006 it started using the gLite Workload Management System (WMS) and Logging & Bookkeeping (LB). In the current configuration, the interaction with the gLite-WMS/LB happens through the CMS...
Gabriele Garzoglio
(FERMI NATIONAL ACCELERATOR LABORATORY)
26/03/2009, 17:10
The Open Science Grid (OSG) and the Enabling Grids for E-sciencE (EGEE) have a common security model, based on Public Key Infrastructure. Grid resources grant access to users on the basis of their membership in a Virtual Organization (VO), rather than their personal identity. Users push VO membership information to resources in the form of identity attributes, thus declaring that resources will be...
Daniele Spiga
(Universita degli Studi di Perugia & CERN)
26/03/2009, 17:10
CMS has a distributed computing model, based on a hierarchy of tiered regional computing centres. However, the end physicist is interested neither in the details of the computing model nor in the complexity of the underlying infrastructure, but only in accessing and using the remote services easily and efficiently. The CMS Remote Analysis Builder (CRAB) is the official CMS tool that allows access to...
Massimo Sgaravatto
(INFN Padova)
26/03/2009, 17:10
In this paper we describe the use of CREAM and CEMON for job
submission and management within the gLite Grid middleware. Both CREAM
and CEMON address one of the most fundamental operations of a Grid
middleware, that is job submission and management. Specifically, CREAM
is a job management service used for submitting, managing and
monitoring computational jobs. CEMON is an event...
Dr
Marian Zvada
(Fermilab)
26/03/2009, 17:30
Many members of large science collaborations already have specialized grids available to advance their research. The need for more computing resources for data analysis has forced the Collider Detector at Fermilab (CDF) collaboration to move beyond the use of dedicated resources and start exploiting Grid resources.
Nowadays, the CDF experiment is increasingly relying on...
Mr
Maxim Grigoriev
(FERMILAB)
26/03/2009, 17:30
Fermilab hosts the US Tier-1 center for data storage and analysis of the Large Hadron Collider's (LHC) Compact Muon Solenoid (CMS) experiment. To satisfy operational requirements for the LHC networking model, the networking group at Fermilab, in collaboration with Internet2 and ESnet, is participating in the perfSONAR-PS project. This collaboration has created a collection of network...
Dr
Alessandra Doria
(INFN Napoli)
26/03/2009, 17:30
An optimized use of the grid computing resources in the ATLAS experiment requires the enforcement of a mechanism of job priorities and of resource sharing among the different activities inside the ATLAS VO. This mechanism has been implemented through the publication of VOViews in the information system and the implementation of fair shares per UNIX group in the batch system. The VOView concept...
Yoshiji Yasu
(High Energy Accelerator Research Organization (KEK))
26/03/2009, 17:30
DAQ-Middleware is a software framework for network-distributed DAQ systems based on Robot Technology Middleware, an international standard of the Object Management Group (OMG) in robotics developed by AIST. A DAQ-Component is a software unit of DAQ-Middleware. Basic components have already been developed: for example, Gatherer is a readout component, Logger is a logging component, Monitor is...
Mr
Dmitri Konstantinov
(IHEP Protvino)
26/03/2009, 17:30
The Generator Services project collaborates with the Monte Carlo generator authors and with the LHC experiments in order to prepare validated LCG-compliant code for both the theoretical and the experimental communities at the LHC. On the one hand, it provides technical support as far as the installation and maintenance of the generator packages on the supported platforms is...
Andrea Ventura
(INFN Lecce, Universita' degli Studi del Salento, Dipartimento di Fisica, Lecce)
26/03/2009, 17:30
The ATLAS experiment at CERN's Large Hadron Collider has been designed and built for new discoveries in High Energy Physics as well as for precision measurements of Standard Model parameters. To cope with the limited data acquisition capability at the LHC design luminosity, the ATLAS trigger system will have to select a very small rate of physically interesting events (~200 Hz) among about 40...
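A rough consistency check of these rates, assuming the nominal 40 MHz bunch-crossing rate and roughly 25 inelastic pp interactions per crossing at design luminosity (figures not stated in the abstract itself), links the ~200 Hz output to the ~10^-7 rejection factor quoted for the ATLAS trigger elsewhere in this programme:

    % Assumed inputs: 40 MHz bunch-crossing rate, ~25 inelastic pp
    % interactions per crossing at design luminosity.
    \[
      \frac{200\,\mathrm{Hz}}{40\,\mathrm{MHz}} = 5\times 10^{-6}
      \;\text{per bunch crossing},
      \qquad
      \frac{200\,\mathrm{Hz}}{40\,\mathrm{MHz}\times 25} \approx 2\times 10^{-7}
      \;\text{per pp interaction}.
    \]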
Mogens Dam
(Niels Bohr Institute)
26/03/2009, 17:50
The ATLAS tau trigger is a challenging component of the online event selection, as it has to apply a rejection of 10^6 in a very short time with a typical signal efficiency of 80%. Whilst at the first, hardware level narrow calorimeter jets are selected, at the second and third, software levels candidates are refined on the basis of simple but fast (second level) and slow but accurate (third...
Dr
Andrei TSAREGORODTSEV
(CNRS-IN2P3-CPPM, MARSEILLE)
26/03/2009, 17:50
DIRAC, the LHCb community Grid solution, was considerably reengineered in order to meet all the requirements for processing the data coming from the LHCb experiment. It covers all the tasks, starting with raw data transport from the experiment area to grid storage, through data processing, up to the final user analysis. The reengineered DIRAC3 version of the system includes a...
Giovanni Organtini
(Univ. + INFN Roma 1)
26/03/2009, 17:50
The Electromagnetic Calorimeter (ECAL) of the CMS experiment at the LHC
is made of about 75000 scintillating crystals.
The detector properties must be continuously monitored
in order to ensure the extreme stability and precision required by its design.
This leads to a very large volume of non-event data to be accessed continuously by
shifters, experts, automatic monitoring tasks,...
Mr
Philip DeMar
(FERMILAB)
26/03/2009, 17:50
Fermilab has been one of the earliest sites to deploy data circuits in production for wide-area high impact data movement. The US-CMS Tier-1 Center at Fermilab uses end-to-end (E2E) circuits to support data movement with the Tier-0 Center at CERN, as well as with all of the US-CMS Tier-2 sites. On average, 75% of the network traffic into and out of the Laboratory is carried on E2E circuits....
Ian Fisk
(Fermi National Accelerator Laboratory (FNAL))
26/03/2009, 18:10
CMS is in the process of commissioning a complex detector and a globally distributed computing model simultaneously. This represents a unique challenge for the current generation of experiments. Even at the beginning there are not sufficient analysis or organized processing resources at CERN alone. In this presentation we will discuss the unique computing challenges CMS expects to face during...
Dr
Ivan Kisel
(GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt)
26/03/2009, 18:10
The CBM Collaboration builds a dedicated heavy-ion experiment to investigate the properties of highly compressed baryonic matter as it is produced in nucleus-nucleus collisions at the Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany. This requires the collection of a huge number of events which can only be obtained by very high reaction rates and long data taking periods....
Robert Petkus
(Brookhaven National Laboratory)
26/03/2009, 18:10
Robust, centralized system and application logging services are vital to all computing organizations, regardless of size. For the past year, the RHIC/USATLAS Computing Facility (RACF) has dramatically augmented the utility of logging services with Splunk. Splunk is a powerful application that functions as a log search engine, providing fast, real-time access to data from servers,...
Dr
Alina Grigoras
(CERN PH/AIP), Dr
Andreas Joachim Peters
(CERN IT/DM), Dr
Costin Grigoras
(CERN PH/AIP), Dr
Fabrizio Furano
(CERN IT/GS), Dr
Federico Carminati
(CERN PH/AIP), Dr
Latchezar Betev
(CERN PH/AIP), Dr
Pablo Saiz
(CERN IT/GS), Dr
Patricia Mendez Lorenzo
(CERN IT/GS), Dr
Predrag Buncic
(CERN PH/SFT), Dr
Stefano Bagnasco
(INFN/Torino)
26/03/2009, 18:10
With the startup of the LHC, the ALICE detector will collect data at a rate that, after two years, will reach 4 PB per year. To process such a large quantity of data, ALICE has developed over ten years a distributed computing environment, called AliEn, integrated with the WLCG environment. The ALICE environment presents several original solutions, which have shown their viability in a number of...
Vasile Mihai Ghete
(Institut fuer Hochenergiephysik (HEPHY))
26/03/2009, 18:10
The CMS L1 Trigger processes the muon and calorimeter detector data using a complex system of custom hardware processors. A bit-level emulation of the trigger data processing has been developed. This is used to validate and monitor the trigger hardware, to simulate the trigger response in Monte Carlo data and, for some components, to seed higher-level triggers. The multiple use cases are...
Dr
Julius Hrivnac
(LAL)
27/03/2009, 10:00
Plenary
Dr
Ales Krenek
(MASARYK UNIVERSITY, BRNO, CZECH REPUBLIC)
27/03/2009, 11:00
Plenary
Dagmar Adamova
(Nuclear Physics Institute)
27/03/2009, 11:30
Plenary
Stella Shen
(Academia Sinica),
Vicky Pei-Hua HUANG
(Academia Sinica)
27/03/2009, 12:30
Plenary