Dr
Jos Engelen
(CERN)
13/02/2006, 10:00
Gabriele Garzoglio
(FERMI NATIONAL ACCELERATOR LABORATORY)
13/02/2006, 11:00
In 2005, the DZero Data Reconstruction project processed 250 terabytes of data on
the Grid, using 1,600 CPU-years of computing cycles in 6 months. The large
computational task required a high level of refinement of the SAM-Grid system, the
integrated data, job, and information management infrastructure of the RunII
experiments at Fermilab. The success of the project was in part due to the...
Mr
Lars Schley
(University Dortmund, IRF-IT, Germany)
13/02/2006, 11:00
This paper discusses an architectural approach to enhance job scheduling in data
intensive applications in HEP computing. First, a brief introduction to the current
grid system based on LCG/gLite is given, current bottlenecks are identified and
possible extensions to the system are described. We will propose an extended
scheduling architecture, which adds a scheduling framework on top...
Dr
Silvio Pardi
(DIPARTIMENTO DI MATEMATICA ED APPLICAZIONI "R.CACCIOPPOLI")
13/02/2006, 11:00
The INFN-GRID project allows experimenting with and testing many different and innovative
solutions in the GRID environment. In this research and development it is important to
find the most useful solutions to simplify the management of and access to the resources.
In the VIRGO laboratory in Napoli we have tested a non standard implementation based
on LCG 2.6.0 by using a diskless solution...
Mieczyslaw Krasny
(LPNHE, University Paris)
13/02/2006, 11:00
Traditionally, in the pre-LHC multi-purpose high-energy experiments the
diversification of their physics programs has been largely decoupled from the process
of data-taking - physics groups could only influence the selection criteria of
recorded events according to predefined trigger menus. In particular, the
physics-oriented choice of subdetector data and the implementation of refined...
Mr
Andreas Wildauer
(UNIVERSITY OF INNSBRUCK)
13/02/2006, 11:00
The design of a general jet tagging algorithm for the ATLAS detector reconstruction
software is presented.
For many physics analyses, reliable and efficient flavour identification, 'tagging',
of jets is vital in the process of reconstructing the physics content of the event.
To allow for a broad range of identification methods emphasis is put on the
flexibility of the framework. A...
Dr
Rosa Palmiero
(INFN and University of Naples)
13/02/2006, 11:00
The Grid technology is attracting a lot of interest, involving hundreds of
researchers and software engineers around the world. The characteristics of the Grid
demand the development of suitable monitoring systems able to obtain the significant
information needed to make management decisions and control system behaviour. In
this paper we are going to analyse a formal declarative interpreted...
Mr
Sven Karstensen
(DESY Hamburg)
13/02/2006, 11:00
The next generations of large colliders and their experiments will have the advantage
that groups from all over the world will participate with their competence to meet
the challenges of the future. Therefore it's necessary to become even more global
than in the past, giving members the option of remote access to most controlling
parts of these facilities. The experience in the past has...
Dr
Joachim Flammer
(CERN)
13/02/2006, 11:00
gLite is the next generation middleware for grid computing. Born from the
collaborative efforts of more than 80 people in 12 different academic and industrial
research centers as part of the EGEE Project, gLite provides a bleeding-edge, best-
of-breed framework for building grid applications tapping into the power of
distributed computing and storage resources across the Internet....
Dr
Alexander Borissov
(University of Glasgow, Scotland, UK)
13/02/2006, 11:00
The HERMES experiment at DESY has performed extensive measurements on diffractive
production of light vector mesons (rho^0, omega, phi) in the intermediate energy
region. Spin density matrix elements (SDMEs) were determined for exclusive
diffractive rho^0 and phi mesons and compared with results of high energy
experiments. Several methods for the extraction of SDMEs have been applied on...
Dr
Cristian Stanescu
(Istituto Nazionale Fisica Nucleare - Sezione Roma III)
13/02/2006, 11:00
The data taking of the ARGO-YBJ experiment in Tibet is operational with 54 RPC clusters
installed and is moving rapidly to a configuration of more than 100 clusters. The paper
describes the processing of the experimental data of this phase, based on a local computer
farm. The software developed for data management, job submission and information
retrieval is described together with the...
Dr
Alessandra Forti
(Univ. of Milano Faculty of Art), Dr
Chris Brew
(CCLRC - RAL)
13/02/2006, 11:00
For the BaBar Computing Group
We describe enhancements to the BaBar Experiment's distributed Monte Carlo generation
system to make use of European and North American GRID resources and present the
results with regard to BaBar's latest cycle of Monte-Carlo production. We compare job
success rates and manageability issues between GRID and non-GRID production and
present an investigation...
Vardan Gyurjyan
(JEFFERSON LAB)
13/02/2006, 11:00
A software-agent-based control system has been implemented to control experiments running on
the CLAS detector at Jefferson Lab. Within the CLAS experiments the DAQ, trigger,
detector and beam line control systems are both logically and physically separated,
and are implemented independently using a common software infrastructure. The CLAS
experimental control system (ECS) was designed using earlier...
Mr
Alexei Sibidanov
(Budker Institute of Nuclear Physics)
13/02/2006, 11:00
CMD-3 is the general-purpose cryogenic magnetic detector for the VEPP-2000
electron-positron collider, which is being commissioned at the Budker Institute of
Nuclear Physics (BINP, Novosibirsk, Russia). The main aspects of the physics program of
the experiment are the study of known and the search for new vector mesons, and the study
of the ppbar and nnbar production cross sections in the vicinity of the threshold and...
Mr
Alexander Zaytsev
(Budker Institute of Nuclear Physics (BINP))
13/02/2006, 11:00
CMD-3 is the general-purpose cryogenic magnetic detector for the VEPP-2000
electron-positron collider, which is being commissioned at the Budker Institute of
Nuclear Physics (BINP, Novosibirsk, Russia). The main aspects of the physics program of
the experiment are the study of known and the search for new vector mesons, and the study
of the ppbar and nnbar production cross sections in the vicinity of the threshold and...
Mr
Sergey Pirogov
(Budker Institute of Nuclear Physics)
13/02/2006, 11:00
CMD-3 is the general-purpose cryogenic magnetic detector for the VEPP-2000
electron-positron collider, which is being commissioned at the Budker Institute of
Nuclear Physics (BINP, Novosibirsk, Russia). The main aspects of the physics program of
the experiment are the study of known and the search for new vector mesons, and the study
of the ppbar and nnbar production cross sections in the vicinity of the threshold...
Mr
Elliott Wolin
(Jefferson Lab)
13/02/2006, 11:00
cMsg is a highly extensible open-source framework within which one can deploy
multiple underlying interprocess communication systems. It is powerful enough to
support asynchronous publish/subscribe communications as well as synchronous
peer-to-peer communications. It further includes a proxy system whereby client
requests are transported to a remote server that actually connects to the...
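The asynchronous publish/subscribe pattern at the core of such a framework can be illustrated with a minimal in-process sketch; the class and method names below are illustrative assumptions, not cMsg's actual API:

```python
from collections import defaultdict

class MessageBus:
    """Minimal in-process publish/subscribe bus (illustrative,
    not the cMsg API): callbacks are registered per subject and
    every published message is delivered to matching subscribers."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # subject -> callbacks

    def subscribe(self, subject, callback):
        self._subscribers[subject].append(callback)

    def publish(self, subject, message):
        # deliver the message to every callback registered for this subject
        for callback in self._subscribers[subject]:
            callback(message)

bus = MessageBus()
received = []
bus.subscribe("daq/status", received.append)
bus.publish("daq/status", "run 42 started")
bus.publish("other/subject", "ignored")  # no subscriber listens here
```

In a real deployment the bus would live in a separate server process and deliveries would be asynchronous; the sketch only shows the subject-based routing.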
Dr
Nayana Majumdar
(Saha Institute of Nuclear Physics)
13/02/2006, 11:00
The three dimensional electrostatic field configuration in a multiwire proportional
chamber (MWPC) has been simulated using an efficient boundary element method (BEM)
solver set up to solve an integral equation of the first kind. To compute the charge
densities over the bounding surfaces representing the system for known potentials,
the nearly exact formulation of BEM has been implemented...
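As a toy illustration of such a first-kind formulation (not the solver described in the abstract), one can discretize a surface held at a known potential into panels, assemble an influence matrix, and solve for the panel charge densities; the 1D geometry and logarithmic kernel below are assumptions made only for this sketch:

```python
import numpy as np

# Toy BEM discretization of a first-kind integral equation:
# phi_i = sum_j A[i, j] * sigma_j, where A[i, j] is the influence
# of a unit charge density on panel j at collocation point i.
n = 20
x = (np.arange(n) + 0.5) / n            # panel midpoints on [0, 1]
h = 1.0 / n                             # panel width
A = np.empty((n, n))
for i in range(n):
    for j in range(n):
        if i == j:
            # analytic self-term for the logarithmic kernel
            A[i, j] = -h * (np.log(h / 2.0) - 1.0)
        else:
            A[i, j] = -h * np.log(abs(x[i] - x[j]))
phi = np.ones(n)                        # surface held at unit potential
sigma = np.linalg.solve(A, phi)         # unknown charge densities
residual = np.max(np.abs(A @ sigma - phi))
```

The solver in the abstract uses a nearly exact evaluation of the influence coefficients over bounding surfaces; the midpoint rule above is the crudest possible stand-in.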
Dr
Monica Verducci
(European Organization for Nuclear Research (CERN))
13/02/2006, 11:00
The size and complexity of LHC experiments raise unprecedented challenges not only in
terms of detector design, construction and operation, but also in terms of software
models and data persistency. One of the more challenging tasks is the calibration of
the 375000 Monitored Drift Tubes, that will be used as precision tracking detectors
in the Muon Spectrometer of the ATLAS experiment. An...
Dr
Andreas Heiss
(FORSCHUNGSZENTRUM KARLSRUHE)
13/02/2006, 11:00
GridKa, the German Tier-1 center in the Worldwide LHC Computing Grid (WLCG), supports
all four LHC experiments, ALICE, ATLAS, CMS and LHCb as well as currently some
non-LHC high energy physics experiments. Several German and European Tier-2 sites
will be connected to GridKa as their Tier-1. We present technical and organizational
aspects pertaining the connection and support of the Tier-2s...
Moreno Marzolla
(INFN Padova)
13/02/2006, 11:00
An efficient and robust system for accessing computational resources and managing job
operations is a key component of any Grid framework designed to support a large
distributed computing environment. CREAM (Computing Resource Execution And
Management) is a simple, minimal system designed to provide efficient processing of a
large number of requests for computation on managed resources....
Dr
Johannes Elmsheuser
(Ludwig-Maximilians-Universität München)
13/02/2006, 11:00
The German LHC computing resources are built on the Tier 1 center at Gridka in
Karlsruhe and several planned Tier 2 centers. These facilities provide us with a
testbed on which we can evaluate current distributed analysis tools. Various aspects
of the analysis of simulated data using LCG middleware and local batch systems have been
tested and evaluated. Here we present our experiences with...
Prof.
Patrick Skubic
(University of Oklahoma)
13/02/2006, 11:00
Hadron Collider experiments in progress at Fermilab's Tevatron and under construction
at the Large Hadron Collider (LHC) at CERN will record many petabytes of data in
pursuing the goals of understanding nature and searching for the origin of mass.
Computing resources required to analyze these data far exceed the capabilities of any
one institution. The computing grid has long been...
Dr
Ashok Agarwal
(University of Victoria)
13/02/2006, 11:00
The heterogeneity of resources in computational grids, such as the Canadian GridX1,
makes application deployment a difficult task. Virtual machine environments promise
to simplify this task by homogenizing the execution environment across the grid. One
such environment, Xen, has been demonstrated to be a highly performing virtual
machine monitor. In this work, we evaluate the...
Dr
Thomas Kuhr
(UNIVERSITY OF KARLSRUHE, GERMANY), Mr
Ulrich Kerzel
(UNIVERSITY OF KARLSRUHE, GERMANY)
13/02/2006, 11:00
The German Grid computing centre "GridKa" offers large computing and storage
facilities to the Tevatron and LHC experiments, as well as BaBar and Compass. It has
been the first large scale CDF cluster to adopt and use the FermiGrid software "SAM"
to enable users to perform data-intensive analyses. The system has been operated on
production level for about 2 years. We review the challenges...
Mr
Laurent GARNIER
(LAL-IN2P3-CNRS)
13/02/2006, 11:00
We present a short communication on our first experience with C# and Mono
in an OpenScientist context: mainly an attempt to integrate Inventor within a C#
context and then within the native GUI API that comes with C#. We also want to point
out the perspectives, for example within AIDA.
Daniela Rebuzzi
(Istituto Nazionale di Fisica Nucleare (INFN))
13/02/2006, 11:00
The Muon Digitization is the simulation of the Raw Data Objects (RDO), or the
electronic output, of the Muon Spectrometer. It has been recently completely
re-written to run within the Athena framework and to interface with the Geant4 Muon
Spectrometer detector simulation.
The digitization process consists of two steps: in the first step, the output of the
detector simulation, henceforth...
Dr
Ariel Garcia
(Forschungszentrum Karlsruhe, Karlsruhe, Germany)
13/02/2006, 11:00
The LHC Computing Grid (LCG) middleware interfaces at each site with local
computing resources provided by a batch system. However, currently only the
PBS/Torque, LSF and Condor resource management systems are supported out of the box
in the middleware distribution. Therefore many computing centers serving scientific
needs other than HEP, which in many cases use other batch systems like...
Mr
Laurent GARNIER
(LAL-IN2P3-CNRS)
13/02/2006, 11:00
We present a short communication on work done at LAL on integrating the
graphviz library within the OnX environment. graphviz is a well-known library for
visualizing a scene containing boxes connected by lines. The strength of this library
lies in the routing algorithms used to connect the boxes. For example, graphviz is
used by Doxygen to produce class diagrams. We want to...
Valeria Bartsch
(FERMILAB / University College London)
13/02/2006, 11:00
CDF has recently changed its data handling system from the DFC (Data File Catalogue)
system to the SAM (Sequential Access to Metadata) system. This change was done as a
preparation for distributed computing because SAM can handle distributed computing
and provides mechanisms which enable it to work together with GRID systems.
Experience shows that the usage of a new data handling system...
Dr
Surya Pathak
(Vanderbilt University)
13/02/2006, 11:00
Storing and accessing large volumes of data across geographically separated locations
or cutting across labs and universities in a transparent, reliable fashion is a
difficult problem. The problem is urgent, with the commissioning of the LHC
around the corner (2007). The primary difficulties that need to be overcome in order
to address this problem are policy driven secure...
L-TEST: A FRAMEWORK FOR SIMPLIFIED TESTING OF DISTRIBUTED HIGH-PERFORMANCE COMPUTER SUB-SYSTEMS
Mr
Laurence Dawson
(Vanderbilt University)
13/02/2006, 11:00
Introducing changes to a working high-performance computing environment is typically
both necessary and risky. Testing these changes can be highly manpower intensive.
L-TEST supplies a framework that allows the testing of complex distributed systems
with reduced configuration. It reduces setting up a test to implementing the specific
tasks for that test. L-TEST handles three jobs that must...
Dr
Pavel Nevski
(BROOKHAVEN NATIONAL LABORATORY)
13/02/2006, 11:00
During the last few years ATLAS has run a series of Data Challenges producing simulated
data used to understand the detector performance. Altogether more than 100 terabytes
of useful data are now spread over a few dozen storage elements on the GRID. With
the emergence of Tier1 centers and constant restructuring of storage elements there
is a need to consolidate the data placement in a more...
Dr
Ofer Rind
(Brookhaven National Laboratory), Ms
Zhenping Liu
(Brookhaven National Laboratory)
13/02/2006, 11:00
The Brookhaven RHIC/ATLAS Computing Facility serves as both the tier-0 computing
center for RHIC and the tier-1 computing center for ATLAS in the United States. The
increasing challenge of providing local and grid-based access to very large datasets
in a reliable, cost-efficient and high-performance manner is being addressed by a
large-scale deployment of dCache, the distributed disk...
Dr
Donald Holmgren
(FERMILAB)
13/02/2006, 11:00
As part of the DOE SciDAC "National Infrastructure for Lattice Gauge
Computing" and DOE LQCD Projects, Fermilab builds and operates production
clusters for lattice QCD simulations for the US community. We currently operate two
clusters: a 128-node Pentium 4E Myrinet cluster, and a 520-node Pentium 640 Infiniband
cluster. We discuss the operation of these systems and examine...
Mr
Sylvain Reynaud
(IN2P3/CNRS)
13/02/2006, 11:00
It is broadly admitted that grid technologies have to deal with heterogeneity in both
computational and storage resources. In the context of grid operations, heterogeneity
is also a major concern, especially for worldwide grid projects such as LCG and EGEE.
Indeed, the usage of various technologies, protocols and data formats induces
complexity. As learned from our experience participating...
Dr
Armando Fella
(INFN, Pisa)
13/02/2006, 11:00
The increasing instantaneous luminosity of the Tevatron collider will soon cause the
computing requirements for data analysis and MC production to grow larger than the
dedicated CPU resources that will be available. In order to meet future demands, CDF
is investing in shared Grid resources. A significant fraction of opportunistic Grid
resources will be available to CDF before the LHC era...
Mr
Andrew Cameron Smith
(CERN, University of Edinburgh)
13/02/2006, 11:00
LHCb's participation in LCG's Service Challenge 3 involves testing the bulk data
transfer infrastructure developed to allow high bandwidth distribution of data across
the grid in accordance with the computing model. To enable reliable bulk replication
of data, LHCb's DIRAC system has been integrated with gLite's File Transfer Service
middleware component to make use of dedicated network...
Arthur Kreymer
(FERMILAB)
13/02/2006, 11:00
The SAM data handling system has been deployed successfully by the Fermilab D0 and
CDF experiments, managing Petabytes of data and millions of files in a Grid working
environment. D0 and CDF have large computing support staffs, have always managed
their data using file catalog systems, and have participated strongly in the
development of the SAM product. But we think that SAM's long term...
Mr
Marian ZUREK
(CERN, ETICS)
13/02/2006, 11:00
gLite is the next generation middleware for grid computing. Born from the
collaborative efforts of more than 80 people in 12 different academic and industrial
research centers as part of the EGEE Project, gLite provides a bleeding-edge,
best-of-breed framework for building grid applications tapping into the power of
distributed computing and storage resources across the Internet....
Mr
A.J. Wilson
(Rutherford Appleton Laboratory)
13/02/2006, 11:00
R-GMA is a relational implementation of the GGF's Grid Monitoring Architecture (GMA).
In some respects it can be seen as a virtual database (VDB), supporting the
publishing and retrieval of time-stamped tuples. The scope of an R-GMA installation
is defined by its schema and registry. The schema holds the table definitions and,
in future, the authorization rules. The registry holds a list...
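The virtual-database view of time-stamped tuples can be sketched in a few lines; everything here (the class, the method names, the monotonic counter standing in for timestamps) is an illustrative assumption, not R-GMA's interface:

```python
import itertools
from collections import defaultdict

class VirtualDB:
    """Toy model of a virtual database of time-stamped tuples:
    producers publish rows into named tables, consumers query
    the most recent row matching given column values."""

    def __init__(self):
        self._tables = defaultdict(list)   # table name -> list of rows
        self._clock = itertools.count()    # stands in for a timestamp

    def publish(self, table, **columns):
        # every published tuple carries a monotonically increasing stamp
        row = dict(columns, timestamp=next(self._clock))
        self._tables[table].append(row)

    def latest(self, table, **match):
        # most recent tuple whose columns match the query, or None
        rows = [r for r in self._tables[table]
                if all(r.get(k) == v for k, v in match.items())]
        return max(rows, key=lambda r: r["timestamp"]) if rows else None

vdb = VirtualDB()
vdb.publish("cpu_load", host="wn01", load=0.3)
vdb.publish("cpu_load", host="wn01", load=0.9)
row = vdb.latest("cpu_load", host="wn01")
```

In R-GMA itself the table definitions live in the schema and producers are located through the registry; the sketch only conveys the tuple model.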
Dr
Andrew McNab
(UNIVERSITY OF MANCHESTER)
13/02/2006, 11:00
GridSite provides a Web Service hosting framework for services written as native
executables (eg in C/C++) or scripting languages (such as Perl and Python.) These
languages are of particular relevance to HEP applications, which typically have large
investments of code and expertise in C++ and scripting languages.
We describe the Grid-based authentication and authorization environment...
Dr
Grigory Trubnikov
(Joint Institute for Nuclear Research, Dubna)
13/02/2006, 11:00
The BETACOOL program developed by the JINR electron cooling group is a kit of algorithms
based on a common format of input and output files. The program is oriented to the
simulation of ion beam dynamics in a storage ring in the presence of cooling and
heating effects. The version presented in this report includes three basic
algorithms: simulation of r.m.s. parameters of the ion distribution...
Dr
Sven Hermann
(Forschungszentrum Karlsruhe)
13/02/2006, 11:00
Forschungszentrum Karlsruhe is one of the largest science and engineering research
institutions in Europe. The resource centre GridKa as part of this science centre is
building up a Tier 1 centre for the LHC project. Embedded in the European grid
initiative EGEE, GridKa also manages the ROC (regional operation centre) for the
German-Swiss region. The management structure of the ROC and its...
Dr
Tony Chan
(BROOKHAVEN NATIONAL LAB)
13/02/2006, 11:00
The operation and management of a heterogeneous large-scale, multi-purpose computer
cluster is a complex task given the competing nature of requests for resources by a
large, world-wide user base. Besides providing the bulk of the computational
resources to experiments at the Relativistic Heavy-Ion Collider (RHIC), this large
cluster is part of the U.S. Tier 1 Computing Center for the...
Mr
Wayne BETTS
(BROOKHAVEN NATIONAL LABORATORY)
13/02/2006, 11:00
For any large experiment with multiple sub-systems and their respective experts
spread throughout the world, real-time and near-real-time information accessible to a
wide audience is critical to efficiency and success. Large and varied amounts of
information about the current and past state of facilities and detector systems are
necessary, both for current running, and for eventual data...
Dr
Szymon Gadomski
(UNIVERSITY OF BERN, LABORATORY FOR HIGH ENERGY PHYSICS)
13/02/2006, 11:00
The Swiss ATLAS Computing prototype consists of clusters of PCs located at the
universities of Bern and Geneva (Tier 3) and at the Swiss National Supercomputing
Centre (CSCS) in Manno (Tier 2). In terms of software, the prototype includes ATLAS
off-line releases as well as middleware for running the ATLAS off-line in a
distributed way. Both batch and interactive use cases are supported....
Dr
Jukka Klem
(Helsinki Institute of Physics HIP)
13/02/2006, 11:00
Projects like SETI@home use computing resources donated by the general public for
scientific purposes. Many of these projects are based on the BOINC (Berkeley Open
Infrastructure for Network Computing) software framework that makes it easier to set up
new public resource computing projects. BOINC is used at CERN for the LHC@home
project where more than 10000 home users donate time of their...
Mr
Fons Rademakers
(CERN)
13/02/2006, 11:00
Providing all components and designing good user interfaces requires developers
to know and apply some basic principles. The different parts of the ROOT GUIs should
fit and complete each other. They must form a window via which users see the
capabilities of the software system and understand how to use them. If well-designed,
the user interface adds quality and inspires confidence...
Cristina Lazzeroni
(University of Cambridge), Dr
Raluca-Anca Muresan
(Oxford University)
13/02/2006, 11:00
The LHCb experiment will make high precision studies of CP violation and other rare
phenomena in B meson decays. Particle identification, in the momentum range from
~2-100 GeV/c, is essential for this physics programme, and will be provided by two
Ring Imaging Cherenkov (RICH) detectors. The experiment will use several levels of
trigger to reduce the 10MHz rate of visible interactions to...
Mr
Timur Perelmutov
(FNAL)
13/02/2006, 11:00
dCache is a distributed storage system currently used to store and deliver data on a
petabyte scale in several large HEP experiments. Initially dCache was designed as a
disk front-end for robotic tape storage file systems. Lately, dCache systems have
been increased in scale by several orders of magnitude and considered for deployment
in US-CMS T2 centers lacking expensive tape robots. This...
Rene Brun
(CERN)
13/02/2006, 11:00
Overview and examples of:
-Common viewer architecture (TVirtualViewer3D interface and TBuffer3D shape
hierarchy) used by all 3D viewers.
-Significant features in the OpenGL viewer - in pad embedding, render styles,
composite (CSG/Boolean) shapes and clipping.
Dr
David Malon
(ARGONNE NATIONAL LABORATORY)
13/02/2006, 11:00
ATLAS has deployed an inter-object association infrastructure that allows the
experiment to track at the object level what data have been written and where, and to
assign both object-level and process-level labels to identify data objects for later
retrieval. This infrastructure provides the foundation for opportunistic run-time
navigation to upstream data, and in principle supports both...
Dr
Sinisa Veseli
(Fermilab)
13/02/2006, 11:00
SAMGrid presently relies on the centralized database for providing several services
vital for the system operation. These services are all encapsulated in the SAMGrid
Database Server, and include access to file metadata and replica catalogs, dataset
and processing bookkeeping, as well as the runtime support for the SAMGrid station
services. Access to the centralized database and DB Servers...
Dr
Sinisa Veseli
(Fermilab)
13/02/2006, 11:00
SAMGrid is a distributed (CORBA-based) HEP data handling system presently used by
three running experiments at Fermilab: D0, CDF and MINOS. User access to the SAMGrid
services is provided via Python and C++ client APIs, which handle the low-level CORBA
calls. Although the use of the SAMGrid APIs is fairly straightforward and very well
documented, in practice SAMGrid users are facing numerous...
Dr
Marcin Nowak
(BROOKHAVEN NATIONAL LABORATORY)
13/02/2006, 11:00
The ATLAS event data model will almost certainly change over time. ATLAS must retain
the ability to read both old and new data after such a change, regulate the
introduction of such changes, minimize the need to run massive data conversion jobs
when such changes are introduced, and maintain the machinery to support such data
conversions when they are unavoidable. In database literature,...
Dr
Philip Clark
(University of Edinburgh)
13/02/2006, 11:00
ScotGrid is a distributed Tier-2 computing centre formed as a collaboration between
the Universities of Durham, Edinburgh and Glasgow, as part of the UK's national
particle physics grid, GridPP. This paper describes ScotGrid's current resources by
institute and how these were configured to enable participation in the LCG service
challenges. In addition, we outline future development plans...
Dr
Valerie GAUTARD
(CEA-SACLAY)
13/02/2006, 11:00
The muon spectrometer of the ATLAS experiment aims at reconstructing very high
energy muon tracks (up to 1 TeV) with a transverse momentum resolution better than
10%. For this purpose a resolution of 50 micrometers on the sagitta of the tracks has
to be achieved. Each muon track is measured with three wire chamber stations placed
inside an air core toroid magnet (the chambers sit around...
Dr
Jan BALEWSKI
(Indiana University Cyclotron Facility)
13/02/2006, 11:00
One of the world's largest time projection chambers (TPC) has been used at STAR for
reconstruction of collisions at luminosities yielding thousands of piled-up
background tracks resulting from a few hundred pp minBias background events or several
heavy ion background events, respectively.
The combination of TPC tracks and trigger detector data used for tagging of tracks
are sufficient to...
Dr
Jamie Shiers
(CERN)
13/02/2006, 11:00
Walter Lampl
(Department of Physics, University of Arizona)
13/02/2006, 11:00
The event data model for the ATLAS calorimeters in the reconstruction software is
described, starting from the raw data to the analysis domain calorimeter data. The
data model includes important features like compression strategies with insignificant
loss of signal precision, flexible and configurable data content for high level
reconstruction objects, and backward navigation from the...
Dr
Tony Chan
(BROOKHAVEN NATIONAL LAB)
13/02/2006, 11:00
Monitoring a large-scale computing facility is evolving from a passive to a more active
role in the LHC era, from monitoring the health, availability and performance of the
facility to taking a more active and automated role in restoring availability, updating
software and becoming a meta-scheduler for batch systems. This talk will discuss the
experiences of the RHIC and ATLAS U.S. Tier...
Gabriele Garzoglio
(FERMI NATIONAL ACCELERATOR LABORATORY)
13/02/2006, 11:00
The SAM-Grid system is an integrated data, job, and information management
infrastructure. The SAM-Grid addresses the distributed computing needs of the
experiments of RunII at Fermilab. The system typically relies on SAM-Grid services
deployed at the remote facilities in order to manage the computing resources. Such
deployment requires special agreements with each resource provider and it...
Dr
Igor Sfiligoi
(INFN Frascati)
13/02/2006, 11:00
The CDF software model was developed with dedicated resources in mind. One of the
main assumptions is to have a large set of executables, shared libraries and
configuration files on a shared file system. As CDF is moving toward a Grid model,
this assumption is limiting the general physics analysis to only a small set of CDF
friendly sites with the appropriate file system installed.
...
Mr
Laurent GARNIER
(LAL-IN2P3-CNRS)
13/02/2006, 11:00
We present a short communication on work done at LAL to visualize, within the OnX
interactive environment, HEP geometries accessed through the VGM abstract interfaces.
VGM and OnX were presented at CHEP'04 in Interlaken.
Dr
Paolo Branchini
(INFN)
13/02/2006, 11:00
We describe an FPGA-based VLSI implementation of a new greedy algorithm for
approximating minimum set covering in ad hoc wireless network applications.
The implementation makes the algorithm suitable for embedded and real-time architectures.
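The greedy heuristic for minimum set covering that such hardware accelerates can be sketched in software as the classic ln(n)-approximation; this is a generic sketch, not the paper's algorithm or FPGA design:

```python
def greedy_set_cover(universe, subsets):
    """Greedy approximation to minimum set cover: repeatedly pick
    the subset covering the most still-uncovered elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("universe cannot be covered")
        cover.append(best)
        uncovered -= best
    return cover

universe = range(1, 8)
subsets = [{1, 2, 3}, {2, 4}, {3, 4, 5}, {5, 6, 7}, {1, 6}]
cover = greedy_set_cover(universe, subsets)   # 3 subsets suffice here
```

The appeal for embedded hardware is that each iteration needs only set intersections and a maximum, operations that map naturally onto parallel bit-vector logic.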
Dr
Paris Sphicas
(CERN)
13/02/2006, 11:30
Dr
Ashok Jhunjhunwala
(IIT, Chennai)
13/02/2006, 12:00
Mr
Vladimir Bahyl
(CERN IT-FIO)
13/02/2006, 14:00
Availability approaching 100% and response time converging to 0 are two factors that
users expect of any system they interact with. Even if the real importance of these
factors is a function of the size and nature of the project, today's users are rarely
tolerant of performance issues with systems of any size.
Commercial solutions for load balancing and failover are plentiful. Citrix...
Dr
John Apostolakis
(CERN)
13/02/2006, 14:00
Geant4 has become an established tool, in production for the majority of LHC
experiments during the past two years, and in use in many other HEP experiments and
for applications in medical, space and other fields. Improvements and extensions to
its capabilities continue, while its physics modeling is refined and results are
accumulating for its validation for a variety of uses. An overview...
Bebo White
(STANFORD LINEAR ACCELERATOR CENTER (SLAC))
13/02/2006, 14:00
Protégé is a free, open source ontology editor and knowledge-base framework developed
at Stanford University (http://protege.stanford.edu/). The application is based on
Java, is extensible, and provides a foundation for customized knowledge-based and
Semantic Web applications. Protégé supports Frames, XML Schema, RDF(S), and OWL. It
provides a "plug and play environment" that makes it a...
Dr
Jamie Shiers
(CERN)
13/02/2006, 14:00
Distributed Event production and processing
oral presentation
The LCG Service Challenges are aimed at achieving the goal of a production quality
world-wide Grid that meets the requirements of the LHC experiments in terms of
functionality and scale. This talk highlights the main goals of the Service Challenge
programme, significant milestones as well as the key services that have been
validated in production by the 4 LHC experiments.
The LCG...
Frank Wuerthwein
(UCSD for the OSG consortium),
Ruth Pordes
(Fermi National Accelerator Laboratory (FNAL))
13/02/2006, 14:00
Grid middleware and e-Infrastructure operation
oral presentation
We report on the status and plans for the Open Science Grid Consortium, an open,
shared national distributed facility in the US which supports a multi-disciplinary
suite of science applications. More than fifty University and Laboratory groups,
including 2 in Brazil and 3 in Asia, now have their resources and services
accessible to OSG. 16 Virtual Organizations have registered their...
Lawrence S. Pinsky
(University of Houston)
13/02/2006, 14:18
The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy
Physics. FLUKA is a dynamic tool in the sense that it is being continually updated
and improved by the authors. We review the progress achieved since the last CHEP
Conference on the physics models, and some recent applications. From the point of
view of hadronic physics, most of the effort is still in...
Dr
Alexandre Vaniachine
(ANL)
13/02/2006, 14:20
In preparation for data taking, the ATLAS experiment has run a series of large-scale
computational exercises to test and validate distributed data grid solutions under
development. ATLAS experience in prototypes and production systems of Data Challenges
and Combined Test Beam provided various database connectivity requirements for
applications: connection management, online-offline...
Marco Pieri
(University of California, San Diego, San Diego, California, USA)
13/02/2006, 14:20
The CMS Data Acquisition system is designed to build and filter events originating
from approximately 500 data sources from the detector at a maximum Level 1 trigger
rate of 100 kHz and with an aggregate throughput of 100 GByte/sec. For this purpose
different architectures and switch technologies have been evaluated. Events will be
built in two stages: the first stage, the FED Builder,...
Dr
Jörn Adamczewski
(GSI)
13/02/2006, 14:20
The new version 3 of the ROOT based GSI standard analysis framework GO4 (GSI Object
Oriented Online Offline) has been released. GO4 provides multithreaded remote
communication between analysis process and GUI process, a dynamically configurable
analysis framework, and a Qt based GUI with embedded ROOT graphics.
In the new version 3 a new internal object manager was developed. Its...
Bebo White
(STANFORD LINEAR ACCELERATOR CENTER (SLAC))
13/02/2006, 14:20
The Semantic Web shows great potential in the HEP community as an aggregation
mechanism for weakly structured data and a knowledge management tool for acquiring,
accessing, and maintaining knowledge within experimental collaborations. FOAF
(Friend-Of-A-Friend) (http://www.foaf-project.org/) is an RDFS/OWL ontology (some of
the fundamental Semantic Web technologies) for expressing...
Dr
Jukka Klem
(Helsinki Institute of Physics HIP)
13/02/2006, 14:20
Distributed Event production and processing
oral presentation
Public resource computing uses the computing power of personal computers that belong
to the general public. LHC@home is a public-resource computing project based on the
BOINC (Berkeley Open Infrastructure for Network Computing) platform. BOINC is an open
source software system, developed by the team behind SETI@home, that provides the
infrastructure to operate a public-resource computing...
Robert Gardner
(University of Chicago)
13/02/2006, 14:20
Grid middleware and e-Infrastructure operation
oral presentation
We describe the purpose, architectural definition, deployment and operational
processes for the Integration Testbed (ITB) of the Open Science Grid (OSG). The ITB
has been successfully used to integrate a set of functional interfaces and services
required for the OSG deployment, an activity leading to two major deployments of the
OSG grid infrastructure. We discuss the methods and logical...
Dr
Doris Ressmann
(Forschungszentrum Karlsruhe)
13/02/2006, 14:20
At GridKa an initial capacity of 1.5 PB online and 2 PB background storage is needed
for the LHC start in 2007. Afterwards the capacity is expected to grow almost
exponentially. No computing site will be able to keep this amount of data in online
storage, hence a highly accessible tape connection is needed. This paper describes a
high-performance connection of the online storage to an IBM...
Mr
Pedro Arce
(Cent.de Investigac.Energeticas Medioambientales y Tecnol. (CIEMAT))
13/02/2006, 14:36
GEANT4e is a package of the GEANT4 Toolkit that allows a track to be propagated
together with its error parameters. It uses the standard GEANT4 code to propagate the
track, making a helix approximation (with the step controlled by the user) and using
the same equations as GEANT3/GEANE. We present here a first working
prototype of the GEANT4e package and compare its results...
Caitriana Nicholson
(University of Glasgow), Dr
David Malon
(ARGONNE NATIONAL LABORATORY)
13/02/2006, 14:40
The ATLAS experiment will deploy an event-level metadata system as a key component of
support for data discovery, identification, selection, and retrieval in its
multi-petabyte event store. ATLAS plans to use the LCG POOL collection
infrastructure to implement this system, which must satisfy a wide range of use cases
and must be usable in a widely distributed environment. The system...
Dr
Gennady KUZNETSOV
(Rutherford Appleton Laboratory, Didcot)
13/02/2006, 14:40
DIRAC is the LHCb Workload and Data Management system used for Monte Carlo
production, data processing and distributed user analysis. Such a wide variety of
applications requires a general approach to the tasks of job definition,
configuration and management.
In this paper, we present a suite of tools called a Production Console, which is a
general framework for job formulation,...
Dr
Frederik Orellana
(Institute of Nuclear and Particle Physics, Université de Genève)
13/02/2006, 14:40
Distributed Event production and processing
oral presentation
In 2004, a full slice of the ATLAS detector was tested for 6 months in the H8
experimental area of the CERN SPS, in the so-called Combined Test Beam, with beams of
muons, pions, electrons and photons in the range 1 to 350 GeV. Approximately 90
million events were collected, corresponding to a data volume of 4.5 terabytes. The
importance of this exercise was two-fold: for the first time the...
Dr
Peter Malzacher
(Gesellschaft fuer Schwerionenforschung mbH (GSI))
13/02/2006, 14:40
Grid middleware and e-Infrastructure operation
oral presentation
The German Ministry for Education and Research announced a 100 million euro German
e-science initiative focused on: Grid computing, e-learning and knowledge management.
In a first phase started September 2005 the Ministry has made available 17 million
euro for D-Grid, which currently comprises six research consortia: five community
grids - HEP-Grid (high-energy physics),...
Mr
Deepak Narasimha
(VMRF Deemed University)
13/02/2006, 14:40
The objective of the paper is to advance the research in component-based software
development by including agent oriented software engineering techniques. Agent
oriented Component-based software development is the next step after object-oriented
programming that promises to overcome the problems, such as reusability and
complexity that have not yet been solved adequately with...
A. Vaniachine
(ANL)
13/02/2006, 15:00
Distributed Event production and processing
oral presentation
In the ATLAS Computing Model widely distributed applications require access to
terabytes of data stored in relational databases. In preparation for data taking,
the ATLAS experiment at the LHC has run a series of large-scale computational
exercises to test and validate multi-tier distributed data grid solutions under
development.
We present operational experience in ATLAS database...
Dr
Patrick Fuhrmann
(DESY)
13/02/2006, 15:00
For the last two years, the dCache/SRM Storage Element has been successfully
integrated into the LCG framework and is in heavy production at several dozens of
sites, spanning a range from single-host installations up to those with some hundreds
of terabytes of disk space, delivering more than 50 TBytes per day to clients. Based
on the permanent feedback from our users and the detailed...
Dr
Chadwick Keith
(Fermilab)
13/02/2006, 15:00
Grid middleware and e-Infrastructure operation
oral presentation
FermiGrid is a cooperative project across the Fermilab Computing Division and its
stakeholders which includes the following 4 key components: Centrally Managed &
Supported Common Grid Services, Stakeholder Bilateral Interoperability, Development
of OSG Interfaces for Fermilab and Exposure of the Permanent Storage System. The
initial goals, current status and future plans for FermiGrid will...
Mr
Tigran Mkrtchyan Mkrtchyan
(Deutsches Elektronen-Synchrotron DESY)
13/02/2006, 16:00
After successfully deploying dCache over the last few years, the dCache team
reevaluated the potential of using dCache for extremely large and heavily used
installations. We identified the filesystem namespace module as one of the components
which would very likely need a redesign to cope with expected requirements in the
medium term future.
Having presented the initial design of Chimera...
Caitriana Nicholson
(University of Glasgow)
13/02/2006, 16:00
Simulations have been performed with the grid simulator OptorSim using the expected
analysis patterns from the LHC experiments and a realistic model of the LCG at LHC
startup, with thousands of user analysis jobs running at over a hundred grid sites.
It is shown, first, that dynamic data replication plays a significant role in the
overall analysis throughput in terms of optimising job...
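The replication decision at the heart of such simulations can be sketched as a toy heuristic (an assumed cost model for illustration only, not OptorSim's actual algorithm; all names hypothetical):

```python
# Toy dynamic-replication rule: a site copies a file locally when the
# number of recent accesses predicts enough future local reads to
# outweigh the one-off transfer cost.
from collections import Counter

def should_replicate(access_history, filename, transfer_cost, horizon=100):
    """Predict demand from the last `horizon` accesses; replicate if the
    predicted local reads exceed the (normalised) transfer cost."""
    counts = Counter(access_history[-horizon:])
    return counts[filename] > transfer_cost

history = ["f1", "f2", "f1", "f1", "f3"]
should_replicate(history, "f1", transfer_cost=2)  # → True (3 recent accesses > cost 2)
should_replicate(history, "f2", transfer_cost=2)  # → False
```

Economic models in grid simulators refine this by pricing storage and bandwidth, but the structure — predicted benefit versus transfer cost — is the same.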
Mr
Michel Jouvin
(LAL / IN2P3)
13/02/2006, 16:00
Grid middleware and e-Infrastructure operation
oral presentation
Several HENP laboratories in Paris region have joined together to provide an LCG/EGEE
Tier2 center. This resource, called GRIF, will focus on LCG experiments but will also
be open to EGEE users from other disciplines and to local users. It will provide
resources for both analysis and simulation and offer a large storage space (350 TB
planned by end of 2007).
This Tier2 will have...
Dr
Andy Buckley
(Durham University),
Andy Buckley
(University of Cambridge)
13/02/2006, 16:00
Accurate modelling of hadron interactions is essential for the precision analysis of
data from the LHC. It is therefore imperative that the predictions of Monte Carlos
used to model this physics are tested against relevant existing and future
measurements. These measurements cover a wide variety of reactions, experimental
observables and kinematic regions. To make this process more...
Mr
Philippe Canal
(FERMILAB)
13/02/2006, 16:00
Since version 4.01/03, we have continued to strengthen and improve the ROOT I/O
system. In particular we extended and optimized support for all STL collections,
including adding support for member-wise streaming. The handling of TTree objects
was also improved by adding support for indexing of chains, for using a bitmap
algorithm to speed up searches, and for accessing an SQL table through...
Dr
Roger JONES
(LANCASTER UNIVERSITY)
13/02/2006, 16:00
Distributed Event production and processing
oral presentation
The ATLAS Computing Model is under continuous development. Previous exercises
focussed on the Tier-0/Tier-1 interactions, with an emphasis on the resource
implications and only a high-level view of the data and workflow. The work presented
here attempts to describe in some detail the data and control flow from the High
Level Trigger farms all the way through to the physics user. The...
Mr
Giulio Eulisse
(Northeastern University, Boston)
13/02/2006, 16:00
The CMS tracker has more than 50 million channels organized in 16540 modules, each
one a complete detector. Its monitoring requires the creation, analysis and
storage of at least 4 histograms per module every few minutes. The
analysis of these plots will be done by computer programs that will check the data
against some reference plots and send alarms to the operator in...
Dr
Donatella Lucchesi
(INFN Padova), Dr
Francesco Delli Paoli
(INFN Padova)
13/02/2006, 16:20
The CDF experiment has a new trigger which selects events depending on the
significance of the track impact parameters. With this trigger a sample of events
enriched in b and c mesons has been selected, which is used for several important
physics analyses such as the Bs mixing measurement. The size of the dataset is about
20 TBytes, corresponding to an integrated luminosity of 1 fb-1 collected by CDF....
Dr
Gilbert Poulard
(CERN)
13/02/2006, 16:20
Distributed Event production and processing
oral presentation
The Large Hadron Collider at CERN will start data acquisition in 2007. The ATLAS (A
Toroidal LHC ApparatuS) experiment is preparing for the data handling and analysis
via a series of Data Challenges and production exercises to validate its computing
model and to provide useful samples of data for detector and physics studies. DC1 was
conducted during 2002-03; the main goals were to put in...
Dr
Ioannis Papadopoulos
(CERN, IT Department, Geneva 23, CH-1211, Switzerland)
13/02/2006, 16:20
The COmmon Relational Abstraction Layer (CORAL) is a C++ software system, developed
within the context of the LCG persistency framework, which provides vendor-neutral
software access to relational databases with defined semantics. The SQL-free public
interfaces ensure the encapsulation of all the differences that one may find among
the various RDBMS flavours in terms of SQL syntax and data...
Prof.
Arshad Ali
(National University of Sciences & Technology (NUST) Pakistan)
13/02/2006, 16:20
Grid middleware and e-Infrastructure operation
oral presentation
We present a report on Grid activities in Pakistan over the last three years and
conclude that there is significant technical and economic activity due to the
participation in Grid research and development. We started collaboration with
participation in the CMS software development group at CERN and Caltech in 2001. This
has led to the current setup for CMS production and the LCG Grid...
Dr
Roger Cottrell
(Stanford Linear Accelerator Center)
13/02/2006, 16:20
The future of computing for HENP applications depends increasingly on how well the
global community is connected. With South Asia and Africa accounting for about 36% of
the world's population, the issues of internet/network facilities are a major concern
for these regions if they are to successfully partake in scientific endeavors.
However, not only is the international bandwidth for these...
Fons Rademakers
(CERN)
13/02/2006, 16:20
ROOT as a scientific data analysis framework provides a large selection of data
presentation objects and utilities. The graphical capabilities of ROOT range from 2D
primitives to various plots, histograms, and 3D graphical objects. Its object-
oriented design and developments offer considerable benefits for developing object-
oriented user interfaces. The ROOT GUI classes support an...
Dr
Aatos Heikkinen
(HIP), Dr
Barbara Mascialino
(INFN Genova), Dr
Francesco Di Rosa
(INFN LNS), Dr
Giacomo Cuttone
(INFN LNS), Dr
Giorgio Russo
(INFN LNS), Dr
Giuseppe Antonio Pablo Cirrone
(INFN LNS), Dr
Maria Grazia Pia
(INFN GENOVA), Dr
Susanna Guatelli
(INFN Genova)
13/02/2006, 16:36
A project is in progress for a systematic, rigorous, quantitative validation of all
Geant4 physics models against experimental data, to be collected in a Geant4 Physics
Book.
Due to the complexity of Geant4 hadronic physics, the validation of Geant4 hadronic
models proceeds according to a bottom-up approach (i.e. from the lower energy range
up to higher energies): this approach allows...
Valeria Bartsch
(FERMILAB / University College London)
13/02/2006, 16:40
SAM is a data handling system that provides the Fermilab HEP experiments D0, CDF and
MINOS with the means to catalog, distribute and track the usage of their collected
and analyzed data. Annually, SAM serves petabytes of data to physics groups
performing data analysis, data reconstruction and simulation at various computing
centers across the world. Given the volume of the detector data, a...
Dr
Andrea Valassi
(CERN)
13/02/2006, 16:40
Since October 2004, the LCG Conditions Database Project has focused on the
development of COOL, a new software product for the handling of experiment
conditions data. COOL merges and extends the functionalities of the two previous
software implementations developed in the context of the LCG common project, which
were based on Oracle and MySQL. COOL is designed to minimise the...
Dr
William Badgett
(Fermilab)
13/02/2006, 16:40
The CDF Experiment's control and configuration system consists of several database
applications and supportive application interfaces in both Java and C++. The CDF
Oracle database server runs on a SunOS platform and provides configuration data,
real-time monitoring information and historical run-conditions archiving. The Java
applications running on the Scientific Linux operating system...
Mr
Fons Rademakers
(CERN)
13/02/2006, 16:40
One of the main design challenges is the task of selecting appropriate Graphical User
Interface (GUI) elements and organizing them to meet successfully the application
requirements.
- How to choose and assign the basic user interface elements (so-called widgets, from
'window gadgets') into the single panels of interaction?
- How to organize these panels to appropriate levels of the...
Mr
Gilles Mathieu
(IN2P3, Lyon), Ms
Helene Cordier
(IN2P3, Lyon), Mr
Piotr Nyczyk
(CERN)
13/02/2006, 16:40
Grid middleware and e-Infrastructure operation
oral presentation
The paper reports on the evolution of the operational model set up in the
"Enabling Grids for E-sciencE" (EGEE) project, and on the implications of Grid
Operations in LHC Computing Grid (LCG).
The primary tasks of Grid Operations cover monitoring of resources and services,
notification of failures to the relevant contacts and problem tracking through a
ticketing system. Moreover,...
Dr
Gokhan Unel
(UNIVERSITY OF CALIFORNIA AT IRVINE AND CERN)
13/02/2006, 16:40
Distributed Event production and processing
oral presentation
The ATLAS experiment at LHC will start taking data in 2007. As preparative work, a
full vertical slice of the final higher level trigger and data acquisition (TDAQ)
chain, "the pre-series", has been installed in the ATLAS experimental zone. In the
pre-series setup, detector data are received by the readout system and next
partially analyzed by the second level trigger (LVL2). On...
Richard Cavanaugh
(University of Florida)
13/02/2006, 16:40
UltraLight is a collaboration of experimental physicists and network engineers whose
purpose is to provide the network advances required to enable petabyte-scale analysis
of globally distributed data. Current Grid-based infrastructures provide massive
computing and storage resources, but are currently limited by their treatment of the
network as an external, passive, and largely unmanaged...
Dr
Barbara Mascialino
(INFN Genova), Dr
Federico Ravotti
(CERN), Dr
Maria Grazia Pia
(INFN GENOVA), Dr
Maurice Glaser
(CERN), Dr
Michael Moll
(CERN), Dr
Riccardo Capra
(INFN Genova)
13/02/2006, 16:54
Monitoring radiation background is a crucial task for the operation of LHC
experiments. A project is in progress at CERN for the optimisation of the radiation
monitors for LHC experiments. A general, flexibly configurable simulation system
based on Geant4, designed to assist the engineering optimisation of LHC radiation
monitor detectors, is presented. Various detector packaging...
Mr
Sylvain Chapeland
(CERN)
13/02/2006, 17:00
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study
the physics of strongly interacting matter and the quark-gluon plasma at the CERN
Large Hadron Collider (LHC). A large bandwidth and flexible Data Acquisition System
(DAQ) is required to collect sufficient statistics in the short running time
available per year for heavy ions and to accommodate very...
Dr
Flavia Donno
(CERN), Dr
Marco Verlato
(INFN Padova)
13/02/2006, 17:00
Grid middleware and e-Infrastructure operation
oral presentation
The organization and management of the user support in a global e-science computing
infrastructure such as the Worldwide LHC Computing Grid (WLCG) is one of the
challenges of the grid. Given the widely distributed nature of the organization, and
the spread of expertise for installing, configuring, managing and troubleshooting the
grid middleware services, a standard centralized model could...
John Huth
(Harvard University)
13/02/2006, 17:00
The ATLAS experiment uses a tiered data Grid architecture that enables possibly
overlapping subsets, or replicas, of original datasets to be located across the ATLAS
collaboration. Many individual elements of these datasets can also be recreated
locally from scratch based on a limited number of inputs. We envision a time when a
user will want to determine which is more expedient,...
Mr
Matthias Schneebeli
(Paul Scherrer Institute, Switzerland)
13/02/2006, 17:00
This talk presents a new approach to writing analysis frameworks. We will point out a
way of generating analysis frameworks from a short experiment description. The
generation process is completely experiment independent and can thus be applied to
any event based analysis.
The presentation will focus on a software package called ROME. This software
generates analysis frameworks which...
Dr
Roger JONES
(LANCASTER UNIVERSITY)
13/02/2006, 17:00
Following on from the LHC experiments' computing Technical Design Reports, HEPiX,
with the agreement of the LCG, formed a Storage Task Force. This group was to:
examine the current LHC experiment computing models; attempt to determine the data
volumes, access patterns and required data security for the various classes of data,
as a function of Tier and of time; consider the current...
Robert Petkus
(Brookhaven National Laboratory)
13/02/2006, 17:00
Distributed Event production and processing
oral presentation
The roles of centralized and distributed storage at the RHIC/USATLAS Computing
Facility have been undergoing a redefinition as the size and demands of computing
resources continues to expand. Traditional NFS solutions, while simple to deploy and
maintain, are marred by performance and scalability issues, whereas distributed
software solutions such as PROOF and rootd are application...
Dr
Douglas Smith
(STANFORD LINEAR ACCELERATOR CENTER)
13/02/2006, 17:00
The data production and analysis system of the BaBar Experiment has evolved through a
series of changes since the day the first data were taken in May 1999. The changes,
in particular, have also involved persistent technologies used to store the event
data as well as a number of related databases. This talk is about CDB - the
distributed Conditions Database of the BaBar Experiment. The...
Dr
Satoru Kameoka
(High Energy Accelerator Research Organisation)
13/02/2006, 17:12
Geant4 is a toolkit to simulate the passage of particles through matter based on the
Monte Carlo method. Geant4 incorporates many of the available experimental data and
theoretical models over a wide energy region, extending its application scope not only
to high energy physics but also to medical physics, astrophysics, etc. We have
developed a simulation framework for heavy ion therapy system based...
Mr
Rajesh Kalmady
(Bhabha Atomic Research Centre)
13/02/2006, 17:20
Grid middleware and e-Infrastructure operation
oral presentation
The LHC Computing Grid (LCG) connects together hundreds of sites consisting of
thousands of components such as computing resources, storage resources, network
infrastructure and so on. Various Grid Operation Centres (GOCs) and Regional
Operations Centres (ROCs) are setup to monitor the status and operations of the grid.
This paper describes Gridview, a Grid Monitoring and Visualization...
Mr
Francois Fluckiger
(CERN)
13/02/2006, 17:20
The openlab, created three years ago at CERN, was a novel concept: to involve leading
IT companies in the evaluation and the integration of cutting-edge technologies or
services, focusing on potential solutions for the LCG. The novelty lay in the
duration of the commitment (three years during which companies provided a mix of
in-kind and in-cash contributions), the level of the...
Matthew Norman
(University of California at San Diego)
13/02/2006, 17:20
Distributed Event production and processing
oral presentation
The increasing instantaneous luminosity of the Tevatron collider will cause the
computing requirements for data analysis and MC production to grow larger than the
dedicated CPU resources that will be available. In order to meet future demands, CDF
is investing in shared, Grid, resources. A significant fraction of opportunistic Grid
resources will be available to CDF before the LHC era...
Dr
Andreas Pfeiffer
(CERN, PH/SFT)
13/02/2006, 17:20
In the context of the LCG Applications Area the SPI, Software Process and
Infrastructure, project provides several services to the users in the LCG projects
and the experiments (mainly at the LHC). These services comprise the CERN Savannah
bug-tracking service, the external software service, and services concerning
configuration management and applications build, as well as software...
Prof.
Adele Rimoldi
(University of Pavia)
13/02/2006, 17:30
The simulation program for the ATLAS experiment at CERN is currently in full
operational mode and integrated into ATLAS's common analysis framework, ATHENA.
The OO approach, based on GEANT4 and in use during the DC2 data challenge, has been
interfaced within ATHENA and to GEANT4 using the LCG dictionaries and Python
scripting. The robustness of the application was proved during the...
Mr
Sergio Andreozzi
(INFN-CNAF)
13/02/2006, 17:40
Grid middleware and e-Infrastructure operation
oral presentation
The Grid paradigm enables the coordination and sharing of a large number of
geographically-dispersed heterogeneous resources that are contributed by different
institutions. These resources are organized into virtual pools and assigned to groups
of users. The monitoring of such a distributed and dynamic system raises a number of
issues like the need for dealing with administrative...
Dr
Ashok Agarwal
(Department of Physics and Astronomy, University of Victoria, Victoria, Canada)
13/02/2006, 17:40
Distributed Event production and processing
oral presentation
GridX1, a Canadian computational Grid, combines the resources of various Canadian
research institutes and universities through the Globus Toolkit and the CondorG
resource broker (RB). It has been successfully used to run ATLAS and BaBar simulation
applications. GridX1 is interfaced to LCG through a RB at the TRIUMF Laboratory
(Vancouver), which is an LCG computing element, and ATLAS jobs...
Dr
Sergey Linev
(GSI DARMSTADT)
13/02/2006, 17:40
ROOT already has powerful and flexible I/O, which can potentially be used for storing
object data in SQL databases. Using ROOT I/O together with an SQL database provides
advanced functionality such as guaranteed data integrity, logging of data changes,
the possibility to roll back changes, and many other features provided by
modern databases.
At the same time data representation...
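The core idea — serialized objects kept in an SQL table so that the database supplies transactions and rollback — can be sketched with standard-library stand-ins (pickle and SQLite here purely for illustration; the actual work uses ROOT streamers and a full RDBMS):

```python
# Minimal sketch: serialized objects stored as blobs in an SQL table.
# The database transaction gives atomic commit/rollback around each write.
import pickle
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE objects (name TEXT PRIMARY KEY, data BLOB)")

def store(name, obj):
    with con:  # commits on success, rolls back on error
        con.execute("INSERT OR REPLACE INTO objects VALUES (?, ?)",
                    (name, pickle.dumps(obj)))

def load(name):
    (blob,) = con.execute("SELECT data FROM objects WHERE name = ?",
                          (name,)).fetchone()
    return pickle.loads(blob)

store("hist1", {"bins": [1, 2, 3]})
load("hist1")  # → {'bins': [1, 2, 3]}
```

The design question the abstract raises is exactly where to draw the line: opaque blobs keep the serializer generic, while mapping object members to columns makes the data queryable in SQL.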
Roger Jones
(Lancaster University)
13/02/2006, 17:48
The project "EvtGen in ATLAS" has the aim of accommodating EvtGen into the LHC-ATLAS
context. As such it comprises both physics and software aspects of the development.
ATLAS has developed interfaces to enable the use of EvtGen within the experiment's
object-oriented simulation and data-handling framework ATHENA, and furthermore has
enabled the running of the software on the LCG. ...
Dr
Beat Jost
(CERN)
14/02/2006, 09:00
Dr
Elizabeth Sexton-Kennedy
(FNAL)
14/02/2006, 09:30
Dr
Martin Purschke
(BNL)
14/02/2006, 10:00
Dr
Tony Hey
(Microsoft, UK)
14/02/2006, 11:00
Dr
David Axmark
(MySQL)
14/02/2006, 11:30
Dr
Alan Gara
(IBM T. J. Watson Research Center)
14/02/2006, 12:00
Dr
Massimo Lamanna
(CERN)
14/02/2006, 14:00
The ARDA project focuses on delivering analysis prototypes together with the LHC
experiments. Each experiment prototype is in principle independent, but commonalities
have been observed. The first level of commonality is represented by mature projects
which can be effectively shared across different users. The best example is GANGA,
providing a toolkit to organize users' activity,...
Dr
Maya Stavrianakou
(FNAL)
14/02/2006, 14:00
The CMS simulation based on the Geant4 toolkit and the CMS object-oriented framework
has been in production for almost two years and has delivered a total of more than
100 M physics events for the CMS Data Challenges and Physics Technical Design Report
studies. The simulation software has recently been successfully ported to the new CMS
Event-Data-Model based software framework. In this...
Subir Sarkar
(INFN-CNAF)
14/02/2006, 14:00
Distributed Event production and processing
oral presentation
Higher instantaneous luminosity of the Tevatron Collider forces large increases in
computing requirements for the CDF experiment, which has to be able to cover future
needs of data analysis and MC production. CDF can no longer afford to rely on dedicated
resources to cover all of its needs and is therefore moving toward shared, Grid,
resources. CDF has been relying on a set of CDF Analysis...
Mr
Dinesh Sarode
(Computer Division, BARC, Mumbai-85, India)
14/02/2006, 14:00
Today we can have huge datasets resulting from computer simulations (CFD, physics,
chemistry etc) and sensor measurements (medical, seismic and satellite). There is
exponential growth in computational requirements in scientific research. Modern
parallel computers and Grid are providing the required computational power for the
simulation runs. Rich visualization is essential in...
Dr
Maria Cristina Vistoli
(Istituto Nazionale di Fisica Nucleare (INFN))
14/02/2006, 14:00
Grid middleware and e-Infrastructure operation
oral presentation
Moving from a National Grid Testbed to a Production quality Grid service for the HEP
applications requires an effective operations structure and organization, proper user
and operations support, flexible and efficient management and monitoring tools.
Moreover the middleware releases should be easily deployable using flexible
configuration tools, suitable for various and different local...
Dr
Benedetto Gorini
(CERN)
14/02/2006, 14:00
The Trigger and Data Acquisition system (TDAQ) of the ATLAS experiment at the CERN
Large Hadron Collider is based on a multi-level selection process and a hierarchical
acquisition tree. The system, consisting of a combination of custom electronics and
commercial products from the computing and telecommunication industry, is required to
provide an online selection power of 10^5 and a total...
Hegoi Garitaonandia Elejabarrieta
(Instituto de Fisica de Altas Energias (IFAE))
14/02/2006, 14:00
ATLAS Trigger & DAQ software, with six Gbytes per release, will be installed in about
two thousand machines in the final system. Already during the development phase, it
is tested and debugged in various Linux clusters of different sizes and network
topologies. For the distribution of the software across the network there are, at
least, two possible approaches: fixed routing points, and...
Mr
Andreas Salzburger
(UNIVERSITY OF INNSBRUCK)
14/02/2006, 14:18
Various systematic physics and detector performance studies with the ATLAS detector
require very large event samples. To generate those samples, a fast simulation
technique is used instead of the full detector simulation, which often takes too much
effort in terms of computing time and storage space. The widely used ATLAS fast
simulation program ATLFAST, however, is based on initial four...
Dr
Valeri FINE
(BROOKHAVEN NATIONAL LABORATORY)
14/02/2006, 14:20
Distributed Event production and processing
oral presentation
Job tracking, i.e. monitoring a bundle of jobs or an individual job's behavior from
submission to completion, is becoming very complicated in the heterogeneous Grid
environment.
This paper presents the principles of an integrating tracking solution based on
components already deployed at STAR, none of which are experiment specific: a Generic
logging layer and the STAR Unified Meta-Scheduler...
Dr
David Lawrence
(Jefferson Lab)
14/02/2006, 14:20
The JLab Introspection Library (JIL) provides a level of introspection for C++
enabling object persistence with minimal user effort. Type information is extracted
from an executable that has been compiled with debugging symbols. The compiler itself
acts as a validator of the class definitions while enabling us to avoid implementing
an alternate C++ preprocessor to generate dictionary...
Mr
Stuart Wakefield
(Imperial College, University of London, London, UNITED KINGDOM)
14/02/2006, 14:20
BOSS (Batch Object Submission System) has been developed to provide logging and
bookkeeping and real-time monitoring of jobs submitted to a local farm or a grid
system. The information is persistently stored in a relational database for further
processing. By means of user-supplied filters, BOSS extracts the specific job
information to be logged from the standard streams of the job itself...
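The user-supplied filter mechanism described above can be sketched in a few lines of Python; the marker strings, dictionary keys, and function name below are hypothetical illustrations, not the actual BOSS filter API:

```python
import re

# Hypothetical BOSS-style filter: scan a job's standard output for
# known markers and return the key/value pairs to be logged.
PATTERNS = {
    "events_read": re.compile(r"Events read:\s*(\d+)"),
    "exit_status": re.compile(r"Job finished with status\s*(\d+)"),
}

def filter_job_output(lines):
    """Extract loggable job information from stdout lines."""
    info = {}
    for line in lines:
        for key, pattern in PATTERNS.items():
            match = pattern.search(line)
            if match:
                info[key] = int(match.group(1))
    return info
```

A filter of this kind lets the bookkeeping database store only the small, structured subset of each job's output that the user cares about.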
Anja Vest
(University of Karlsruhe)
14/02/2006, 14:20
Grid middleware and e-Infrastructure operation
oral presentation
Computer clusters at universities are usually shared among many groups. As an
example, the Linux cluster at the "Institut fuer Experimentelle Kernphysik" (IEKP),
University of Karlsruhe, is shared between working groups of the high energy physics
experiments AMS, CDF and CMS, and has successfully been integrated into the SAM grid
of CDF and the LHC computing grid LCG for CMS while it still...
Dr
Wenji Wu
(Fermi National Accelerator Laboratory)
14/02/2006, 14:20
The computing models for HEP experiments are becoming ever more globally distributed
and grid-based, both for technical reasons (e.g., to place computational and data
resources near each other and the demand) and for strategic reasons (e.g., to
leverage technology investments). To support such computing models, the network and
end systems (computing and storage) face unprecedented...
Marco Mambelli
(UNIVERSITY OF CHICAGO)
14/02/2006, 14:20
We describe the Capone workflow manager which was designed to work for Grid3 and the
Open Science Grid. It has been used extensively to run ATLAS managed and user
production jobs during the past year but has undergone major redesigns to improve
reliability and scalability as a result of lessons learned (cite Prod paper). This
paper introduces the main features of the new system covering...
Mr
Sebastian Neubert
(Technical University Munich)
14/02/2006, 14:25
PANDA is a universal detector system, which is being designed in the scope of the
FAIR-Project at Darmstadt, Germany and is dedicated to high precision measurements of
hadronic systems in the charm quark mass region. At the HESR storage ring a beam of
antiprotons will interact with internal targets to achieve the desired luminosity of
2x10^32 cm^-2 s^-1. The experiment is designed for event...
Joanna Weng
(Karlsruhe/CERN)
14/02/2006, 14:36
An object-oriented package for parameterizing electromagnetic showers in the
framework of the Geant4 toolkit has been developed. This parameterization is based on
the algorithms in the GFLASH package (implemented in Geant3 / FORTRAN), but has been
adapted to the new simulation context of Geant4. This package can substitute the full
tracking of high energy electrons/positrons (normally from...
Dr
Steffen G. Kappler
(III. Physikalisches Institut, RWTH Aachen university (Germany))
14/02/2006, 14:40
Physics analyses at modern collider experiments enter a new dimension of event
complexity. At the LHC, for instance, physics events will consist of the final state
products of the order of 20 simultaneous collisions. In addition, a number of today's
physics questions are studied in channels with complex event topologies and
configuration ambiguities occurring during event analysis....
Mr
Laurence Field
(CERN)
14/02/2006, 14:40
Distributed Event production and processing
oral presentation
As a result of the interoperations activity between LHC Computing Grid (LCG) and Open
Science Grid (OSG), it was found that the information and monitoring space within
these grids is a crowded area with many closed end-to-end solutions that do not
interoperate. This paper gives the current overview of the information and monitoring
space within these grids and tries to find overlapping...
Mr
Laurence Field
(CERN)
14/02/2006, 14:40
Grid middleware and e-Infrastructure operation
oral presentation
Open Science Grid (OSG) and LHC Computing Grid (LCG) are two grid infrastructures
that were built independently on top of a Virtual Data Toolkit (VDT) core. Due to the
demands of the LHC Virtual Organizations (VOs), it has become necessary to ensure
that these grids interoperate so that the experiments can seamlessly use them as one
resource. This paper describes the work that was...
Mr
Giulio Eulisse
(Northeastern University, Boston)
14/02/2006, 14:40
We describe how a new programming paradigm dubbed AJAX (Asynchronous Javascript and
XML) has enabled us to develop highly-performant web-based graphics applications.
Specific examples are shown of our web clients for: CMS Event Display (real-time
Cosmic Challenge), remote detector monitoring with ROOT displays, and performant 3D
displays of GEANT4 descriptions of LHC detectors. The...
Igor Mandrichenko
(FNAL)
14/02/2006, 14:40
Fermilab is a high energy physics research lab that maintains a dynamic network which
typically supports around 10,000 active nodes. Due to the open nature of the
scientific research conducted at FNAL, the portion of the network used to support
open scientific research requires high bandwidth connectivity to numerous
collaborating institutions around the world, and must facilitate...
Mr
Florian Urmetzer
(Research Assistant in the ACET centre, The University of Reading, UK)
14/02/2006, 14:40
Ongoing research has shown that testing grid software is complex. Automated testing
mechanisms seem to be widely used, but are critically discussed on account of their
efficiency and correctness in finding errors. Especially when programming distributed
collaborative systems, structures get complex and systems get more error-prone. Past
projects done by the authors have shown that the...
Sebastian Robert Bablok
(Department of Physics and Technology, University of Bergen, Norway)
14/02/2006, 14:45
The HLT, integrating all major detectors of ALICE, is designed to analyse LHC events
online. A cluster of 400 to 500 dual SMP PCs will constitute the heart of the HLT
system. To synchronize the HLT with the other online systems of ALICE (Data
Acquisition (DAQ), Detector Control System (DCS), Trigger (TRG)) the Experiment
Control System (ECS) has to be interfaced. In order to do so, the...
Dr
Edward Moyse
(University of Massachusetts)
14/02/2006, 14:54
The event data model (EDM) of the ATLAS experiment is presented. For large
collaborations like the ATLAS experiment common interfaces and data objects are a
necessity to ensure easy maintenance and coherence of the experiment's software
platform over a long period of time. The ATLAS EDM improves commonality across the
detector subsystems and subgroups such as trigger, test beam...
Dr
Ketevi Adikle Assamagan
(Brookhaven National Laboratory),
PAT ATLAS
(ATLAS)
14/02/2006, 15:00
The physics program at the LHC includes precision tests of the Standard Model (SM),
the search for the SM Higgs boson up to 1 TeV, the search for the MSSM Higgs bosons
in the entire parameter space, the search for Super Symmetry, sensitivity to
alternative scenarios such as compositeness, large extra dimensions, etc. This
requires general purpose detectors with excellent performance....
Mr
Stuart Paterson
(University of Glasgow / CPPM, Marseille)
14/02/2006, 15:00
DIRAC is the LHCb Workload and Data Management system for Monte Carlo simulation,
data processing and distributed user analysis. Using DIRAC, a variety of resources
may be integrated, including individual PC's, local batch systems and the LCG grid.
We report here on the progress made in extending DIRAC for distributed user analysis
on LCG. In this paper we describe the advances in the...
Dr
Graeme A Stewart
(University of Glasgow)
14/02/2006, 15:00
Grid middleware and e-Infrastructure operation
oral presentation
Data management has proved to be one of the hardest jobs to do in the grid
environment. In particular, file replication has suffered problems of transport
failures, client disconnections, duplication of current transfers and resultant
server saturation.
To address these problems the globus and gLite grid middlewares offer new services
which improve the resiliency and robustness of...
Dr
Dantong Yu
(BROOKHAVEN NATIONAL LABORATORY), Dr
Dimitrios Katramatos
(BROOKHAVEN NATIONAL LABORATORY)
14/02/2006, 15:00
A DOE MICS/SciDac funded project, TeraPaths, deployed and prototyped the use of
differentiated networking services based on a range of new transfer protocols to
support the global movement of data in the high energy physics distributed computing
environment. While this MPLS/LAN QoS work specifically targets networking issues at
BNL, the experience acquired and expertise developed is...
Dr
Victor Daniel Elvira
(Fermi National Accelerator Laboratory (FNAL))
14/02/2006, 15:00
Monte Carlo simulations are a critical component of physics analysis in a large HEP
experiment such as CMS. The validation of the simulation software is therefore
essential to guarantee the quality and accuracy of the Monte Carlo samples. CMS is
developing a Simulation Validation Suite (SVS) consisting of a set of packages
associated with the different sub-detector systems: tracker,...
Gordon Watts
(University of Washington)
14/02/2006, 15:05
DØ, one of two collider experiments at Fermilab's Tevatron, upgraded its DAQ system
for the start of Run II. The run started in March 2001, and the DAQ system was fully
operational shortly afterwards. The DAQ system is a fully networked system based on
Single Board Computers (SBCs) located in VME readout crates which forward their data
to a 250 node farm of commodity processors for trigger...
Dr
Christopher Jones
(CORNELL UNIVERSITY)
14/02/2006, 15:12
The new CMS Event Data Model and Framework that will be used for the high level
trigger, reconstruction, simulation and analysis is presented. The new framework is
centered around the concept of an Event. A data processing job is composed of a
series of algorithms (e.g., a track finder or track fitter) that run in a particular
order. The algorithms only communicate via data stored in...
Dr
Eric HJORT
(Lawrence Berkeley National Laboratory)
14/02/2006, 16:00
Distributed Event production and processing
oral presentation
This paper describes the integration of Storage Resource Management (SRM) technology
into the grid-based analysis computing framework of the STAR experiment at RHIC.
Users in STAR submit jobs on the grid using the STAR Unified Meta-Scheduler (SUMS)
which in turn makes best use of condor-G to send jobs to remote sites. However, the
result of each job may be sufficiently large that existing...
Dr
Lorenzo Moneta
(CERN)
14/02/2006, 16:00
LHC experiments obtain the mathematical and statistical computational methods they need via
the coherent set of C++ libraries provided by the Math work package of the ROOT
project. We present recent developments of this work package, formed from the merge
of the ROOT and SEAL activities:
(1) MathCore, a new core library, has been developed as a self-contained component
encompassing basic...
Dr
Frederick Luehring
(Indiana University)
14/02/2006, 16:00
ATLAS is one of the largest collaborations ever attempted in the physical sciences.
This paper explains how the software infrastructure is organized to manage
collaborative code development by around 200 developers with varying degrees of
expertise, situated in 30 different countries. We will describe how succeeding
releases of the software are built, validated and subsequently deployed to...
Dr
Da-Peng JIN
(IHEP (Institute of High Energy Physics, Beijing, China))
14/02/2006, 16:00
Physical study is the base of the hardware designs of the BES3 trigger system. It
includes detector simulations, generation and optimization of the sub-detectors'
trigger conditions, main trigger simulations (Combining the trigger conditions from
different detectors to find out the trigger efficiencies of the physical events and
the rejection factors of the background events.) and...
Federico Carminati
(CERN)
14/02/2006, 16:00
The ALICE Offline framework is in its 8th year of development and is now close to
being used for data taking. This talk will provide a short description of the history of
AliRoot and then will describe the latest developments. The newly added alignment
framework, based on the ROOT geometrical modeller will be described. The experience
with the FLUKA MonteCarlo used for full detector...
Mr
Paolo Badino
(CERN)
14/02/2006, 16:00
Grid middleware and e-Infrastructure operation
oral presentation
In this paper we report on the lessons learned from the Middleware point of view
while running the gLite File Transfer Service (FTS) on the LCG Service Challenge 3
setup. The FTS has been designed based on the experience gathered from the Radiant
service used in Service Challenge 2, as well as the CMS Phedex transfer service. The
first implementation of the FTS was put to use in the...
Dr
Weidong Li
(IHEP, Beijing)
14/02/2006, 16:18
The BESIII is a general-purpose experiment for studying electron-positron collisions
at BEPCII, which is currently under construction at IHEP, Beijing. The BESIII offline
software system is built on the Gaudi architecture. This contribution describes the
BESIII specific framework implementation for offline data processing and physics
analysis. We will also present the development status...
Hans von der Schmitt
(MPI for Physics, Munich)
14/02/2006, 16:20
The ATLAS detector at CERN's LHC will be exposed to proton-proton collisions at a
nominal rate of 1 GHz from beams crossing at 40 MHz. A three-level trigger system
will select potentially interesting events in order to reduce this rate to about 200
Hz. The first trigger level is implemented in custom-built electronics and firmware,
whereas the higher trigger levels are based on software. A...
Dr
David Cameron
(European Organization for Nuclear Research (CERN))
14/02/2006, 16:20
Grid middleware and e-Infrastructure operation
oral presentation
The ATLAS detector currently under construction at CERN's Large Hadron Collider
presents data handling requirements of an unprecedented scale. From 2008 the ATLAS
distributed data management (DDM) system must manage tens of petabytes of event data
per year, distributed around the world: the collaboration comprises 1800 physicists
participating from more than 150 universities and...
Dr
Chih-Hao Huang
(Fermi National Accelerator Laboratory)
14/02/2006, 16:20
ENSTORE is a very successful petabyte-scale mass storage system developed at
Fermilab. Since its inception in the late 1990s, ENSTORE has been serving the
Fermilab community, as well as its collaborators, and now holds more than 3 petabytes
of data on tape. New data is arriving at an ever increasing rate. One practical issue
that we are confronted with is: storage technologies have been...
Dr
Isidro Gonzalez Caballero
(Instituto de Fisica de Cantabria (CSIC-UC))
14/02/2006, 16:20
A typical HEP analysis in the LHC experiments involves the processing of data
corresponding to several million events, terabytes of information, to be analysed in
the last phases. Currently, processing one million events in a single modern
workstation takes several hours, thus slowing the analysis cycle. The desirable
computing model for a physicist would be closer to a High Performance...
Mr
Christoph Wissing
(University of Dortmund)
14/02/2006, 16:20
Distributed Event production and processing
oral presentation
The H1 Experiment at HERA records electron-proton collisions provided by beam
crossings at a frequency of 10 MHz. The detector has about half a million readout
channels and the data acquisition allows logging of about 25 events per second with a
typical size of 100 kB.
The increased event rates after the upgrade of the HERA accelerator at DESY led to a
more demanding usage of computing and...
Timur Perelmutov
(FERMI NATIONAL ACCELERATOR LABORATORY)
14/02/2006, 16:40
Grid middleware and e-Infrastructure operation
oral presentation
The dCache collaboration actively works on the implementation and improvement of the
features and the grid support of dCache storage. It has delivered a Storage Resource
Manager (SRM) interface, a GridFtp server, a Resilient Manager and interactive web
monitoring tools. SRMs are middleware components whose function is to provide dynamic
space allocation and file management of shared storage...
Dr
Alberto Ribon
(CERN), Dr
Andreas Pfeiffer
(CERN), Dr
Barbara Mascialino
(INFN Genova), Dr
Maria Grazia Pia
(INFN GENOVA), Dr
Paolo Viarengo
(IST Genova)
14/02/2006, 16:40
Many Goodness-of-Fit tests have been collected in a new open-source Statistical
Toolkit: Chi-squared, Kolmogorov-Smirnov, Goodman, Kuiper, Cramer-von Mises,
Anderson-Darling, Tiku, Watson, as well as novel weighted formulations of some tests.
None of the Goodness-of-Fit tests included in the toolkit is optimal for any analysis
case. Statistics does not provide a universal recipe to...
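To illustrate one of the tests named above, here is a minimal pure-Python sketch of the two-sample Kolmogorov-Smirnov statistic; the toolkit itself is C++, so this shows only the underlying test, not the toolkit's API:

```python
def ks_two_sample(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of samples x and y."""
    xs, ys = sorted(x), sorted(y)
    nx, ny = len(xs), len(ys)
    d = 0.0
    i = j = 0
    while i < nx and j < ny:
        v = min(xs[i], ys[j])
        # Advance past all points equal to v in both samples (handles ties).
        while i < nx and xs[i] == v:
            i += 1
        while j < ny and ys[j] == v:
            j += 1
        d = max(d, abs(i / nx - j / ny))
    return d
```

The statistic is 0 for identical samples and approaches 1 as the two distributions separate; a p-value then follows from the Kolmogorov distribution, which is where toolkit implementations differ.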
Dr
Simone Campana
(CERN)
14/02/2006, 16:40
Distributed Event production and processing
oral presentation
The LHC Computing Grid Project (LCG) provides and operates the computing support and
infrastructure for the LHC experiments. In the present phase, the experiments systems
are being commissioned and the LCG Experiment Integration Support team provides
support for the integration of the underlying grid middleware with the experiment
specific components. The support activity during the...
Dr
Conrad Steenberg
(CALIFORNIA INSTITUTE OF TECHNOLOGY)
14/02/2006, 16:40
We present the architecture and implementation of a bi-directional system for
monitoring long-running jobs on large computational clusters. JobMon comprises an
asynchronous intra-cluster communication server and a Clarens web service on a head
node, coupled with a job wrapper for each monitored job to provide monitoring
information both periodically and upon request. The Clarens web service...
Mr
Gianluca Comune
(Michigan State University)
14/02/2006, 16:40
This paper describes an analysis and conceptual design for the steering of the ATLAS
High Level Trigger (HLT). The steering is the framework that organises the event
selection software. It implements the key event selection strategies of the ATLAS
trigger, which are designed to minimise processing time and data transfers:
reconstruction within regions of interest, menu-driven selection and...
Dr
Gidon Moont
(GridPP/Imperial)
14/02/2006, 16:40
A working prototype portal for the LHC Computing Grid (LCG) is being customised for
use by the T2K 280m Near Detector software group. This portal is capable of
submitting jobs to the LCG and retrieving the output on behalf of the user. The
T2K-specific development of the portal will create customised submission systems for the
suites of production and analysis software being written by...
Stefano Argiro
(European Organization for Nuclear Research (CERN))
14/02/2006, 16:40
Releasing software for projects with large code bases is a challenging task. When
developers are geographically dispersed, often in different time zones, coordination
can be difficult. A successful release strategy is therefore paramount and clear
guidelines for all the stages of software development are required. The CMS
experiment recently started a major refactorization of its...
Dr
Denis Bertini
(GSI Darmstadt)
14/02/2006, 16:54
The simulation and analysis framework of the CBM collaboration will be presented. CBM
(Compressed Baryonic Matter) is an experiment at the future FAIR (Facility for
Antiproton and Ion Research) in Darmstadt. The goal of the experiment is to explore
the phase diagram of strongly interacting matter in high-energy nucleus-nucleus
collisions.
The Virtual Monte Carlo concept allows...
Kostas Kordas
(Laboratori Nazionali di Frascati (LNF))
14/02/2006, 17:00
The ATLAS experiment at the LHC will start taking data in 2007. Event data from
proton-proton collisions will be selected in a three level trigger system which
reduces the initial bunch crossing rate of 40 MHz at its first level trigger (LVL1)
to 75 kHz with a fixed latency of 2.5 μs. The second level trigger (LVL2) collects
and analyses Regions of Interest (RoI) identified by LVL1 and...
Dr
Dantong Yu
(BROOKHAVEN NATIONAL LABORATORY), Dr
Xin Zhao
(BROOKHAVEN NATIONAL LABORATORY)
14/02/2006, 17:00
Grid middleware and e-Infrastructure operation
oral presentation
We describe two illustrative cases in which Grid middleware (GridFtp, dCache and SRM)
was used successfully to transfer hundreds of terabytes of data between BNL and its
remote RHIC and ATLAS collaborators. The first case involved PHENIX production data
transfers to CCJ, a regional center in Japan, during the 2005 RHIC run. Approximately
270TB of data, representing 6.8 billion polarized...
Dr
Pablo Garcia-Abia
(CIEMAT)
14/02/2006, 17:00
Distributed Event production and processing
oral presentation
In preparation for the start of the experiment, CMS must produce large quantities of
detailed full-detector simulation. In this presentation we will present the
experience of running official CMS Monte Carlo simulation on distributed
computing resources. We will present the implementation used to generate events using
the LHC Computing Grid (LCG-2) resources in Europe, as well as the...
Klaus Rabbertz
(Karlsruhe University)
14/02/2006, 17:00
Packaging and distribution of experiment-specific software becomes a complicated task
when the number of versions and external dependencies increases. With the advent of
Grid computing, the distribution and update process must become a simple, robust and
transparent step. Furthermore, one must take into account that running a particular
application requires setup of the appropriate...
Mr
Marco Corvo
(Cnaf and Cern)
14/02/2006, 17:00
CRAB (Cms Remote Analysis Builder) is a tool, developed by INFN within the CMS
collaboration, which provides physicists with the possibility to analyze large amounts
of data by exploiting the huge computing power of grid-distributed systems. It is
currently used to analyze simulated data needed to prepare the Physics Technical
Design Report. Data produced by CMS are distributed among several...
Dr
Ilya Narsky
(California Institute of Technology), Dr
Julian Bunn
(CALTECH)
14/02/2006, 17:00
Modern analysis of high energy physics (HEP) data needs advanced statistical tools to
separate signal from background. A C++ package has been implemented to provide such
tools for the HEP community. The package includes linear and quadratic discriminant
analysis, decision trees, bump hunting (PRIM), boosting (AdaBoost), bagging and
random forest algorithms, and interfaces to the...
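As a minimal illustration of one of the methods listed, the sketch below computes a Fisher linear discriminant direction for two-feature events in pure Python; the function names and the two-dimensional restriction are assumptions for brevity, not the package's interface:

```python
def mean(vectors):
    """Component-wise mean of a list of 2-feature events."""
    n = len(vectors)
    return [sum(v[k] for v in vectors) / n for k in range(2)]

def scatter(vectors, mu):
    """2x2 within-class scatter matrix: sum of (v - mu)(v - mu)^T."""
    s = [[0.0, 0.0], [0.0, 0.0]]
    for v in vectors:
        d = [v[0] - mu[0], v[1] - mu[1]]
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    return s

def fisher_direction(signal, background):
    """Fisher discriminant direction w = Sw^-1 (mu_s - mu_b)."""
    mu_s, mu_b = mean(signal), mean(background)
    ss, sb = scatter(signal, mu_s), scatter(background, mu_b)
    sw = [[ss[i][j] + sb[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    dm = [mu_s[0] - mu_b[0], mu_s[1] - mu_b[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]
```

Projecting events onto w gives a single discriminating variable on which a cut can separate signal from background.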
Shawn Mc Kee
(High Energy Physics)
14/02/2006, 17:00
We will describe the networking details of NSF-funded UltraLight project and report
on its status. The project's goal is to meet the data-intensive computing challenges
of the next generation of particle physics experiments with a comprehensive,
network-focused agenda. The UltraLight network is a hybrid packet- and
circuit-switched network infrastructure employing both "ultrascale"...
Andreas Morsch
(CERN)
14/02/2006, 17:12
The ALICE Offline Project has developed a virtual interface to the detector transport
code called Virtual Monte Carlo. It isolates the user code from changes of the
detector simulation package and hence allows a seamless transition from GEANT3 to
GEANT4 and FLUKA.
Moreover, a new geometrical modeler has been developed in collaboration with the ROOT
team, and successfully interfaced to...
Andreas Nowack
(Aachen University),
Klaus Rabbertz
(Karlsruhe University)
14/02/2006, 17:20
We describe the various tools used by CMS to create and manage the packaging and
distribution of software, including the various CMS software packages and the
external components upon which CMS software depends. It is crucial to manage the
environment to ensure that the configuration is correct, consistent, and reproducible
at the many computing centres running CMS software. We describe...
Dr
Peter Elmer
(PRINCETON UNIVERSITY)
14/02/2006, 17:20
Distributed Event production and processing
oral presentation
The Monte Carlo Processing Service (MCPS) package is a Python based workflow
modelling and job creation package used to realise CMS Software workflows and create
executable jobs for different environments ranging from local node operation to wide
ranging distributed computing platforms. A component based approach to modelling
workflows is taken to allow both executable tasks as well as...
Dr
Les Cottrell
(Stanford Linear Accelerator Center (SLAC))
14/02/2006, 17:20
High Energy and Nuclear Physics (HENP) experiments generate unprecedented volumes
of data which need to be transferred, analyzed and stored. This in turn requires
the ability to sustain, over long periods, the transfer of large amounts of data
between collaborating sites, with relatively high throughput. Groups such as the
Particle Physics Data Grid (PPDG) and Globus are developing and...
Dr
Alberto De Min
(Politecnico di Milano)
14/02/2006, 17:20
In the last few decades operations research has made dramatic progress in providing
efficient algorithms and fast software implementations to solve practical problems
related to a wide range of disciplines, from logistics to finance, from political
sciences to digital image analysis. After a brief introduction to the most used
techniques, such as linear and mixed-integer programming,...
Dr
Alexandre Vaniachine
(ANL)
14/02/2006, 17:20
Grid middleware and e-Infrastructure operation
oral presentation
High energy and nuclear physics applications on computational grids require efficient
access to terabytes of data managed in relational databases. Databases also play a
critical role in grid middleware: file catalogues, monitoring, etc. Crosscutting the
computational grid infrastructure, a hyperinfrastructure of the databases emerges.
The Database Access for Secure Hyperinfrastructure...
Dr
Tommaso Boccali
(Scuola Normale Superiore and INFN Pisa)
14/02/2006, 17:30
The Reconstruction Software for the CMS detector is designed to serve multiple use
cases, from the online triggering of the High Level Trigger to the offline analysis.
The software is based on the CMS Framework, and comprises reconstruction modules
which can be scheduled independently. These produce and store event data ranging from
low-level objects to objects useful for analysis on...
Edmund Erich Widl
(Institute for High Energy Physics, Vienna)
14/02/2006, 17:40
The Inner Tracker of the CMS experiment consists of approximately 20,000 sensitive
modules in order to cope with the bunch crossing rate and the high particle
multiplicity expected in the environment of the Large Hadron Collider. For such a big
number of modules conventional methods for track-based alignment face serious
difficulties because of the large number of alignment parameters and...
Dr
Daniele Bonacorsi - on behalf of CMS Italy Tier-1 and Tier-2's
(INFN-CNAF Bologna, Italy)
14/02/2006, 17:40
Distributed Event production and processing
oral presentation
The CMS experiment is advancing towards real LHC data handling by
building and testing its Computing Model through daily experience with
production-quality operations as well as challenges of increasing complexity. The
capability to simultaneously address both these complex tasks on a regional basis -
e.g. within INFN - relies on the quality of the developed tools and...
Dr
Birger Koblitz
(CERN)
14/02/2006, 17:40
Grid middleware and e-Infrastructure operation
oral presentation
We present the AMGA (ARDA Metadata Grid Application) metadata catalog, which is a
part of the gLite middleware. AMGA provides a very lightweight metadata service as
well as basic database access functionality on the Grid. Following a brief overview
of the AMGA design, functionality, implementation and security features, we will show
performance comparisons of AMGA with direct database...
Marian Ivanov
(CERN)
14/02/2006, 17:48
An overview of the online reconstruction algorithms for the ALICE Time Projection
Chamber and Inner Tracking System is given. Both the tracking efficiency and the time
performance of the algorithms are presented in detail. The application of the
tracking algorithms in possible high transverse momentum jet and open charm triggers
is discussed.
Andreas Joachim Peters
(CERN)
15/02/2006, 09:00
The LHC experiments at CERN will collect data at a rate of several petabytes per year
and produce several hundred files per second. Data has to be processed and
transferred to many tier centres for distributed data analysis in different physics
data formats increasing the amount of files to handle. All these files must be
accounted for, reliably and securely tracked in a GRID environment,...
Mr
Davide Rebatto
(INFN - MILANO)
15/02/2006, 09:00
In current, widely deployed management schemes, intensive computing farms are locally
managed by batch systems (e.g. Platform LSF, PBS/Torque, BQS, etc.). When approached
from the outside, at the global (or 'grid') level, these local resource managers
(LRMS) are seen as services providing at least a basic set of job operations, namely
submission, status retrieval, cancellation and security...
Mr
Sankhadip Sengupta
(Undergraduate student, Aerospace Engineering, IIT Kharagpur, Kharagpur, India)
15/02/2006, 09:00
This paper addresses the growing usage of high performance computing in modern
computational fluid dynamics to simulate the flow-induced vibrations of cylindrical
structures, necessary to enhance reactor safety in nuclear plants. The study is
essential to prevent the damage of steam tubes causing an accident due to the release
of reactor coolant containing radioactive materials out of...
Prof.
Harvey Newman
(CalTech)
15/02/2006, 09:00
Mr
Gian Luca Rubini
(INFN-CNAF)
15/02/2006, 09:00
One of the most interesting challenges of the 'computing Grid' is how to
administer grid resource allocation and data access, in order to obtain
an effective and optimized computing usage and a secure data access. To
reach this goal, a new entity has appeared, the Virtual Organization (VO),
which represents a distributed community of users, accessing a
distributed computing environment....
Dr
Enrico Pasqualucci
(Istituto Nazionale di Fisica Nucleare (INFN), Roma)
15/02/2006, 09:00
The ATLAS DAQ and monitoring software are now commonly used to test detectors during the
commissioning phase. In this paper, their usage in MDT and RPC commissioning is described, both at the
surface pre-commissioning and commissioning stations and in the ATLAS pit. Two main components are
heavily used for detector tests.
The ROD Crate DAQ software is based on the ATLAS ReadOut...
Toby Burnett
(University of Washington)
15/02/2006, 09:00
We have developed a package that trains and applies boosted classification trees, a
technology long used by the statistics community, but only recently being explored by
HEP.
We will discuss its design (Object-Oriented C++), and show two examples of its use:
to detect single top production in DZERO events, and for background rejection in GLAST.
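The boosting technique referred to above can be sketched in a few lines; this is a generic AdaBoost over one-dimensional decision stumps for illustration only, not the package's actual Object-Oriented C++ interface:

```python
import math

def stump_predict(x, threshold, polarity):
    """A decision stump: classify an event by thresholding one feature."""
    return polarity if x >= threshold else -polarity

def train_stump(xs, ys, weights):
    """Return the (error, threshold, polarity) minimising weighted error."""
    best = (float("inf"), 0.0, 1)
    for threshold in xs:
        for polarity in (1, -1):
            err = sum(w for x, y, w in zip(xs, ys, weights)
                      if stump_predict(x, threshold, polarity) != y)
            if err < best[0]:
                best = (err, threshold, polarity)
    return best

def adaboost(xs, ys, rounds=10):
    """AdaBoost: repeatedly reweight misclassified events and combine
    the resulting weak stumps into a weighted ensemble."""
    n = len(xs)
    weights = [1.0 / n] * n
    ensemble = []  # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        err, thr, pol = train_stump(xs, ys, weights)
        err = max(err, 1e-12)          # avoid log(0) on a perfect stump
        if err >= 0.5:                 # weak learner no better than chance
            break
        alpha = 0.5 * math.log((1.0 - err) / err)
        ensemble.append((alpha, thr, pol))
        # Boost the weight of misclassified events, then renormalise.
        weights = [w * math.exp(-alpha * y * stump_predict(x, thr, pol))
                   for x, y, w in zip(xs, ys, weights)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return ensemble

def classify(ensemble, x):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(a * stump_predict(x, t, p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1
```

In a real tagger each stump would cut on one of many event features; the reweighting step is what lets successive trees concentrate on the events earlier trees got wrong.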
Gordon Watts
(DZERO Collaboration)
15/02/2006, 09:00
DØ, one of the collider detectors at Fermilab's Tevatron, depends on efficient and
pure b-quark identification for much of its high-pT physics program. DØ currently has
two algorithms, one based on impact parameter and the other on explicit
reconstruction of the B hadron decay vertex. A third, combined algorithm is under
development. DØ certifies all of its b-quark tagging algorithms...
Dr
Gene Oleynik
(Fermilab)
15/02/2006, 09:00
Fermilab provides a primary and tertiary permanent storage facility for its High
Energy Physics program and other world wide scientific endeavors. The lifetime of
the files in this facility, which are maintained in automated robotic tape
libraries, is typically many years. Currently the amount of data in the Fermilab
permanent storage facility is 3.3 PB and growing rapidly.
The...
Dr
Alessandra Forti
(University of Manchester)
15/02/2006, 09:00
The HEP department of the University of Manchester has purchased a 1000 nodes cluster.
The cluster will be accessible to various VOs through EGEE/LCG
grid middleware. One of the interesting aspects of the equipment bought is that each
node has 2x250 GB disks leading to a total of approximately 4 TB of usable disk space.
The space is intended to be managed using dCache and its resilience...
Dr
Jose Hernandez
(CIEMAT)
15/02/2006, 09:00
CMS has chosen to adopt a distributed model for all computing in order to cope with
the requirements on computing and storage resources needed for the processing and
analysis of the huge amount of data the experiment will be providing from LHC startup.
The architecture is based on a tier-organised structure of computing resources, with
a Tier-0 centre at CERN, a small number of...
Prof.
Sridhara Dasu
(UNIVERSITY OF WISCONSIN)
15/02/2006, 09:00
The University of Wisconsin campus research computing grid is an offshoot of the Condor
project, which provides middleware for many world-wide computing grids. The Grid
Laboratory of Wisconsin (GLOW) and other UW based computing facilities exploit Condor
technologies to provide research computing for a variety of fields including high
energy physics projects on the UW campus. The...
Dr
Andrea Valassi
(CERN)
15/02/2006, 09:00
In April 2005, the LCG Conditions Database Project delivered the first production
release of the COOL software, providing basic functionalities for the handling of
conditions data. Since that time, several new production releases have extended the
functionalities of the software. As the project is now moving into the deployment
phase in Atlas and LHCb, its priorities are the...
Dr
Daniele Spiga
(INFN & Università degli Studi di Perugia)
15/02/2006, 09:00
CMS is one of the four experiments expected to take data at LHC. On the order of some
petabytes of data per year will be stored in several computing sites all over the
world. The collaboration has to provide tools for accessing and processing the data
in a distributed environment, using the grid infrastructure. CRAB (Cms Remote
Analysis Builder) is a user-friendly tool developed by INFN within...
Dr
Andreas Gellrich
(for the Grid team at DESY)
15/02/2006, 09:00
DESY is one of the world-wide leading centers for research with particle accelerators
and a center for research with synchrotron light. The hadron-electron collider HERA
houses four experiments which are taking data and will be operated until mid 2007.
DESY has been operating a LCG-based Grid infrastructure since 2004 which was set up
in the context of the EU e-science Project...
Santiago Gonzalez De La Hoz
(European Organization for Nuclear Research (CERN))
15/02/2006, 09:00
The ATLAS production system provides access to resources across several grid flavors.
Based on the experiences from the last data challenge the system has evolved. While
key aspects of the old system are kept (Supervisor and executors), new implementations
of the components aim for a more stable and scalable operation. An important aspect
is also the integration with the new data management...
Natalia Ratnikova
(FERMILAB)
15/02/2006, 09:00
Packaging and distribution of experiment-specific software becomes a complicated task
when the number of versions and external dependencies increases. In order to run a
single application, it is often enough to create an appropriate runtime environment
that ensures availability of required shared objects and data files. The idea of
distributing software applications based on runtime...
Mr
Andrey Bobyshev
(FERMILAB)
15/02/2006, 09:00
An ACL (access control list) is one of the few tools that network administrators
often use to limit access to various network objects, e.g. to restrict access
to certain network areas for specific traffic patterns. ACLs are also used
to control traffic forwarding, e.g. for implementing so-called policy-based routing.
Nowadays there is demand to update ACLs dynamically by...
Dr
Jens Jensen
(Rutherford Appleton Laboratory)
15/02/2006, 09:00
The most commonly deployed library for handling Secure Sockets Layer (SSL) and
Transport Layer Security (TLS) is OpenSSL. The library is used by the client to
negotiate connections to the server. It also offers features for caching parts of
the information that is required, thus speeding up the process and reducing the cost
of renegotiation. Those features are generally not used fully.
This...
Mr
Brian Davies
(LANCASTER UNIVERSITY), Dr
Roger JONES
(LANCAS)
15/02/2006, 09:00
The ESLEA (Exploitation of Switched Lightpaths for E-science Applications) project
has been working to put switched optical lightpath technology to the service of key
large scientific projects. Central to the activity is the provision of services to
ATLAS experiment. The project is facing the practical problems of finding the best
way of interfacing the power (but also the...
Dr
David Colling
(Imperial College London), Dr
Olivier van der Aa
(Imperial College London)
15/02/2006, 09:00
The LCG [1] have adopted a hierarchical Grid computing model which has a Tier 0
centre at CERN, national Tier 1 centres and regional Tier 2 centres. The roles of the
different Tier centres are described in the LCG Technical Design Report [2] and the
levels of service required from each level of Tier centre are described in the LCG
Memorandum of Understanding [3] . Many of the Tier 2 centres...
Ms
Natascia De Bortoli
(INFN - Naples)
15/02/2006, 09:00
Monitoring activity plays an essential role in Grid Computing: it deals with the
dynamics, variety and geographical distribution of Grid resources in order to measure
important parameters and provide relevant information of a Grid system related to
aspects such as usage, behaviour and performance. One of the basic requirements for a
monitoring service is the capability of detection and...
Dr
David Colling
(Imperial College London)
15/02/2006, 09:00
While remote control of, and data collection from, instrumentation was part of the
initial Grid concept most recent Grid developments have concentrated on the sharing
of distributed computational and storage resources. The GRIDCC project is working
to bring instrumentation back to the Grid alongside compute and storage resources.
To this end we have defined an Instrument Element (IE)...
Dr
Livio Fano'
(INFN - Universita' degli Studi di Perugia)
15/02/2006, 09:00
The CMS detector is a general purpose experiment for the LHC. At the designed maximum
luminosity more than 10**9 events/second will be produced, while the data acquisition
system will be able to manage a 100 Hz bandwidth. The trigger strategy for CMS is
organised in 2 steps: a first-level hardware trigger is implemented taking advantage
of the fast-response detectors, such as the mu-chambers and...
Dr
Lucas Taylor
(Northeastern University, Boston)
15/02/2006, 09:00
IGUANA is a well-established generic interactive visualisation framework
based on a C++ component model and open-source graphics products. We
describe developments since the last CHEP, including: the event display
toolkit, with examples from CMS and D0; the generic IGUANA visualisation
system for GEANT4; integration of ROOT and Hippoplot with IGUANA; and a
new lightweight and portable...
Mr
Andrey Bobyshev
(FERMILAB)
15/02/2006, 09:00
To satisfy the requirements of US-CMS, D0, CDF, SDSS and other experiments, Fermilab
has established an optical path to the StarLight exchange point in Chicago. It gives
access to multiple experimental networks, such as UltraScience Net, UltraLight,
UKLight, and others, with very high bandwidth capacity but generally sub-production
level service. The ongoing LambdaStation project is ...
Dr
Matthew Hodges
(RAL - CCLRC)
15/02/2006, 09:00
In preparation of the Grid for LHC start-up, and as part of the early production
service (under the UK GridPP project), we calculate efficiencies for jobs submitted
to the RAL Tier-1 Batch Farm. Early usage of the Farm was characterised by high
occupancy but low efficiency of Grid jobs; improvement has been observed over the
last six months. This behaviour has been examined by...
Mr
Colin Morey
(University of Manchester)
15/02/2006, 09:00
The HEP department of the University of Manchester has purchased a 1000 nodes
cluster. The cluster will be accessible to various VOs through EGEE/LCG grid
middleware. In this talk we will describe the management, security and monitoring
setup we have chosen for the administration of the cluster with minimum effort and
mostly from remote. From remote power up to centralised installation and...
Dr
Michael Gronager
(Copenhagen University)
15/02/2006, 09:00
LCG and ARC are two of the major production-ready Grid middleware solutions being
used by hundreds of HEP researchers every day. Even though the middlewares are based
on the same technology, there are substantial architectural and implementation
differences. An ordinary user faces difficulties trying to cross the boundaries of
the two systems: ARC clients so far have not been capable...
Akram Khan
(Brunel University)
15/02/2006, 09:00
The LCG-RUS project implemented the Global Grid Forum's Resource Usage Service
standard and made grid resources for LHC accountable in a common schema (GGF-URWG).
This project is part of the UK e-Science programme, with the purpose of moving grid
computing from e-Research toward a computational market. The LCG-RUS is complementary
work to the predecessor MCS (Market for Computational Service) RUS...
Bruno Hoeft
(Forschungszentrum Karlsruhe)
15/02/2006, 09:00
Besides a brief overview of the GridKa private and public LAN network, the
integration into the LHC-OPN network as well as the links to the T2 sites will be
presented, in view of the physical network layout as well as their higher
protocol layer implementations. Results of the feasibility discussion of
dynamic routes for all connections of FZK including all different types the...
Dr
David Evans
(FERMILAB)
15/02/2006, 09:00
The Shahkar Runtime Execution Environment Kit (ShREEK) is a threaded workflow
execution tool designed to run and intelligently manage arbitrary task workflows
within a batch job. The Kit consists of three main components, an executor that runs
tasks, a control point system to allow reordering of the workflow during execution
and a thread based pluggable monitoring framework that offers...
Mr
Andrey Shevel
(Petersburg Nuclear Physics Institute (Russia))
15/02/2006, 09:00
High Energy Physics analysis is often performed on midrange computing clusters
(10-50 machines) in relatively small physics groups (3-10 physicists). Such
clusters are usually built from commodity equipment and run under one
of several Linux flavors. In an environment of limited resources, it is
important to choose the "right" cluster architecture to achieve maximum performance.
We...
Dr
Iosif Legrand
(CALTECH)
15/02/2006, 09:00
The MonaLISA (Monitoring Agents in A Large Integrated Services Architecture) system
provides a distributed service for monitoring, control and global optimization of
complex grid systems and networks for high energy physics, and many other fields of
data-intensive science. It is based on an ensemble of autonomous multi-threaded,
agent-based subsystems which are registered as dynamic...
Harald Vogt
(DESY Zeuthen)
15/02/2006, 09:00
Building a software repository of simulation and reconstruction tools for a future
International Linear Collider (ILC) detector we started with applications based on
code used in the LEP experiments with Fortran and C as programming languages. All
future software development for the ILC is done using modern OO languages, mainly C++
and Java. But for comparisons and providing a smooth...
Muon detector calibration in the ATLAS experiment: online data extraction and data distribution
Dr
Enrico Pasqualucci
(Istituto Nazionale di Fisica Nucleare (INFN), Roma)
15/02/2006, 09:00
In the ATLAS experiment, fast calibration of the detector is vital to feed prompt
data reconstruction with fresh calibration constants. We present the use case of the
muon detector, where a high rate of muon tracks (small data size) is needed to
accomplish calibration requirements. The ideal place to get data suitable for muon
detector calibration is the second level trigger, where the...
Dr
Barbara Mascialino
(INFN Genova), Prof.
Gerard Montarou
(Univ. Blaise Pascal Clermont-Ferrand), Dr
Maria Grazia Pia
(INFN GENOVA), Dr
Petteri Nieminen
(ESA), Prof.
Philippe Moretto
(CENBG), Dr
Riccardo Capra
(INFN Genova), Dr
Sebastien Incerti
(CENBG),
Ziad Francis
(Univ. Blaise Pascal Clermont-Ferrand)
15/02/2006, 09:00
The extension of Geant4 simulation capabilities down to the electronvolt scale
is required for precision studies of radiation effects on electronics and
detector components, and for micro-/nano-dosimetry studies in various
experimental environments.
A project is in progress to extend the coverage of Geant4 physics to this energy
range. The complexity of the problem domain is discussed...
Dr
Frank van Lingen
(CALIFORNIA INSTITUTE OF TECHNOLOGY)
15/02/2006, 09:00
Abstract: We describe a set of Web Services, created to support scientists in
performing distributed production tasks (e.g. Monte Carlo). The Web Services
described in this paper provide a portal for scientists to execute different
production workflows which can consist of many consecutive steps. The main design
goal of the Web Services discussed is to provide controlled access for...
Dr
Jiri Chudoba
(Institute of Physics, Prague)
15/02/2006, 09:00
Many computing farms use PBSPro or its free version OpenPBS, or the Torque and Maui
products, for local batch system management. These packages are delivered
with graphical tools for a status overview, but summary and detailed reports from
accounting log files are not available. This poster describes a set of tools we are
using for an overview of resource consumption in the last few...
Maria Cristina Vistoli
(Istituto Nazionale di Fisica Nucleare (INFN))
15/02/2006, 09:00
The production and analysis frameworks for LHC experiments are demanding advanced
features in the middleware functionality and a complete integration with the
experiment specific software environment. They also require an effective and
distributed test platform where the integrated middleware functionality is verified
and certified. The deployment in a production infrastructure of such...
Hans von der Schmitt
(MPI for Physics, Munich),
Rob McPherson
(University of Victoria, TRIUMF)
15/02/2006, 09:00
Commissioning of the ATLAS detector at the CERN Large Hadron Collider
(LHC) includes, as partially overlapping phases, subsystem standalone
work, integration of systems into the full detector, cosmics data taking,
single beam running and finally first collisions. These tasks require
services like DAQ with data recording to Tier0 and distributed data
management, databases,...
Mr
Leandro Franco
(IN2P3/CNRS Computing Centre)
15/02/2006, 09:00
Managing the temporary disk space used by jobs in a farm can be an operational issue.
Efforts have been put into controlling this space by the batch scheduler to make sure
the job will use at most the requested amount of space, and that this space is
cleaned up after the end of the job. ScratchFS is a virtual file system that
addresses this problem for grid as well as conventional jobs at the...
Mr
Aatos Heikkinen
(Helsinki Institute of Physics)
15/02/2006, 09:00
B tagging is an important tool for separating the LHC Higgs events with associated b
jets from the Drell-Yan background. We extend the standard neural network (NN)
approach using a multilayer perceptron in b tagging [1] to include self-organizing feature maps.
We demonstrate the use of the self-organizing maps (SOM_PAK program package) and the
learning vector quantization (LVQ_PAK). A...
Sanjay Ranka
(University of Florida)
15/02/2006, 09:00
Grid computing is becoming a popular way of providing high performance computing for
many data intensive, scientific applications. The execution of user applications must
simultaneously satisfy both job execution constraints and system usage policies. The
SPHINX middleware addresses both these issues. In this paper, we present performance
results of SPHINX on Open Science Grid. The...
Mr
Randolph J. Herber
(FNAL)
15/02/2006, 09:00
(For the SAMGrid Team)
SQLBuilder's purpose is to translate selection criteria in a high-level form to SQL
query statements. The internal design is intended to permit easy changes to the
selection criteria available and to permit retargeting the specific dialect of SQL
generated. The initial target language will be Oracle 9i SQL. The input language
will be defined in a formal grammar...
Luca Magnoni
(INFN - CNAF),
Riccardo Zappi
(INFN - CNAF)
15/02/2006, 09:00
LHC analysis farms - present at sites collaborating with LHC experiments - have been
used in the past for analyzing data coming from an experiment's production center.
With time such facilities were provided with high performance storage solutions in
order to respond to the demand for big capacity and fast processing capabilities.
Today, Storage Area Network solutions are commonly deployed...
Timothy Adam Barrass
(University of Bristol)
15/02/2006, 09:00
Distributed data management at LHC scales is a staggering task, accompanied by
equally challenging practical management issues with storage systems and wide-area
networks. The CMS data transfer management system, PhEDEx, is designed to handle this
task with minimum operator effort, automating the workflows from large scale
distribution of HEP experiment datasets down to reliable and scalable...
Robert GARDNER
(UNIVERSITY OF CHICAGO)
15/02/2006, 09:00
The purpose of the Teraport project is to provide computing and network
infrastructure for a university-based, multi-disciplinary, Grid-enabled analysis
platform with superior network connectivity to both domestic and international
networks. The facility is configured and managed as part of larger Grid
infrastructures, with specific focus on integration and interoperability with...
Elena Slabospitskaya
(State Res.Center of Russian Feder. Inst.f.High Energy Phys. (IFVE))
15/02/2006, 09:00
A Directed Acyclic Graph (DAG) can be used to represent a set of programs where the
input, output or execution of one or more programs is dependent on one or more other
programs. We developed a basic test suite for DAG jobs. It consists of 2 main parts:
a) functionality tests using the CLI (in Perl). The generation of the DAG with
arbitrary structure and different JDL-attributes for...
Dr
Antonio Sidoti
(INFN Roma1 and University "La Sapienza")
15/02/2006, 09:00
The ATLAS experiment at the LHC proton-proton collider at CERN will be faced with
several technological challenges. A three level trigger and data acquisition system
has been designed to reduce the 40 MHz bunch-crossing frequency, corresponding to an
interaction rate of 1 GHz at the design instantaneous luminosity, to the ~100 Hz
allowed by the permanent storage system. The capability to...
Dr
Paolo Meridiani
(INFN Sezione di Roma 1)
15/02/2006, 09:00
The design goal of the CMS electromagnetic calorimeter is to reach an excellent
energy resolution; several aspects concur to the fulfillment of this ambitious goal.
An enormous quantity of hardware monitoring data will be available, together with a
laser monitoring system that will be able to follow quasi on-line the change of
transparency of the crystals due to radiation damage. This...
Dr
Alberto Ribon
(CERN), Dr
Andreas Pfeiffer
(CERN), Dr
Barbara Mascialino
(INFN Genova), Dr
Maria Grazia Pia
(INFN GENOVA), Dr
Paolo Viarengo
(IST Genova)
15/02/2006, 09:00
Statistical methods play a significant role throughout the life-cycle of high energy
physics experiments. Only a few basic tools for statistical analysis were available
in the public domain FORTRAN libraries for high energy physics. Nowadays the
situation has hardly changed, even among the libraries of the new generation.
The present project in progress develops an object-oriented...
Stefano Veneziano
(Istituto Nazionale di Fisica Nucleare Sezione di Roma 1)
15/02/2006, 09:00
The ATLAS Level-1 Barrel system is devoted to identify muons crossing the two outer
Resistive Plate Chambers stations of the Barrel spectrometer, passing a set of
programmable pT thresholds, to find their position with a granularity of
Delta Eta x Delta Phi = 0.1 x 0.1, and to associate them to a specific bunch crossing number. The
system sends this trigger information to the Central Trigger...
Robert Gardner
(University of Chicago)
15/02/2006, 09:00
The Midwest U.S. ATLAS Tier2 facility being deployed jointly by the University of
Chicago and Indiana University is described in terms of a set of functional
capabilities and operational provisions in support of ATLAS managed Monte Carlo
production and distributed analysis of datasets by individual physicist-users. We
describe a two-site shared systems administration model as well as the...
Prof.
Homer Alfred Neal
(University of Michigan)
15/02/2006, 09:00
We will report on a set of studies we have conducted to assess the feasibility of
measuring the polarization of lambda_b hyperons in the CERN ATLAS experiment by
making the first successful adaptation of the generation package EvtGen for
polarized spin-1/2 particles. The simulations were based on the ATLAS version
of EvtGen, a product of the ATLAS EvtGen project, reported in other ATLAS...
Dr
Lorenzo Moneta
(CERN)
15/02/2006, 09:00
Aiming to provide and support a coherent set of libraries, the mathematical
functionality of the ROOT project has been reorganized following a merge of the ROOT
and SEAL activities. Two new libraries, coded in C++, have been released in ROOT
version 5: MathCore (basic functionality) and MathMore (functionality for advanced
users). We present the structure and design of these new...
Mr
Krzysztof Wrona
(Deutsches Elektronen-Synchrotron (DESY), Germany)
15/02/2006, 09:00
The HERA luminosity upgrade and enhancements of the detector have led to considerably
increased demands on computing resources for the ZEUS experiment. In order to meet
these higher requirements, the ZEUS computing model has been extended to support
computations in the Grid environment.
We show how to use the Grid services in the production system of a real experiment
and point out the...
Heidi Alvarez
(Florida International University), Dr
Paul Avery
(University of Florida)
15/02/2006, 09:00
Florida International University (FIU), in collaboration with partners at Florida
State University (FSU), the University of Florida (UF), and the California
Institute of Technology (Caltech), in cooperation with the National Science
Foundation, is creating and operating an interregional Grid-enabled Center for
High-Energy Physics Research and Educational Outreach (CHEPREO) at FIU,...
Gordon Watts
(University of Washington)
15/02/2006, 09:00
DØ is a traditional High Energy Physics collider experiment located at the Tevatron
at Fermilab. As in recent past and most future experiments, almost all computing
work is done on Linux using standard open source tools like the gcc compiler, the
make utility, and ROOT. I have been using the Microsoft platform for quite some time
to develop physics tools and algorithms. Once developed...
Aatos Heikkinen
(Helsinki Institute of Physics)
15/02/2006, 09:00
We present an investigation to validate Geant4 [1] Bertini cascade
nuclide production by proton- and neutron-induced reactions on various target
elements [2].
The production of residual nuclides is calculated in the framework of an
intra-nuclear cascade, pre-equilibrium, fission, and evaporation model [3].
A 132 CPU Opteron Linux cluster running the NPACI Rocks Cluster
Distribution [4,...
Dr
Alessandra Forti
(University of Manchester)
15/02/2006, 09:00
The development of the grid and the acquisition of large clusters
to support major HEP experiments on the grid have triggered different requests.
One is from local physicists from the major VOs to have privileged access to their
resources, and the second is to support smaller groups that will never have access to
this amount of resources. Unfortunately both these categories of users up...
Mr
Francesco Maria Taurino
(CNR/INFM - INFN - Dip. di Fisica Univ. di Napoli "Federico II")
15/02/2006, 09:00
Virtualization is a methodology of dividing the resources of a computer into multiple
execution environments, by applying one or more concepts or technologies such as
hardware and software partitioning, time-sharing, partial or complete machine
simulation, emulation, quality of service, and many others. These techniques can be
used to consolidate the workloads of several under-utilized...
Andrew Hanushevsky
(Stanford Linear Accelerator Center)
15/02/2006, 09:00
Server clustering is an effective method in increasing the pool of resources
available to applications. Many clustering mechanisms exist; each with its own
strengths as well as weaknesses. This paper describes the mechanism used by xrootd to
provide a uniform data access space consisting of an unbounded number of independent
distributed servers. We show how the mechanism is especially...
Les Robertson
(CERN)
15/02/2006, 09:30
Ruth Pordes
(Fermi National Accelerator Laboratory (FNAL))
15/02/2006, 10:00
Dr
Peter Elmer
(PRINCETON UNIVERSITY)
15/02/2006, 11:15
Rene Brun
(CERN)
15/02/2006, 11:45
Dr
Rodney Walker
(SFU)
15/02/2006, 14:00
Grid middleware and e-Infrastructure operation
oral presentation
The Condor-G meta-scheduling system has been used to create a single Grid of GT2
resources from LCG and GridX1, and ARC resources from NorduGrid. Condor-G provides
the submission interfaces to GT2 and ARC gatekeepers, enabling transparent submission
via the scheduler. Resource status from the native information systems is converted
to the Condor ClassAd format and used for matchmaking to...
Dr
Eric van Herwijnen
(CERN)
15/02/2006, 14:00
LHCb has an integrated Experiment Control System (ECS), based on the commercial
SCADA system PVSS. The novelty of this control system is that, in addition to the
usual control and monitoring of all experimental equipment, it also provides control
and monitoring for software processes, namely the on-line trigger algorithms.
The trigger decisions are computed by algorithms on an event...
Dr
Andrei TSAREGORODTSEV
(CNRS-IN2P3-CPPM, MARSEILLE)
15/02/2006, 14:00
Distributed Event production and processing
oral presentation
DIRAC is the LHCb Workload and Data Management system used for Monte Carlo
production, data processing and distributed user analysis. It is designed to be light
and easy to deploy which allows integrating in a single system different kinds of
computing resources including stand-alone PC's, computing clusters or Grid systems.
DIRAC uses the paradigm of the overlay network of 'Pilot Agents',...
Oliver Gutsche
(FERMILAB)
15/02/2006, 14:00
The CMS computing model provides reconstruction and access to recorded data of the
CMS detector as well as to Monte Carlo (MC) generated data. Due to the increased
complexity, these functionalities will be provided by a tier structure of globally
located computing centers using GRID technologies. In the CMS baseline, user access
to data is provided by the CMS Remote Analysis Builder...
Dr
Douglas Smith
(STANFORD LINEAR ACCELERATOR CENTER)
15/02/2006, 14:00
In the increasingly distributed collaborations of today's experiments, there is a
need to bring people together and manage all discussions. The main ways for doing
this on-line are the use of e-mail or web forums. HyperNews is a discussion
management system which bridges these two, by including the use of e-mail for input,
but also archiving the discussions in easy-to-access web pages. The...
Mr
Rohitashva Sharma
(BARC)
15/02/2006, 14:00
It is important to know the Quality of Service offered by nodes in a cluster, both for
users and for load balancing programs like LSF, PBS and CONDOR when submitting a job
to a given node. This will help in achieving optimal utilization of nodes in a cluster.
Simple metrics like load average, memory utilization etc do not adequately describe
load on the nodes or Quality of Service (QoS)...
Mr
David Primor
(Tel Aviv University, ISRAEL (CERN))
15/02/2006, 14:00
This talk presents new methods to address the problem of muon track identification
in the monitored drift tube chambers (MDT) of the ATLAS Muon Spectrometer. Pattern
recognition techniques, employed by the current reconstruction software suffer when
exposed to the high background rates expected at the LHC. We propose new techniques,
exploiting existing knowledge of the detector...
Dr
GENE VAN BUREN
(BROOKHAVEN NATIONAL LABORATORY)
15/02/2006, 14:18
The Solenoid Tracker At RHIC (STAR) experiment has observed luminosity fluctuations
on time scales much shorter than expected during its design and construction. These
operating conditions lead to rapid variations in distortions of data from the STAR
TPC, which are dependent upon the luminosity, and planned techniques for calibrating
these distortions became insufficient to provide high...
Mr
Ashiq Anjum
(University of the West of England)
15/02/2006, 14:20
Results from and progress on the development of a Data Intensive and Network Aware
(DIANA) scheduling engine, primarily for data-intensive sciences such as physics
analysis, are described. Scientific analysis tasks can involve thousands of
computing, data handling, and network resources and the size of the input and
output files and the amount of overall storage space allotted to a user...
Giuseppe AVELLINO
(Datamat S.p.A.)
15/02/2006, 14:20
Grid middleware and e-Infrastructure operation
oral presentation
Contemporary Grids are characterized by a middleware that provides the necessary
virtualization of computation and data resources for the shared working environment
of the Grid. In a large-scale view, different middleware technologies and
implementations have to coexist. The SOA approach provides the needed architectural
backbone for interoperable environments, where different...
Mr
Jakub Moscicki
(CERN), Dr
Maria Grazia Pia
(INFN GENOVA), Dr
Patricia Mendez Lorenzo
(CERN), Dr
Susanna Guatelli
(INFN Genova)
15/02/2006, 14:20
Distributed Event production and processing
oral presentation
The quantitative results of a study concerning Geant4 simulation in a distributed
computing environment (local farm and LCG GRID) are presented. The architecture of
the system, based on DIANE, is presented; it allows configuring a Geant4 application
transparently for sequential execution (on a single PC), and for parallel execution
on a local PC farm or on the GRID. Quantitative results...
Dr
Witold Pokorski
(CERN)
15/02/2006, 14:20
The Geometry Description Markup Language (GDML) is a specialised XML-based language
designed as an application-independent persistent format for describing the detector
geometries. It serves to implement 'geometry trees' which correspond to the hierarchy
of volumes a detector geometry can be composed of, and to allow identification of the
position of individual solids, as well as to describe the...
Mr
Andrey Bobyshev
(FERMILAB)
15/02/2006, 14:20
High Energy Physics collaborations consist of hundreds to thousands of physicists
and are world-wide in scope. Experiments and applications now running, or starting
soon, need the data movement capabilities now available only on advanced and/or
experimental networks. The Lambda Station project steers selectable traffic through
site infrastructure and onto these "high-impact" wide-area ...
Dr
Muge Karagoz Unel
(University of Oxford)
15/02/2006, 14:20
The silicon system of the ATLAS Inner Detector consists of about 6000 modules in its
Semiconductor Tracker and Pixel Detector. Therefore, the offline global fit alignment
algorithm has to deal with solving a problem of up to 36000 degrees of freedom. 32-bit
single-CPU platforms were foreseen to be unable to handle such large-size operations
needed by the algorithm. The proposed solution is...
Dr
Wainer Vandelli
(Università and INFN Pavia)
15/02/2006, 14:20
ATLAS is one of the four experiments under construction along the Large Hadron
Collider (LHC) ring at CERN. The LHC will produce interactions at a center of mass
energy equal to $\sqrt s~=~14~TeV$ at a $40~MHz$ rate. The detector consists of more
than 140 million electronic channels. The challenging experimental environment and
the extreme detector complexity impose the necessity of a...
Dr
Anselmo Cervera Villanueva
(University of Geneva)
15/02/2006, 14:36
RecPack is a general reconstruction toolkit, which can be used as a base for any
reconstruction program for a HEP detector. Its main functionalities are track
finding, fitting, propagation and matching. Track fitting can be done either via
conventional least squares methods or Kalman Filter techniques. The latter, in
conjunction with the matching package, allows simultaneous track finding...
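The Kalman Filter technique mentioned above can be sketched minimally as follows; this is an illustration of the idea only, not RecPack's actual interface, and the function name and 1-D scalar model are hypothetical:

```python
# Minimal sketch of a single Kalman Filter measurement update for a 1-D
# track parameter. Illustrative only; not RecPack's API.

def kalman_update(x_pred, p_pred, z, r):
    """Combine a predicted state x_pred (variance p_pred) with a new hit
    measurement z (variance r); return the updated state and variance."""
    k = p_pred / (p_pred + r)          # Kalman gain: how much to trust the hit
    x_new = x_pred + k * (z - x_pred)  # pull the prediction towards the hit
    p_new = (1.0 - k) * p_pred         # uncertainty shrinks after the update
    return x_new, p_new

# With equal prediction and measurement variances, the update averages
# the two estimates and halves the variance.
x, p = kalman_update(1.0, 1.0, 3.0, 1.0)
print(x, p)  # 2.0 0.5
```

In a full track fit this update is applied hit by hit along the trajectory, alternating with a propagation (prediction) step between detector surfaces.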
Abhishek Singh RANA
(University of California, San Diego, CA, USA)
15/02/2006, 14:40
Grid middleware and e-Infrastructure operation
oral presentation
We report on first experiences with building and operating an Edge Services
Framework (ESF) based on Xen virtual machines instantiated via the Workspace Service
available in Globus Toolkit, and developed as a joint project between EGEE, LCG, and
OSG. Many computing facilities are architected with their compute and storage
clusters behind firewalls. Edge Services are instantiated on a small...
Dr
Ulrik Egede
(IMPERIAL COLLEGE LONDON)
15/02/2006, 14:40
Physics analysis of large amounts of data by many users requires the usage of Grid
resources. It is however important that users can see a single environment for
developing and testing algorithms locally and for running on large data samples on
the Grid. The Ganga job wizard, developed by LHCb and ATLAS, provides physicists such
an integrated environment for job preparation, bookkeeping...
Karl Harrison
(High Energy Physics Group, Cavendish Laboratory)
15/02/2006, 14:40
Distributed Event production and processing
oral presentation
Ganga is a lightweight, end-user tool for job submission and monitoring and provides
an open framework for multiple applications and submission backends. It is developed
in a joint effort in LHCb and ATLAS. The main goal of Ganga is to effectively enable
large-scale distributed data analysis for physicists working in the LHC experiments.
Ganga offers simple, pleasant and consistent user...
Prof.
Manuel Delfino Reznicek
(Port d'Informació Científica)
15/02/2006, 14:40
Efficient hierarchical storage management of small size files continues to be a
challenge. Storing such files directly on tape-based tertiary storage leads to
extremely low operational efficiencies. Commercial tape virtualization products are
few, expensive and only proven in mainframe environments. Asking the users to deal
with the problem by "bundling" their files leads to a plethora of...
Mrs
Doris Burckhart
(CERN)
15/02/2006, 14:40
The Atlas Data Acquisition (DAQ) and High Level Trigger (HLT) software system will be
comprised initially of 2000 PC nodes which take part in the control, event readout,
second level trigger and event filter operations. This large number of PCs will only
be purchased shortly before data taking in 2007. The large CERN IT lxbatch facility provided
the opportunity to run in July 2005 online...
Dr
Cibran Santamarina Rios
(European Organization for Nuclear Research (CERN))
15/02/2006, 14:40
In this presentation we will discuss the design and functioning of a new tool that
runs the ATLAS High Level Trigger Software on Event Summary Data (ESD) files, the
format for data analysis in the experiment. An example of how to implement a sequence
of algorithms based on the electron selection will be shown.
Dr
Maxim POTEKHIN
(BROOKHAVEN NATIONAL LABORATORY)
15/02/2006, 14:40
The STAR Collaboration is currently migrating its simulation software, based on
Geant3, to the Root-based Virtual Monte Carlo Framework. One critical component of
the framework is the mechanism of the Geometry Description, which comprises both the
geometry model as used in the application, and the external language that allows the
users to define and maintain the detector configuration on...
Mr
Tapio Lampen
(HELSINKI INSTITUTE OF PHYSICS)
15/02/2006, 14:54
Modern tracking detectors are composed of a large number of modules assembled in a
hierarchy of support structures. The sensor modules are assembled in ladders or
petals. Ladders and petals in turn are assembled in cylindrical or disk-like layers
and layers are assembled to make a complete tracking device. Sophisticated
geometrical calibration is essential in this kind of detector...
Dr
Douglas Smith
(STANFORD LINEAR ACCELERATOR CENTER)
15/02/2006, 15:00
Distributed Event production and processing
oral presentation
For the BaBar computing group:
Two years ago BaBar changed from using a database event storage technology to the use
of ROOT-files. This change drastically affected the simulation production within the
experiment, as well as the bookkeeping and the distribution of the data. Despite
these large changes to production, events were produced as needed and on time for
analysis. In fact the...
Prof.
Kaushik De
(UNIVERSITY OF TEXAS AT ARLINGTON)
15/02/2006, 15:00
A new offline processing system for production and analysis, Panda, has been
developed for the ATLAS experiment and deployed in OSG. ATLAS will accrue tens of
petabytes of data per year, and the Panda design is accordingly optimized for data
intensive processing. Its development followed three years of production experience,
the lessons from which drove a markedly different design for the...
Dr
Christos Leonidopoulos
(CERN)
15/02/2006, 15:00
The Physics and Data Quality Monitoring framework (DQM) aims at providing a
homogeneous monitoring environment across various applications related to data taking
at the CMS experiment. Initially developed as a monitoring application for the
1000-node dual-CPU (High-Level) Trigger Farm, it quickly expanded its scope to accommodate
different groups across the experiment. The DQM organizes the...
Ms
Niranjani S
(Department of Information Technology, Mohamed Sathak A.J. College of Engineering, 43, Old Mahabalipuram Road, Sipcot IT Park, Egatur, Chennai - 603 103, India.)
15/02/2006, 15:00
The enormity of data obtained in scientific experiments often necessitates a suitable
graphical representation for analysis. Surface contour is one such graphical
representation which renders a pictorial view that aids in easy data interpretation.
It is essentially a two-dimensional visualization of a three-dimensional surface
plot. Very recently, it has been shown that Super Heavy...
Mr
Jeremy Herr
(University of Michigan), Dr
Steven Goldfarb
(University of Michigan)
15/02/2006, 15:00
The size and geographical diversity of the LHC collaborations present new challenges
for communication and training. The Web Lecture Archive Project (WLAP), a joint
project between the University of Michigan and CERN Academic and Technical Training,
has been involved in recording, archiving and disseminating physics lectures and
software tutorials for CERN and the ATLAS Collaboration since...
Mr
Marcus Hardt
(Unknown)
15/02/2006, 15:00
Grid middleware and e-Infrastructure operation
oral presentation
One problem in distributed computing is bringing together application developers
and resource providers to ensure that applications work well on the resources
provided. A layer of abstraction between resources and applications provides new
possibilities in designing Grid solutions.
This paper compares different virtualisation environments, among which are Xen
(developed at the...
Vakhtang Tsulaia
(UNIVERSITY OF PITTSBURGH)
15/02/2006, 15:12
This talk addresses two issues related to the implementation of a variable software
description of the ATLAS detector. The first topic is how we implement an evolving
description of an evolving ATLAS detector, including special configurations at
varying levels of realism, in a way which plugs into the simulation and
reconstruction software. The second topic is how time-dependent...
Andrei Kazarov
(Petersburg Nuclear Physics Institute (PNPI))
15/02/2006, 16:00
In order to meet the requirements of ATLAS data taking, the ATLAS Trigger-DAQ system
is composed of O(1000) applications running on more than 2000 computers in a
network. At such a system size, software and hardware failures are quite frequent. To minimize
system downtime, the Trigger-DAQ control system shall include advanced verification
and diagnostics facilities. The operator should use tests and...
Mr
Jeremy Herr
(University of Michigan), Dr
Steven Goldfarb
(University of Michigan)
15/02/2006, 16:00
The major challenges preventing the wide-scale generation of web lecture recordings
include the compactness and price of the required hardware, the speed of the
compression and posting operations, and the need for a human camera operator. We will
report on efforts that have led to major progress in addressing each of these issues.
We will describe the design, prototyping and pilot...
Dr
Hans Wenzel
(FERMILAB)
15/02/2006, 16:00
We report on the ongoing evaluation of new 64-bit processors as they become available
to us. We present the results of benchmarking these systems in various operating
modes, together with measurements of their power consumption. To measure performance
we use HEP- and CMS-specific applications including: the analysis tool ROOT (C++), the Monte Carlo
generator Pythia (FORTRAN), OSCAR (C++) the GEANT 4...
Mr
Pavel JAKL
(Nuclear Physics Inst., Academy of Sciences - Czech Republic)
15/02/2006, 16:00
With its increasing data samples, the RHIC/STAR experiment has faced a challenging
data management dilemma: solutions using cheap disks attached to processing nodes
have rapidly become more economical than standard centralized storage. At the cost
of added data management effort, the STAR experiment moved to a multiple-component
locally distributed data model rendered viable by the...
Abhishek Singh Rana
(UCSD)
15/02/2006, 16:00
Grid middleware and e-Infrastructure operation
oral presentation
Securely authorizing incoming users with appropriate privileges on distributed grid
computing resources is a difficult problem. In this paper we present the work of the
Open Science Grid Privilege Project which is a collaboration of developers from
universities and national labs to develop an authorization infrastructure to provide
finer-grained authorization consistently to all grid...
Dr
Hartmut Stadie
(Deutsches Elektronen-Synchrotron (DESY), Germany)
15/02/2006, 16:00
Distributed Event production and processing
oral presentation
The detector and collider upgrades for the HERA-II running at DESY have considerably
increased the demand on computing resources for Monte Carlo production for the ZEUS
experiment. To close the gap, an automated production system capable of using Grid
resources has been developed and commissioned.
During its first year of operation, 400 000 Grid jobs were submitted by the
production...
Dr
Liliana Teodorescu
(Brunel University)
15/02/2006, 16:18
Evolutionary Algorithms, with Genetic Algorithms (GA) and Genetic Programming (GP) as
the most known versions, have a gradually increasing presence in High Energy Physics.
They were proven successful in solving problems such as regression, parameter
optimisation and event selection. Gene Expression Programming (GEP) is a new
evolutionary algorithm that combines the advantages of both GA...
Mr
Bartlomiej Pawlowski
(CERN), Mr
Nick Ziogas
(CERN), Mr
Wim Van Leersum
(CERN)
15/02/2006, 16:20
CRA is a multi-layered system with a web-based front end providing centralized
management and rules enforcement in a complex, distributed computing environment such
as CERN. Much like an orchestra conductor's, CRA's role is essential and
multi-functional: account management, resource usage and consistency controls for every
central computing service at CERN, with about 75000 active accounts, is...
Mr
Philippe Canal
(FERMILAB)
15/02/2006, 16:20
Grid middleware and e-Infrastructure operation
oral presentation
We will describe the architecture and implementation of the new accounting service
for the Open Science Grid. Gratia's main goal is to provide the OSG stakeholders
with a reliable and accurate set of views of the usage of resources across the OSG.
Gratia implements a service oriented, secure framework for the necessary collectors
and sensors. Gratia also provides repositories and access...
Mr
Carsten Germer
(DESY IT)
15/02/2006, 16:20
Taking the implementation of ZOPE/ZMS at DESY as an example we will show and discuss
various approaches and procedures to introduce a Content Management System in a HEP
Institute.
We will show how requirements were gathered to make decisions regarding software and
hardware.
How existing systems and management procedures needed to be taken into consideration.
How the project was...
Mr
Fabrizio Furano
(INFN sez. di Padova)
15/02/2006, 16:20
The latencies induced by network communication often play a big role in reducing the
performance of systems that access large amounts of data in a distributed
environment. The problem is present in Local Area Networks, but it is much more
evident in Wide Area Networks. It is generally perceived as a critical problem which
makes it very difficult to access remote data. However, a more...
Dr
Dirk Duellmann
(CERN IT/LCG 3D project)
15/02/2006, 16:20
Distributed Event production and processing
oral presentation
The LCG Distributed Deployment of Databases (LCG 3D) project is a joint activity
between LHC experiments and LCG tier sites to co-ordinate the set-up of database
services and facilities for relational data transfers as part of the LCG
infrastructure. The project goal is to provide a consistent way of accessing database
services at CERN tier 0 and collaborating LCG tier sites to achieve a...
Dr
Benedetto Gorini
(CERN)
15/02/2006, 16:20
This paper introduces the Log Service, developed at CERN within the ATLAS TDAQ/DCS
framework. This package remedies the long standing problem of attempting to direct
messages to the standard output and/or error in diskless nodes with no terminal. The
Log Service provides a centralized mechanism for archiving and retrieving qualified
information (Log Messages) created by TDAQ applications...
Dr
Valeri FINE
(BROOKHAVEN NATIONAL LABORATORY)
15/02/2006, 16:20
This talk presents an overview of the main components of a unique set of tools, in
use in the STAR experiment, born from the fusion of two advanced technologies: the
ROOT framework and libraries and the Qt GUI and event handling package.
Together, they allow the creation of software packages and help resolve complex
data-analysis or visualization problems, enhance computer simulation or help...
Dr
Christopher Jones
(CORNELL UNIVERSITY)
15/02/2006, 16:36
In order to properly understand the data taken for an HEP Event, information external
to the Event must be available. Such information includes geometry descriptions,
calibration values, magnetic field readings and many more. CMS has chosen a unified
approach to accessing such information via a data model based on the concept of an
'Interval of Validity', IOV. This data model is...
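The Interval-of-Validity concept lends itself to a simple sketch: conditions records keyed by the first run for which they are valid, each remaining valid until superseded by the next record. The class and method names below are hypothetical illustrations, not the CMS implementation:

```python
import bisect

# Hypothetical IOV store: each payload (e.g. a calibration constant) is
# keyed by the first run in which it is valid, and stays valid until the
# next record's start run. Names are illustrative only.
class IOVStore:
    def __init__(self):
        self._starts = []   # sorted first-valid run numbers
        self._values = []   # payloads, parallel to self._starts

    def insert(self, first_run, payload):
        i = bisect.bisect_left(self._starts, first_run)
        self._starts.insert(i, first_run)
        self._values.insert(i, payload)

    def get(self, run):
        # Latest record whose start run does not exceed 'run'.
        i = bisect.bisect_right(self._starts, run) - 1
        if i < 0:
            raise KeyError("no conditions valid for run %d" % run)
        return self._values[i]

store = IOVStore()
store.insert(1, {"pedestal": 2.1})
store.insert(100, {"pedestal": 2.4})
print(store.get(57))   # {'pedestal': 2.1}
print(store.get(100))  # {'pedestal': 2.4}
```

The point of the unified model is that geometry, calibration and field data can all be served through the same lookup-by-validity interface.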
Miguel Branco
(CERN)
15/02/2006, 16:40
Distributed Event production and processing
oral presentation
To validate its computing model, ATLAS, one of the four LHC experiments, conducted in
Q4 of 2005 a Tier-0 scaling test. The Tier-0 is responsible for prompt reconstruction
of the data coming from the event filter, and for the distribution of this data and
the results of prompt reconstruction to the tier-1s. Handling the unprecedented data
rates and volumes will pose a huge challenge on the...
Dr
Douglas Smith
(STANFORD LINEAR ACCELERATOR CENTER)
15/02/2006, 16:40
For the BaBar Computing Group:
Two years ago, the BaBar experiment changed its event store from an object-oriented
database system to one based on ROOT files. A new bookkeeping system was developed
to manage the meta-data of these files. This system has been in constant use since
that time, and has successfully provided the needed meta-data information for users'
analysis jobs,...
Mr
Adrian Casajus Ramo
(Departamento d' Estructura i Constituents de la Materia)
15/02/2006, 16:40
Grid middleware and e-Infrastructure operation
oral presentation
DIRAC is the LHCb Workload and Data Management System and is based on a
service-oriented architecture. It enables generic distributed computing with
lightweight Agents and Clients for job execution and data transfers. DIRAC code base
is 99% python with all remote requests handled using the XML-RPC protocol. DIRAC is
used for the submission of production and analysis jobs by the LHCb...
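The XML-RPC client/service pattern the abstract describes can be sketched with Python's standard library. The service and method names below are made up for illustration; the real DIRAC services are far more elaborate and add security and authentication layers:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Toy client/service pair: remote requests handled over XML-RPC.
# The 'submitJob' method and its reply string are invented for this sketch.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda name: "job '%s' queued" % name, "submitJob")
port = server.server_address[1]  # port 0 above means: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

client = ServerProxy("http://127.0.0.1:%d" % port)
reply = client.submitJob("bd2jpsiks_mc")
print(reply)  # job 'bd2jpsiks_mc' queued
server.shutdown()
```

Because XML-RPC marshals plain Python types over HTTP, lightweight Agents and Clients like these need no heavyweight middleware on the worker side.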
Dr
Stefan Stancu
(University of California, Irvine)
15/02/2006, 16:40
The ATLAS experiment will rely on Ethernet networks for several purposes. A control
network will provide infrastructure services and will also handle the traffic
associated with control and monitoring of trigger and data acquisition (TDAQ)
applications. Two independent data networks (dedicated TDAQ networks) will be used
exclusively for transferring the event data within the High Level...
Dr
Christopher Pinkenburg
(BROOKHAVEN NATIONAL LABORATORY)
15/02/2006, 16:40
The PHENIX experiment took 2*10^9 CuCu events and more than 7*10^9 pp events during
Run5 of RHIC. The total stored raw data volume was close to 500 TB.
Since our DAQ bandwidth allowed us to store all events selected by the low level
triggers, we did not filter events with an online processor farm which we refer to as
level 2 trigger. Instead we ran the level 2 triggers offline in the...
Vakhtang Tsulaia
(UNIVERSITY OF PITTSBURGH)
15/02/2006, 16:40
We describe an event visualization package in use in ATLAS. The package is based
upon Open Inventor and its HEPVis extensions. It is integrated into ATLAS's analysis
framework, is modular and open to user extensions, co-displays the real detector
description/simulation (GeoModel/GEANT) geometry together with event data, and
renders in real time on regular laptop computers, using their...
Dr
Robert Bainbridge
(Imperial College London)
15/02/2006, 17:00
The CMS silicon strip tracker (SST), comprising a sensitive area of over 200m2 and
10M readout channels, is unprecedented in its size and complexity. The readout system
is based on a 128-channel analogue front-end ASIC, optical readout and an
off-detector VME board, using FPGA technology, that performs digitization, zero
suppression and data formatting before forwarding the detector data...
Abhishek Singh RANA
(University of California, San Diego, CA, USA)
15/02/2006, 17:00
We introduce gPLAZMA (grid-aware PLuggable AuthoriZation MAnagement) Architecture.
Our work is motivated by a need for fine-grained security (Role Based Access Control or
RBAC) in Storage Systems, and utilizes VOMS extended X.509 certificate specification
for defining extra attributes (FQANs), based on RFC 3281. Our implementation, the
gPLAZMA module for dCache, introduces Storage...
Dr
James Shank
(Boston University)
15/02/2006, 17:00
Distributed Event production and processing
oral presentation
We describe experiences and lessons learned from over a year of nearly continuous
running of managed production on Grid3 for the ATLAS data challenges. Two major
phases of production were performed: the first, large-scale GEANT-based Monte Carlo
simulations ("DC2") were followed by extensive production for the ATLAS "Rome"
physics workshop incorporating several new job types (digitization,...
Andrew Hanushevsky
(Stanford Linear Accelerator Center)
15/02/2006, 17:00
When the BaBar experiment transitioned to using the Root Framework, a new data
server architecture, xrootd, was developed to address event analysis needs. This
architecture was deployed at SLAC two years ago and since then has also been deployed
at other BaBar Tier 1 sites: IN2P3, INFN, FZK, and RAL; as well as other non-BaBar
sites: CERN (Alice), BNL (Star), and Cornell (CLEO). As part of...
Mrs
Tanya Levshina
(FERMILAB)
15/02/2006, 17:00
Grid middleware and e-Infrastructure operation
oral presentation
Currently, grid development projects require end users to be authenticated under the
auspices of a "recognized" organization, called a Virtual Organization (VO). A VO
establishes resource-usage agreements with grid resource providers. The VO is
responsible for authorizing its members and optionally assigning them to groups and
roles within the VO. This enables fine-grained authorization...
Adlene Hicheur
(Particle Physics)
15/02/2006, 17:12
The ATLAS Inner Detector is composed of a pixel detector (PIX), a silicon strip
detector (SCT) and a Transition radiation tracker (TRT). The goal of the algorithm
is to align the silicon based detectors (PIX and SCT) using a global fit of the
alignment constants. The total number of PIX and SCT silicon modules is about 35000,
leading to many challenges. The current presentation will focus...
15/02/2006, 17:20
Distributed Event production and processing
oral presentation
Within 5 years CMS expects to be managing many tens of petabytes of data in tens of
sites around the world. This represents more than an order of magnitude increase in data
volume over existing HEP experiments. This presentation will describe the underlying
concepts and architecture of the CMS model for distributed data management, including
connections to the new CMS Event Data Model. The...
Dr
Andrew McNab
(UNIVERSITY OF MANCHESTER)
15/02/2006, 17:20
Grid middleware and e-Infrastructure operation
oral presentation
GridSite has extended the industry-standard Apache webserver for use within Grid
projects, both by adding support for Grid security credentials such as GSI and VOMS,
and with the GridHTTP protocol for bulk file transfer via HTTP. We describe how
GridHTTP combines the security model of X.509/HTTPS with the performance of Apache,
in local and wide area bulk transfer applications. GridSite...
Pedro Arce
(Cent.de Investigac.Energeticas Medioambientales y Tecnol. (CIEMAT))
15/02/2006, 17:30
We describe a C++ software package that is able to reconstruct the positions, angular
orientations and internal optical parameters of any optical system described by a
seamless combination of many different types of optical objects. The program also
handles the propagation of uncertainties, which makes it very useful to simulate the
system in the design phase. The software is currently in use by...
Mr
Levente HAJDU
(BROOKHAVEN NATIONAL LABORATORY)
15/02/2006, 17:40
Grid middleware and e-Infrastructure operation
oral presentation
In the heterogeneous world of distributed computing, sites may offer anything from
the bare minimum Globus package to a plethora of advanced services. Moreover, sites
may have restrictions and limitations which need to be understood by resource brokers
and planners in order to take best advantage of resources and computing cycles.
Facing this reality and to take full advantage of any...
Dr
Jose Hernandez
(CIEMAT)
15/02/2006, 17:40
Distributed Event production and processing
oral presentation
(For the CMS Collaboration)
Since CHEP04 in Interlaken, the CMS experiment has developed a baseline Computing
Model and a Technical Design for the computing system it expects to need in the first
years of LHC running. Significant attention was focused on the development of a data
model with heavy streaming at the level of the RAW data based on trigger physics
selections. We expect that...
Dr
Ken Miura
(National Institute of Informatics, Japan)
16/02/2006, 09:00
Dr
Gang Chen
(IHEP, Beijing)
16/02/2006, 09:30
Dr
Piergiorgio Cerello
(INFN - TORINO)
16/02/2006, 11:00
Mathai Joseph
(Tata Research Development and Design Centre)
16/02/2006, 11:30
Dr
Rajiv Gavai
(TIFR)
16/02/2006, 12:00
Dr
Mikhail Kirsanov
(CERN)
16/02/2006, 14:00
The library of Monte Carlo generator tools maintained by LCG (GENSER) guarantees the
centralized software and physics support for the simulation of fundamental
interactions, and is currently widely adopted by the LHC collaborations.
While the activity in LCG Phase I was mostly concentrated on the
standardization, integration and maintenance of the existing Monte Carlo...
Dr
Joel Snow
(Langston University)
16/02/2006, 14:00
Grid middleware and e-Infrastructure operation
oral presentation
Periodically an experiment will reprocess data taken previously to take advantage of
advances in its reconstruction code and improved understanding of the detector.
Within a period of ~6 months the DØ experiment has reprocessed, on the grid, a large
fraction (0.5 fb-1) of the Run II data. This corresponds to some 1 billion events or
250TB of data and used raw data as input, requiring...
Dr
Steven Goldfarb
(High Energy Physics)
16/02/2006, 14:00
I report on the findings and recommendations of the LCG Project's Requirements and
Technical Assessment Group (RTAG 12) on Collaborative Tools for the LHC. A group
comprising representatives of the LHC collaborations, CERN IT and HR, and leading
experts in the field of collaborative tools evaluated the requirements of the LHC,
current practices, and expected future usage, in comparison...
Dr
Andrea Dotti
(Università and INFN Pisa)
16/02/2006, 14:00
ATLAS is one of the four experiments under construction along the Large Hadron
Collider ring at CERN. During the last few years much effort has gone into carrying
out test beam sessions that allowed the performance of ATLAS sub-detectors to be
assessed. During the data taking we have started the development of a histogram display
application designed to satisfy the needs of all ATLAS...
Jens Rehn
(CERN)
16/02/2006, 14:00
Distributed Event production and processing
oral presentation
Distributed data management at LHC scales is a staggering task, accompanied by
equally challenging practical management issues with storage systems and wide-area
networks. The CMS data transfer management system, PhEDEx, is designed to handle this
task with minimum operator effort, automating the workflows from large scale
distribution of HEP experiment datasets down to reliable and scalable...
Dr
Catalin Meirosu
(CERN and "Politehnica" Bucharest)
16/02/2006, 14:00
The Trigger and Data Acquisition System of the ATLAS experiment is currently being
installed at CERN. A significant amount of computing resources will be deployed in
the Online computing system, in the close proximity of the ATLAS detector. More than
3000 high-performance computers will be supported by networks composed of about 200
Ethernet switches. The architecture of the networks was...
Dr
Giacomo Bruno
(UCL, Louvain-la-Neuve, Belgium)
16/02/2006, 14:18
At the end of 2004 CMS decided to redesign the software framework used for simulation
and reconstruction. The new design includes a completely revisited event data model.
This new software will be used in the first months of 2006 for the so-called Magnet
Test Cosmic Challenge (MTCC). The MTCC is a slice test in which a small fraction of
all the CMS detection equipment is expected to be...
Mrs
Mona Aggarwal
(Imperial College London)
16/02/2006, 14:20
Grid middleware and e-Infrastructure operation
oral presentation
The LCG is an operational Grid currently running at 136 sites in 36 countries,
offering its users access to nearly 14,000 CPUs and approximately 8PB of storage [1].
Monitoring the state and performance of such a system is challenging but vital to
successful operation. In this context the primary motivation for this research is to
analyze LCG performance by doing a statistical analysis of...
Marco La Rosa
(University of Melbourne)
16/02/2006, 14:20
Distributed Event production and processing
oral presentation
In 2004 the Belle Experimental Collaboration reached a critical stage in its
computing requirements. Due to an increased rate of data collection, an extremely
large amount of simulated (Monte Carlo) data was required to correctly analyse and
understand the experimental data. The resulting simulation effort consumed more CPU
power than was readily available to the experiment at the host...
Dr
Mathias de Riese
(DESY)
16/02/2006, 14:20
DESY is one of the world's leading centers for research with particle accelerators and
synchrotron light. The computer center manages a data volume of the order of 1 PB and
houses around 1000 CPUs. During DESY's engagement as Tier-2 center for LHC
experiments these numbers will at least double. In view of these increasing
activities an improved fabric management infrastructure is being...
Dr
Andy Buckley
(Durham University),
Andy Buckley
(University of Cambridge)
16/02/2006, 14:20
Setting up the infrastructure to manage a software project can easily become more
work than writing the software itself. A variety of useful open-source tools, such as
Web-based viewers for version control systems, "wikis" for collaborative discussions
and bug-tracking systems are available but their use in high-energy physics, outside
large collaborations, is small.
We introduce the...
Dr
Ben Waugh
(University College London)
16/02/2006, 14:20
A common problem in particle physics is the requirement to reproduce comparisons
between data and theory when the theory is a (general purpose) Monte Carlo simulation
and the data are measurements of final state observables in high energy collisions.
The complexity of the experiments, the observables and the models all contribute to
making this a highly non-trivial task.
We describe an...
Mr
Michael DePhillips
(BROOKHAVEN NATIONAL LABORATORY)
16/02/2006, 14:20
The STAR experiment at Brookhaven National Laboratory's Relativistic Heavy-Ion
Collider (RHIC) has been accumulating hundreds of millions of events over its five
years of running so far. With a growing physics demand for statistics, STAR has more
than doubled the events taken each year and is planning to increase its capability by
an order of magnitude to reach billion-event capabilities...
Zdenek Maxa
(University College London)
16/02/2006, 14:36
We describe the design of Atlantis, an event visualisation program for the ATLAS
experiment at CERN, and the other supporting applications within the visualisation
project, mainly focusing on the technologies employed. The ATLAS visualisation
consists of several parts with Atlantis being the central application. The main
purpose of Atlantis is to help visually investigate and intuitively...
Lassi Tuura
(NORTHEASTERN UNIVERSITY, BOSTON, MA, USA)
16/02/2006, 14:40
Distributed Event production and processing
oral presentation
The most significant data challenge for CMS in 2005 has been the LCG service
challenge 3 (SC3). For CMS the main purpose of the challenge was to exercise a
realistic LHC startup scenario using the complete experiment system, in terms of
transferring and serving data, submitting jobs and collecting their data, employing
the next-generation world-wide LHC computing service.
A number of...
Mr
Piotr Golonka
(INP Cracow, CERN)
16/02/2006, 14:40
Solving the 'simulation=experiment' equation, which is the ultimate task of every HEP
experiment, becomes impossible without computer simulation techniques. HEP Monte
Carlo simulations, traditionally written as FORTRAN codes, became complex
computational projects: their rich physical content needs to be matched with the
software organization of the experimental collaborations to make them...
Mr
Philippe Galvez
(California Institute of Technology (CALTECH))
16/02/2006, 14:40
During this session we will describe and demonstrate the MonALISA (MONitoring Agents
using A Large Integrated Services Architecture) and the new enhanced VRVS (Virtual
Room Videoconferencing System) systems, and their integration to provide a next
generation of collaboration system called EVO. The melding of these two systems
creates a distributed intelligent system that provides an...
Piotr Golonka
(CERN, IT/CO-BE)
16/02/2006, 14:40
The control systems of the LHC experiments are built using the common commercial
product: PVSS II (from the ETM company). The JCOP Framework Project delivers a set
of common tools built on top of, or extending the functionality of, PVSS (such as
the control for widely used hardware, a Finite State Machine (FSM) toolkit, access
control management, cooling and ventilation application)...
Mr
Dirk Jahnke-Zumbusch
(DESY)
16/02/2006, 14:40
DESY operates some thousand computers based on different operating systems. On
servers and workstations, many centrally supported software systems are used in
addition to the operating systems themselves. Most of these operating and software
systems come with their own user and account management tools. Typically these tools
do not know of each other, which makes life harder for users, who have...
Dr
Dirk Pleiter
(DESY)
16/02/2006, 14:40
Grid middleware and e-Infrastructure operation
oral presentation
Numerical simulations of QCD formulated on the lattice (LQCD) require a huge amount
of computational resources. Grid technologies can help to improve exploitation of
these precious resources, e.g. by sharing the produced data on a global level. The
International Lattice DataGrid (ILDG) has been founded to define the required
standards needed for a grid infrastructure to be used for...
Prof.
Stephen Watts
(Brunel University)
16/02/2006, 14:54
Visualisation of data in particle physics currently involves event displays,
histograms and scatterplots. Since 1975 there has been an explosion of techniques for
data visualisation driven by highly interactive computer systems and ideas from
statistical graphics. This field has been driven by demands for data mining of large
databases and genomics. Two key areas are direct manipulation of...
Dr
Gene VAN BUREN
(BROOKHAVEN NATIONAL LABORATORY)
16/02/2006, 15:00
Samples of data acquired by the STAR Experiment at RHIC are examined at various
stages of processing for quality assurance (QA) purposes. As STAR continues to mature
and utilize new hardware and software, it remains imperative to the experiment to
work cohesively to ensure the quality of STAR data so that the collaboration may
continue to produce many new physics results in the efficient...
Go Iwai
(JST)
16/02/2006, 15:00
Grid middleware and e-Infrastructure operation
oral presentation
A new project for advanced simulation technology in radiotherapy was launched in
October 2003 with funding from JST (Japan Science and Technology Agency) in Japan.
The project's aim is to develop an ample set of simulation packages for radiotherapy
based on Geant4, in collaboration between Geant4 developers and medical users. They
need much more computing power and strong security for accurate and...
Dr
Suchandra Dutta
(Scuola Normale Superiore, INFN, Pisa), Dr
Vincenzo Chiochia
(University of Zurich)
16/02/2006, 15:12
The CMS silicon tracker, consisting of about 17,000 detector modules divided into
micro-strip and pixel sensors, will be the largest silicon tracker ever realized for
high energy physics experiments. The detector performance will be monitored using
applications based on the CMS Data Quality Monitoring (DQM) framework and running on
the High-Level Trigger Farm as well as local DAQ systems....
Wolfgang Von Rueden
(CERN)
16/02/2006, 17:00
Dr
Randall Sobie
(University of Victoria)
17/02/2006, 10:00
Lalitesh Kathragadda
(Google India)
17/02/2006, 10:15
Anirban Chakrabarti
(Infosys)
17/02/2006, 10:45
Grid computing technologies are transforming scientific and enterprise computing
in a big way. In verticals such as Life Sciences, Energy, and
Finance, there is tremendous pressure to reduce cost and enhance productivity. Grid
computing allows linking up the processors, storage, and/or memory of distributed
computers to make more efficient use of all available computing...
Dr
Beat Jost
(CERN)
17/02/2006, 11:15
Dr
Gabriele Cosmo
(CERN)
17/02/2006, 11:35
Dr
Lorenzo Moneta
(CERN)
17/02/2006, 11:55
Dr
Andreas Pfeiffer
(CERN)
17/02/2006, 14:30
Dr
Simon Lin
(Academia Sinica Grid Computing Centre)
17/02/2006, 14:50
Mr
Markus Schulz
(CERN)
17/02/2006, 15:10
Dr
Gavin McCance
(CERN)
17/02/2006, 15:30
Fons Rademakers
(CERN)
17/02/2006, 15:50
Prof.
A. S. Kolaskar
(Pune University),
Alberto Santoro
(Instituto de Fisica), Dr
D. P. S. Seth
(Telecom Regulatory Authority of India), Prof.
Harvey B Newman
(CALIFORNIA INSTITUTE OF TECHNOLOGY), Dr
S. Ramakrishnan
(CDAC, Pune),
Viatcheslav Ilin
(Moscow State University)
17/02/2006, 17:00
Markus Elsing
(CERN)
Event processing applications
oral presentation
Over the past 3 years the ATLAS Inner Detector reconstruction software has
undergone a major redesign based on the recommendations of an internal
review in spring 2003. The new track reconstruction infrastructure is characterized
by:
- a standardized ATLAS geometry model
- a common track reconstruction data model
- a suite of common extrapolation, track fitting, vertexing and pattern...
Dr
Marc Dobson
(CERN)
Online Computing
poster
The ATLAS TDAQ System will be composed of 3000 processors with a few processes per
processor. The Process Manager component of the TDAQ software is responsible for
launching and controlling these processes. The main requirements are for robustness,
availability and recoverability of the system, as well as the possibility of full
launch, control and monitoring of the TDAQ processes. This...
Dr
Francesco Delli Paoli
(INFN Padova)
Distributed Event production and processing
oral presentation
The improvements in the peak instantaneous luminosity of the Tevatron Collider will
give CDF up to 2 fb-1 of new data every year, requiring the collaboration to
proportionally increase the amount of Monte Carlo data it produces. This in turn is
forcing the CDF collaboration to move beyond the dedicated resources it uses today
and start exploiting Grid resources. Monte Carlo production...
Stephane Willocq
(University of Massachusetts)
Event processing applications
poster
The ATLAS detector, currently being installed at CERN, is designed to make precise
measurements of 14 TeV proton-proton collisions at the LHC, starting in 2007.
Arguably the clearest signatures for new physics, including the Higgs Boson and
supersymmetry, will involve the production of isolated final-state muons. The
identification and precise reconstruction of muons are performed using a...
Plenary
oral presentation
Dr
Maarten Ballintijn
(MIT)
Distributed Data Analysis
oral presentation
The Parallel ROOT Facility, PROOF, allows one to analyze and understand very large
data sets on an interactive time scale. It makes use of the inherent parallelism in
event data and implements an architecture that optimizes I/O and CPU utilization in
heterogeneous clusters with distributed storage. We will present our experiences in
using a very large PROOF cluster in production for the...
Stephane Willocq
(University of Massachusetts)
Event processing applications
poster
The Muon Spectrometer for the Atlas experiment at the LHC is designed to identify
muons with transverse momentum greater than 3 GeV/c and measure muon momenta with
high precision up to the highest momenta expected at the LHC. The 50-micron sagitta
resolution translates into a transverse momentum resolution of 10% for muon
transverse momenta of 1 TeV/c. Precise tracking is provided by...
W. E. Brown
(FERMILAB)
Software Components and Libraries
oral presentation
As an active participant in the international C++ standardization effort, Fermilab
has contributed significant expertise toward the analysis and design of a
random-number facility suitable for incorporation into the forthcoming update to the
C++ standard. A first version of this design has been promulgated as part of a
recently-approved Technical Report issued by the C++ Working Group of...
Mr
Olivier MARTIN
(CERN (on pre-retirement program until July 2006))
Computing Facilities and Networking
oral presentation
The ongoing evolution from packet-based networks to hybrid networks in research &
education (R&E) networking, or: what are the fundamental reasons behind the growing
gap between commercial and R&E networks?
As exemplified by the Internet2 HOPI initiative
(http://networks.internet2.edu/hopi/), the new GEANT2 backbone
(http://www.dante.net/server/show/nav.00100f00d) and projects such as...