CHEP04

from 27 September 2004 to 1 October 2004 (Europe/Zurich)
Interlaken, Switzerland
Displaying 423 contributions out of 423
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
We have developed a C++ software package, called "RecPack", which allows the reconstruction of dynamic trajectories in any experimental setup. The basic utility of the package is the fitting of trajectories in the presence of random and systematic perturbations to the system (multiple scattering, energy loss, inhomogeneous magnetic fields, etc.) via a Kalman filter fit. It also includes ... More
Presented by A. CERVERA VILLANUEVA on 30 Sep 2004 at 10:00
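As a rough illustration of the Kalman-filter fitting this abstract refers to (a generic one-dimensional sketch, not the RecPack API; all names are hypothetical), one predict/update cycle looks like:

```python
# Minimal 1-D Kalman filter step: illustrative only, not RecPack's interface.
def kalman_step(x, P, z, Q, R):
    """One predict/update cycle for a scalar state.

    x, P : prior state estimate and its variance
    z    : new measurement
    Q, R : process and measurement noise variances
    """
    # Predict: the state model here is trivial (x stays constant),
    # so only the uncertainty grows by the process noise Q.
    x_pred, P_pred = x, P + Q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Filtering a noisy constant signal pulls the estimate toward the truth
# while the variance shrinks with every measurement.
x, P = 0.0, 1.0
for z in [1.2, 0.8, 1.1, 0.95]:
    x, P = kalman_step(x, P, z, Q=0.01, R=0.5)
```

In a track fit the scalar state becomes a vector of track parameters and Q encodes multiple scattering and energy loss, but the predict/update structure is the same.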
Type: oral presentation Session: Plenary
Track: Plenary Sessions
"Where are your Wares" Computing in the broadest sense has a long history, and Babbage (1791-1871), Hollerith (1860-1929), Zuse (1910-1995), many other early pioneers, and the wartime code breakers all made important breakthroughs. CERN was founded as the first valve-based digital computers were coming onto the market. I will consider 50 years of Computing at CERN from the following v ... More
Presented by David WILLIAMS on 27 Sep 2004 at 09:30
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
64-bit commodity clusters and farms based on AMD technology have meanwhile been shown to achieve high computing power in many scientific applications. This report first gives a short introduction to the specialties of the amd64 architecture and the characteristics of two-way Opteron systems. Then results from measuring the performance and the behavior of such systems in various Particle ... More
Presented by S. WIESAND on 29 Sep 2004 at 15:00
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
The ALICE High Level Trigger (HLT) cluster is foreseen to consist of 400 to 500 dual-processor SMP PCs at the start-up of the experiment. The software running on these PCs will consist of components communicating via a defined interface, allowing flexible software configurations. During ALICE's operation the HLT has to be continuously active to avoid detector dead time. To ensure that the severa ... More
Presented by T.M. STEINBECK on 29 Sep 2004 at 18:10
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
We have developed and deployed a data grid for the processing of data from the Belle experiment, and for the production of simulated Belle data. The Belle Analysis Data Grid brings together compute and storage resources across five separate partners in Australia, and the Computing Research Centre at the KEK laboratory in Tsukuba, Japan. The data processing resources are general purpose, ... More
Presented by G R. MOLONEY on 30 Sep 2004 at 17:50
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
The paper describes a component-based framework for data stream processing that allows for configuration, tailoring, and run-time system reconfiguration. The system’s architecture is based on a pipes and filters pattern, where data is passed through routes between components. Components process data and add, substitute, and/or remove named data items from a data stream. They can also ma ... More
Presented by J. NOGIEC on 27 Sep 2004 at 14:00
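The pipes-and-filters pattern described in the abstract above can be sketched in a few lines; the filter names and data items below are invented for illustration and are not taken from the framework itself:

```python
# Sketch of a pipes-and-filters stream over named data items.
# Components add, substitute, or remove items; reconfiguration
# amounts to changing the filter list. Names are hypothetical.
def calibrate(item):
    # Filter: substitute the raw signal with its calibrated value
    # and remove the consumed "gain" item from the stream.
    item["signal"] = item["signal"] * item.pop("gain")
    return item

def tag_peaks(item):
    # Filter: add a derived named item to the stream.
    item["is_peak"] = item["signal"] > 5.0
    return item

def run_pipeline(stream, filters):
    # The "pipe": route each record through the configured filter chain.
    for item in stream:
        for f in filters:
            item = f(item)
        yield item

events = [{"signal": 2.0, "gain": 3.0}, {"signal": 1.0, "gain": 2.0}]
out = list(run_pipeline(events, [calibrate, tag_peaks]))
```

Run-time reconfiguration in such a design reduces to swapping or reordering entries of the filter list between records.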
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
The standard procedures for the extraction of gravitational wave signals from coalescing binaries in the output signal of an interferometric antenna may require computing power generally not available in a single computing centre or laboratory. A way to overcome this problem consists in using the computing power available in different places as a single geographically di ... More
Presented by S. PARDI on 27 Sep 2004 at 15:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
A vertex fit algorithm was developed based on the Gaussian-sum filter (GSF) and implemented in the framework of the CMS reconstruction program. While linear least-squares estimators are optimal in case all observation errors are Gaussian distributed, the GSF offers a better treatment of the non-Gaussian distribution of track parameter errors when these are modeled by Gaussian mixtures. ... More
Presented by Dr. T. SPEER on 30 Sep 2004 at 17:10
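A core ingredient of a Gaussian-sum filter is representing a non-Gaussian error distribution as a Gaussian mixture and periodically collapsing the mixture back to fewer components. A minimal moment-matching sketch (a generic illustration, not the CMS implementation):

```python
# Collapse a 1-D Gaussian mixture [(weight, mean, variance), ...] into a
# single Gaussian by moment matching -- the reduction step a Gaussian-sum
# filter uses to keep the number of components bounded. Illustrative only.
def collapse(components):
    wsum = sum(w for w, _, _ in components)
    mean = sum(w * m for w, m, _ in components) / wsum
    # Total variance = within-component variance + between-component spread.
    var = sum(w * (v + (m - mean) ** 2) for w, m, v in components) / wsum
    return mean, var

# A narrow core (70%) plus a wide tail (30%): the matched Gaussian is
# wider than the core alone, reflecting the non-Gaussian tail.
mean, var = collapse([(0.7, 0.0, 1.0), (0.3, 0.0, 9.0)])
```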
Type: oral presentation Session: Wide Area Networking
Track: Track 7 - Wide Area Networking
VRVS (Virtual Room Videoconferencing System) is a unique, globally scalable next-generation system for real-time collaboration by small workgroups, medium and large teams engaged in research, education and outreach. VRVS operates over an ensemble of national and international networks. Since it went into production service in early 1997, VRVS has become a standard part of the toolset used dai ... More
Presented by Mr. P. GALVEZ on 30 Sep 2004 at 17:50
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
Super-computers are increasingly being replaced by PC cluster systems. Future LHC experiments will also use large PC clusters. These clusters will consist of off-the-shelf PCs, which in general are not built to run in a PC farm. Configuring, monitoring and controlling such clusters requires a serious amount of time-consuming administrative effort. We propose a cheap and easy hardwar ... More
Presented by R. PANSE on 29 Sep 2004 at 15:00
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
The ATLAS Level-2 trigger provides software-based event selection after the initial Level-1 hardware trigger. For muon events, the selection is decomposed into a number of broad steps: first, the Muon Spectrometer data are processed to give physics quantities associated with the muon track (standalone feature extraction); then, other detector data are used to refine the extracted featu ... More
Presented by A. DI MATTIA on 27 Sep 2004 at 14:40
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
The LHCb Data Challenge 04 includes the simulation of over 200 million events using distributed computing resources on N sites, extending over 3 months. To achieve this goal a dedicated production grid (DIRAC) has been deployed. We will present the Job Monitoring and Accounting services developed to follow the status of the production and to evaluate the results at the e ... More
Presented by M. SANCHEZ-GARCIA on 30 Sep 2004 at 17:30
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
Analyzing Grid monitoring data requires the capability of dealing with multidimensional concepts intrinsic to Grid systems. The meaningful dimensions identified in recent works are the physical dimension referring to geographical location of resources, the Virtual Organization (VO) dimension, the time dimension and the monitoring metrics dimension. In this paper, we discuss the applicatio ... More
Presented by G. RUBINI on 29 Sep 2004 at 10:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
We present the design and performance analysis of a new event reconstruction chain deployed for analysis of STAR data acquired during the 2004 run and beyond. The creation of this new chain involved the elimination of obsolete FORTRAN components, and the development of equivalent or superior modules written in C++. The new reconstruction chain features a new and fast TPC cluster finder, a n ... More
Presented by C. PRUNEAU on 29 Sep 2004 at 18:10
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
Software Configuration Management (SCM) Patterns and the Continuous Integration method are recent and powerful techniques to enforce a common software engineering process across large, heterogeneous, rapidly changing development projects where a rapid release lifecycle is required. In particular the Continuous Integration method allows tracking and addressing problems in the software componen ... More
Presented by A. DI MEGLIO on 30 Sep 2004 at 10:00
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
The High Energy Physics Group at the University of Florida is involved in a variety of projects ranging from high energy experiments at hadron and electron-positron colliders to cutting-edge computer science experiments focused on grid computing. In support of these activities, members of the Florida group have developed and deployed a local computational facility which consists of several ... More
Presented by J. RODRIGUEZ on 27 Sep 2004 at 15:20
Type: oral presentation Session: Grid Security
Track: Track 4 - Distributed Computing Services
We present a work-in-progress system, called GUMS, which automates the processes of Grid user registration and management and supports policy-aware authorization as well. GUMS builds on existing VO management tools (LDAP VO, VOMS and VOMRS) with a local grid user management system and a site database which stores user credentials, accounting history and policies in XML format. We use VOMRS, ... More
Presented by G. CARCASSI on 29 Sep 2004 at 17:10
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
Building a state-of-the-art high energy physics detector like CMS requires strict interoperability and coherency in the design and construction of all sub-systems comprising the detector. This issue is especially critical for the many database components that are planned for storage of the various categories of data related to the construction, operation, and maintenance of the detector ... More
on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 1
Track: Track 6 - Computer Fabrics
We describe a database solution in a web application to centrally manage the configuration information of computer systems. It extends the modular cluster management tool Quattor with a user friendly web interface. System configurations managed by Quattor are described with the aid of PAN, a declarative language with a command line and a compiler interface. Using a relational schema, we a ... More
Presented by Z. TOTEVA on 28 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
The observation of high-energy gamma rays with ground-based air Cherenkov telescopes is one of the most exciting areas in modern astroparticle physics. At the end of 2003 the MAGIC telescope started operation. The low energy threshold for gamma rays together with different background sources leads to a considerable amount of data. The analysis will be done in different institutes sprea ... More
Presented by H. KORNMAYER on 27 Sep 2004 at 15:40
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
A grid system is a set of heterogeneous computational and storage resources, distributed on a large geographic scale, which belong to different administrative domains and serve several different scientific communities named Virtual Organizations (VOs). A virtual organization is a group of people or institutions which collaborate to achieve common objectives. Therefore such a system has to ... More
Presented by T. COVIELLO on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
The muCap experiment at the Paul Scherrer Institut (PSI) will measure the rate of muon capture on the proton to a precision of 1% by comparing the apparent lifetimes of positive and negative muons in hydrogen. This rate may be related to the induced pseudoscalar weak form factor of the proton. Superficially, the muCap apparatus looks something like a miniature model of a collider detect ... More
Presented by F. GRAY on 30 Sep 2004 at 10:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
A kinematic fit package was developed based on least-squares minimization with Lagrange multipliers and Kalman filter techniques, and implemented in the framework of the CMS reconstruction program. The package allows full decay chain reconstruction from the final state to the primary vertex according to the given decay model. The class framework allowing decay tree description on every reco ... More
on 30 Sep 2004 at 16:50
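A least-squares fit with Lagrange-multiplier constraints, as described above, has a closed form when the constraints are linear. A generic sketch of that step (not the CMS package's interface; the momentum-conservation example is a toy):

```python
# Least-squares fit with a linear constraint H x = d via Lagrange
# multipliers: minimize (x - x0)^T V^{-1} (x - x0) subject to H x = d.
import numpy as np

def constrained_fit(x0, V, H, d):
    # Stationarity of the Lagrangian gives, in closed form:
    #   lambda = (H V H^T)^{-1} (H x0 - d),   x = x0 - V H^T lambda
    S = H @ V @ H.T
    lam = np.linalg.solve(S, H @ x0 - d)
    return x0 - V @ H.T @ lam

# Toy example: force two measured momenta to sum to 10
# (a stand-in for a momentum-conservation constraint).
x0 = np.array([4.0, 5.0])
V = np.diag([1.0, 4.0])        # second measurement is less precise
H = np.array([[1.0, 1.0]])
d = np.array([10.0])
x = constrained_fit(x0, V, H, d)
```

The less precise measurement absorbs more of the correction, exactly as a weighted kinematic fit should behave; nonlinear constraints are handled by iterating this linearized step.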
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
Statistical methods play a significant role throughout the life-cycle of HEP experiments, being an essential component of physics analysis. We present a project in progress for the development of an object-oriented software toolkit for statistical data analysis. In particular, the Statistical Comparison component of the toolkit provides algorithms for the comparison of data distrib ... More
Presented by M.G. PIA on 30 Sep 2004 at 17:30
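As an illustration of the kind of algorithm such a Statistical Comparison component provides, here is a plain two-sample chi-square comparison of binned distributions (a generic sketch under simple Poisson-error assumptions, not the toolkit's API):

```python
# Two-sample chi-square statistic for equally sized binned samples,
# assuming independent Poisson bin contents. Illustrative sketch only.
def chi2_two_samples(counts1, counts2):
    chi2, ndf = 0.0, 0
    for n1, n2 in zip(counts1, counts2):
        if n1 + n2 == 0:
            continue  # empty bin pairs carry no information
        chi2 += (n1 - n2) ** 2 / (n1 + n2)
        ndf += 1
    return chi2, ndf

# Identical histograms give chi2 = 0; similar ones give a small value
# relative to the number of degrees of freedom.
chi2, ndf = chi2_two_samples([10, 20, 30], [12, 18, 33])
```

Converting the statistic to a p-value against the chi-square distribution with `ndf` degrees of freedom is the usual final step of such a comparison.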
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
AIDA, Abstract Interfaces for Data Analysis, is a set of abstract interfaces for data analysis components: Histograms, Ntuples, Functions, Fitter, Plotter and other typical analysis categories. The interfaces are currently defined in Java, C++ and Python and implementations exist in the form of libraries and tools using C++ (Anaphe/Lizard, OpenScientist), Java (Java Analysis Studio) and Pytho ... More
Presented by Victor SERBO on 27 Sep 2004 at 15:20
Type: oral presentation Session: Wide Area Networking
Track: Track 7 - Wide Area Networking
The next-generation high energy physics experiments planned at the CERN Large Hadron Collider are so demanding in terms of both computing power and mass storage that data and CPUs cannot be concentrated in a single site and will be distributed on a computational Grid according to a multi-tier model. LHC experiments involve several thousand people from a few hundred institutes spre ... More
Presented by G. LO RE on 30 Sep 2004 at 14:20
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
AMS-02 Computing and Ground Data Handling. V. Choutko (MIT, Cambridge), A. Klimentov (MIT, Cambridge) and M. Pohl (Geneva University). AMS (Alpha Magnetic Spectrometer) is an experiment to search in space for dark matter and antimatter on the International Space Station (ISS). The AMS detector had a precursor flight in 1998 (STS-91, June 2-12, 1998). ... More
Presented by A. KLIMENTOV on 29 Sep 2004 at 14:40
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
The ARDA project was started in April 2004 to support the four LHC experiments (ALICE, ATLAS, CMS and LHCb) in the implementation of individual production and analysis environments based on the EGEE middleware. The main goal of the project is to allow a fast feedback between the experiment and the middleware development teams via the construction and the usage of end-to-end prototypes a ... More
Presented by THE ARDA TEAM on 29 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
We describe the design and operational experience of the ATLAS production system as implemented for execution on Grid3 resources. The execution environment consisted of a number of grid-based tools: Pacman for installation of VDT-based Grid3 services and ATLAS software releases, the Capone execution service built from the Chimera/Pegasus virtual data system for directed acyclic graph (DAG ... More
Presented by M. MAMBELLI on 29 Sep 2004 at 16:50
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
In addition to the well-known challenges of computing and data handling at LHC scales, LHC experiments have also approached the scalability limit of manual management and control of the steering parameters ("primary numbers") provided to their software systems. The laborious task of detector description benefits from the implementation of a scalable relational database approach. We have ... More
on 30 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
The ATLAS distributed analysis (ADA) system is described. The ATLAS experiment has more than 2000 physicists from 150 institutions in 34 countries. Users, data and processing are distributed over these sites. ADA makes use of a collection of high-level web services whose interfaces are expressed in terms of AJDL (abstract job definition language), which includes descriptions of datasets, tra ... More
Presented by D. ADAMS on 30 Sep 2004 at 15:20
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
The ATLAS Metadata Interface (AMI) project provides a set of generic tools for managing database applications. AMI has a three-tier architecture with a core that supports a connection to any RDBMS using JDBC and SQL. The middle layer assumes that the databases have an AMI compliant self-describing structure. It provides a generic web interface and a generic command line interface. The top ... More
Presented by S. ALBRAND on 29 Sep 2004 at 16:50
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
In order to validate the Offline Computing Model and the complete software suite, ATLAS is running a series of Data Challenges (DC). The main goals of DC1 (July 2002 to April 2003) were the preparation and the deployment of the software required for the production of large event samples, and the production of those samples as a worldwide distributed activity. DC2 (May 2004 until October ... More
Presented by L. GOOSSENS on 27 Sep 2004 at 14:40
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
The state of the art in fitting particle tracks to a common vertex is the Kalman technique. This least-squares (LS) estimator is known to be ideal in the case of perfect assignment of tracks to vertices and perfectly known Gaussian errors. Experimental data and detailed simulations always depart from this perfect model. The imperfections can be expected to be larger in high luminosit ... More
Presented by W. WALTENBERGER on 30 Sep 2004 at 10:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
A version of the Bertini cascade model for hadronic interactions is part of the Geant4 toolkit, and may be used to simulate pion-, proton-, and neutron-induced reactions in nuclei. It is typically valid for incident energies of 10 GeV and below, making it especially useful for the simulation of hadronic calorimeters. In order to generate the intra-nuclear cascade, the code depends on ta ... More
on 27 Sep 2004 at 17:30
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
The size and complexity of the present HEP experiments represent an enormous effort in data persistency. These efforts imply a tremendous investment in the database field, not only for the event data but also for the data needed to qualify it - the Conditions Data. In the present document we describe the strategy for addressing the Conditions Data problem in the ATLAS e ... More
Presented by A. AMORIM on 29 Sep 2004 at 17:30
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
The AliEn system, an implementation of the Grid paradigm developed by the ALICE Offline Project, is currently being used to produce and analyse Monte Carlo data at over 30 sites on four continents. The AliEn Web Portal is built around Open Source components with a backend based on Grid Services and compliant with the OGSA model. An easy and intuitive presentation layer gives the opportuni ... More
Presented by P E. TISSOT-DAGUETTE on 30 Sep 2004 at 14:20
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
The BaBar experiment has accumulated many terabytes of data on particle physics reactions, accessed by a community of hundreds of users. Typical analysis tasks are C++ programs, individually written by the user, using shared templates and libraries. The resources have outgrown a single platform and a distributed computing model is needed. The grid provides the natural toolset. Howeve ... More
Presented by M. JONES on 29 Sep 2004 at 10:00
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
This article introduces an embedded Linux system based on VME-series PowerPC boards, as well as the basic method for establishing the system. The goal of the system is to build a test system for VMEbus devices. It can also be used to set up data acquisition and control systems. Two types of compiler are provided by the development system, according to the features of the system and the PowerPC. At ... More
Presented by M. YE on 27 Sep 2004 at 15:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
The CMS detector simulation package, OSCAR, is based on the Geant4 simulation toolkit and the CMS object-oriented framework for simulation and reconstruction. Geant4 provides a rich set of physics processes describing in detail electro-magnetic and hadronic interactions. It also provides the tools for the implementation of the full CMS detector geometry and the interfaces required for recoveri ... More
Presented by M. STAVRIANAKOU on 29 Sep 2004 at 14:20
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
A computing grid is a large-scale, geographically distributed and heterogeneous system that provides a common platform for running different grid-enabled applications. As each application has different characteristics and requirements, it is a difficult task to develop a scheduling strategy able to achieve optimal performance, because application-specific and dynamic system status have to ... More
Presented by T. COVIELLO on 29 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
The SAMGrid team has recently refactored its test harness suite for greater flexibility and easier configuration. This makes possible more interesting applications of the test harness, for component tests, integration tests, and stress tests. We report on the architecture of the test harness and its recent application to stress tests of a new analysis cluster at Fermilab, to explore the ... More
Presented by A. LYON on 27 Sep 2004 at 17:50
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
The FLUKA Monte Carlo transport code is being used for different applications in High Energy, Cosmic Ray and Accelerator Physics. Here we review some of the ongoing projects which are based on this simulation tool. In particular, as far as accelerator physics is concerned, we wish to summarize the work in progress for the LHC and the CNGS project. From the point of view of experimental acti ... More
Presented by G. BATTISTONI on 27 Sep 2004 at 14:40
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
In this paper we will discuss how Aspect-Oriented Programming (AOP) can be used to implement and extend the functionality of HEP architectures in areas such as performance monitoring, constraint checking, debugging and memory management. AOP is the latest evolution in the line of technology for functional decomposition which includes Structured Programming (SP) and Object-Oriented Programming ... More
Presented by C. TULL on 30 Sep 2004 at 14:40
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
Aspect-Oriented Programming (AOP) is a new paradigm promising to allow further modularization of large software frameworks, like those developed in HEP. Such frameworks often manifest several orthogonal axes of contracts (Crosscutting Concerns - CC) leading to complex mutual dependencies. Currently used programming languages and development methodologies do not make it easy to identify and encaps ... More
Presented by J. HRIVNAC on 30 Sep 2004 at 14:20
Type: oral presentation Session: Grid Security
Track: Track 4 - Distributed Computing Services
The new authentication and security services available in the ROOT framework for client/server applications will be described. The authentication scheme has been designed to make the system complete and flexible, fitting the needs of the coming clusters and facilities. Three authentication methods have been made available: Globus/GSI, for GRID-awareness; SSH, to allow using ... More
Presented by G. GANIS on 29 Sep 2004 at 16:50
Type: poster Session: Poster Session 1
Track: Track 7 - Wide Area Networking
In a large campus network such as Fermilab's, with ten thousand nodes, scanning initiated from either outside or within the campus network raises security concerns, may have a very serious impact on network performance, and can even disrupt the normal operation of many services. In this paper we introduce a system for detecting and automatically blocking excessive traffic of different natures: scanning, DoS ... More
Presented by A. BOBYSHEV on 28 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
Software testing is a difficult, time-consuming process that requires technical sophistication and proper planning. This is especially true for the large-scale software projects of High Energy Physics, where constant modifications and enhancements are typical. Automated nightly testing is an important component of NICOS, the NIghtly COntrol System, which manages the multi-platform nightly bui ... More
Presented by A. UNDRUS on 30 Sep 2004 at 10:00
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
The photo injector test facility at DESY Zeuthen (PITZ) was built to develop, operate and optimize photo injectors for future free electron lasers and linear colliders. In PITZ we use a DAQ system that stores data as a collection of ROOT files, forming our database for offline analysis. Consequently, the offline analysis will be performed by a ROOT application, written at least partly by ... More
Presented by G. ASOVA on 30 Sep 2004 at 16:30
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
As any software project grows in both its collaborative and mixed codebase nature, current tools like CVS and Maven start to sag under the pressure of complex sub-project dependencies and versioning. A developer-wide failure in mastery of these tools will inevitably lead to an unrecoverable instability of a project. Even keeping a single software project stable in a large collaborative environ ... More
Presented by M. STOUFER on 30 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
The BaBar experiment has migrated its event store from an Objectivity-based system to a system using ROOT files, and along with this has developed a new bookkeeping design. This bookkeeping now combines data production, quality control, event store inventory, distribution of BaBar data to sites and user analysis in one central place, and is based on collections of data stored as ROOT- ... More
Presented by D. SMITH on 30 Sep 2004 at 17:10
Type: oral presentation Session: Plenary
Track: Plenary Sessions
The BaBar experiment at SLAC studies B-physics at the Upsilon(4S) resonance using the high-luminosity e+e- collider PEP-II at the Stanford Linear Accelerator Center (SLAC). Taking, processing and analyzing the very large data samples is a significant computing challenge. This presentation will describe the entire BaBar computing chain and illustrate the solutions chosen as well as the ... More
Presented by P. ELMER on 27 Sep 2004 at 11:30
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
For the BaBar Computing Group. The analysis of the BaBar experiment requires simulated data amounting to many times the measured data. This requirement has resulted in one of the largest distributed computing projects ever completed. The latest round of simulation for BaBar started in early 2003 and completed in early 2004, encompassing over 1 million jobs and over 2.2 billi ... More
Presented by D. SMITH on 29 Sep 2004 at 14:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
One of the main features of the ALICE detector at LHC is the capability to identify particles in a very broad momentum range from 0.1 GeV/c up to 10 GeV/c. This can be achieved only by combining, within a common setup, several detecting systems that are efficient in some narrower and complementary momentum sub- ranges. The situation is further complicated by the amount of data to be processed ... More
Presented by I. BELIKOV on 30 Sep 2004 at 17:50
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
As ATLAS begins validation of its computing model in 2004, requirements imposed upon ATLAS data management software move well beyond simple persistence, and beyond the "read a file, write a file" operational model that has sufficed for most simulation production. New functionality is required to support the ATLAS Tier 0 model, and to support deployment in a globally distributed environment i ... More
Presented by D. MALON on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 1
Track: Track 1 - Online Computing
With the improvements in CPU and disk speed over the past years, we were able to exceed the original design data logging rate of 40MB/s by a factor of 3 already for the Run 3 in 2002. For the Run 4 in 2003, we increased the raw disk logging capacity further to about 400MB/s. Another major improvement was the implementation of compressed data logging. The PHENIX raw data, after application ... More
Presented by Martin PURSCHKE on 28 Sep 2004 at 10:00
Type: oral presentation Session: Wide Area Networking
Track: Track 7 - Wide Area Networking
In this paper we describe the current state of the art in equipment, software and methods for transferring large scientific datasets at high speed around the globe. We first present a short introductory history of the use of networking in HEP, some details on the evolution, current status and plans for the Caltech/CERN/DataTAG transAtlantic link, and a description of the topology and capab ... More
Presented by Dr. S. RAVOT on 30 Sep 2004 at 14:40
Type: oral presentation Session: Wide Area Networking
Track: Track 7 - Wide Area Networking
How do we get High Throughput data transport to real users? The MB-NG project is a major collaboration which brings together expertise from users, industry, equipment providers and leading edge e-science application developers. Major successes in the areas of Quality of Service (QoS) and managed bandwidth have provided a leading edge U.K. Diffserv enabled network running at 2.5 Gbit/s. One ... More
Presented by R. HUGHES-JONES on 30 Sep 2004 at 17:10
Type: oral presentation Session: Grid Security
Track: Track 4 - Distributed Computing Services
As an underpinning of AFS and Windows 2000, and as a formally proven security protocol in its own right, Kerberos is ubiquitous among HEP sites. Fermilab and users from other sites have taken advantage of this and built a diversity of distributed applications over Kerberos v5. We present several projects in which this security infrastructure has been leveraged to meet the requirements of far- ... More
Presented by M. CRAWFORD on 29 Sep 2004 at 16:30
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
In the last few years grid software (middleware) has become available from various sources. However, there are no standards yet which allow for an easy integration of different services. Moreover, middleware was produced by different projects whose main goal was developing new functionality rather than production-quality software. In the context of the LHC Computing Grid project (L ... More
Presented by L. PONCET on 29 Sep 2004 at 10:00
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
The CERN Advanced STORage (CASTOR) system is a scalable, high-throughput hierarchical storage system developed at CERN. CASTOR was first deployed for full production use in 2001 and has expanded to now manage around two petabytes and almost 20 million files. CASTOR is a modular system, providing a distributed disk cache, a stager, and a back-end tape archive, accessible via a global logical na ... More
Presented by J-D. DURAND on 29 Sep 2004 at 16:30
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
A new, completely redesigned Condition/DB was deployed in BaBar in October 2002. It replaced the old database software used through the first three and a half years of data taking. The new software addresses the performance and scalability limitations of the original database. However, this major redesign brought in a new model of the metadata, brand new technology- and implementation-independent ... More
Presented by I. GAPONENKO on 29 Sep 2004 at 17:50
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
CERN has about 5500 Desktop PCs. These computers offer a large pool of resources that can be used for physics calculations outside office hours. The paper describes a project to make use of the spare CPU cycles of these PCs for LHC tracking studies. The client server application is implemented as a lightweight, modular screensaver and a Web Application containing the physics job repository. ... More
Presented by A. WAGNER on 29 Sep 2004 at 10:00
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
For the last 18 months CERN has collaborated closely with several industrial partners to evaluate, through the opencluster project, technology that may (and hopefully will) play a strong role in future computing solutions, primarily for LHC but possibly also for other HEP computing environments. Unlike conventional field testing, where solutions from industry are evaluated rather independen ... More
Presented by S. JARP on 29 Sep 2004 at 15:20
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
Quark-gluon strings are usually fragmented on the light cone into hadrons (PYTHIA, JETSET) or into small hadronic clusters which decay into hadrons (HERWIG). In both cases the transverse momentum distribution is parameterized as an unknown function. In CHIPS the colliding hadrons stretch Pomeron ladders to each other and, when the Pomeron ladders meet in rapidity space, they create Quasmons (ha ... More
Presented by M. KOSOV on 27 Sep 2004 at 17:50
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
Supporting multiple large collaborations on shared compute farms has typically resulted in divergent requirements from the users on the configuration of these farms. As the frameworks used by these collaborations are adapted to use Grids, this issue will likely have a significant impact on the effectiveness of Grids. To address these issues, a method was developed at Lawrence Berkeley Nation ... More
Presented by S. CANON on 27 Sep 2004 at 15:40
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
CLHEP is a set of HEP-specific foundation and utility classes such as random number generators, physics vectors, and particle data tables. Although CLHEP has traditionally been distributed as one large library, the user community has long wanted to build and use CLHEP packages separately. With the release of CLHEP 1.9, CLHEP has been reorganized and enhanced to enable building and using CL ... More
Presented by Andreas PFEIFFER on 30 Sep 2004 at 16:50
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
CMD-3 is the general-purpose cryogenic magnetic detector for the VEPP-2000 electron-positron collider, which is being commissioned at the Budker Institute of Nuclear Physics (BINP, Novosibirsk, Russia). The main aspects of the physics program of the experiment are the study of known vector mesons and the search for new ones, the study of the ppbar and nnbar production cross sections in the vicinity of the threshold, and sea ... More
Presented by A. ZAYTSEV on 30 Sep 2004 at 10:00
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
The CMS Detector Description Database (DDD) consists of a C++ API and an XML-based detector description language. DDD is used by the CMS simulation (OSCAR), reconstruction (ORCA), and visualization (IGUANA), as well as by test beam software that relies on those systems. The DDD is a sub-system within the COBRA framework of the CMS Core Software. Management of the XML is currently done using a sepa ... More
Presented by M. CASE on 29 Sep 2004 at 17:10
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
For data analysis in an international collaboration it is important to have an efficient procedure to distribute, install and update the centrally maintained software. This is even more true when not only local but also grid-accessible resources are to be exploited. A practical solution will be presented that has been successfully employed for CMS software installations on systems ranging f ... More
Presented by K. RABBERTZ on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
This document will review the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this subdetector (more than 50 million channels organized in 17000 modules, each of these being a complete detector), the standard CMS visualisation tools (IGUANA and IGUANACMS) that provide basic 3D capabilities and integrati ... More
Presented by M.S. MENNEA on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
Carrot is a scripting module for the Apache webserver. Based on the ROOT framework, it has a number of powerful features, including the ability to embed C++ code into HTML pages, run interpreted and compiled C++ macros, send and execute C++ code on remote web servers, browse and analyse the remote data located in ROOT files with the web browser, access and manipulate databases, and gener ... More
Presented by Mr. V. ONUCHIN on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
A description of a Condor-based, Grid-aware batch software system configured to function asynchronously with a mass storage system is presented. The software is currently used in a large Linux Farm (2700+ processors) at the RHIC and ATLAS Tier 1 Computing Facility at Brookhaven Lab. Design, scalability, reliability, features and support issues with a complex Condor-based batch system ... More
Presented by T. WLODEK on 29 Sep 2004 at 10:00
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
After the successful implementation and deployment of the dCache system over the last years, one of the additionally required services, the namespace service, is faced with additional and completely new requirements. Most of these are caused by the scaling of the system, the integration with Grid services and the need for redundant (high-availability) configurations. The existing system, having only an NFSv2 ... More
Presented by T. MKRTCHYAN on 27 Sep 2004 at 17:50
Type: poster Session: Poster Session 1
Track: Track 6 - Computer Fabrics
There are two cluster architecture approaches used at CERN to provide central CVS services. The first one (http://cern.ch/cvs) depends on AFS for central storage of repositories and offers automatic load-balancing and fail-over mechanisms. The second one (http://cern.ch/lcgcvs) is an N + 1 cluster based on local file systems, using data replication and not relying on AFS. It does not prov ... More
Presented by M. GUIJARRO on 28 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
A Toolkit for Statistical Data Analysis has been recently released. Thanks to this novel software system, for the first time an ample set of sophisticated algorithms for the comparison of data distributions (goodness of fit tests) is made available to the High Energy Physics community in an open source product. The statistical algorithms implemented belong to two sets, for the comparison ... More
Presented by M.G. PIA on 30 Sep 2004 at 10:00
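The toolkit above offers goodness-of-fit tests for comparing data distributions; its actual implementation is not shown here, but the flavor of such a comparison can be illustrated with a minimal two-sample Kolmogorov-Smirnov statistic (a hypothetical pure-Python sketch, not the toolkit's code):

```python
def ks_two_sample(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of samples a and b (0 = identical shapes,
    1 = fully separated)."""
    a, b = sorted(a), sorted(b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        x = min(a[i], b[j])
        # advance past all values <= x in each sample
        while i < na and a[i] <= x:
            i += 1
        while j < nb and b[j] <= x:
            j += 1
        # compare the two empirical CDFs at x
        d = max(d, abs(i / na - j / nb))
    return d
```

A full toolkit would also convert this statistic into a p-value; the sketch stops at the distance itself.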
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
We present a composite framework which exploits the advantages of the CMS data model and uses a novel approach for building CMS simulation, reconstruction, visualisation and future analysis applications. The framework exploits LCG SEAL and CMS COBRA plug-ins and extends the COBRA framework to pass communications between the GUI and event threads, using SEAL callbacks to navigate through ... More
Presented by I. OSBORNE on 27 Sep 2004 at 17:30
Type: oral presentation Session: Plenary
Track: Plenary Sessions
The LHC experiments are undertaking various data challenges in the run-up to the completion of their computing models and the submission of the experiments' and the LHC Computing Grid (LCG) Technical Design Reports (TDRs) in 2005. In this talk we summarize the current LHC computing models, identifying their similarities and differences. We summarize the results and status of ... More
Presented by David STICKLAND on 30 Sep 2004 at 09:30
Type: oral presentation Session: Plenary
Track: Plenary Sessions
The Belle experiment operates at the KEKB accelerator, a high-luminosity asymmetric-energy e+ e- machine. KEKB has achieved the world's highest luminosity of 1.39 × 10^34 cm^-2 s^-1. Belle accumulates more than 1 million B Bbar pairs in one good day. This corresponds to about 1.2 TB of raw data per day. The amount of raw and processed data accumulated so far exceeds 1.4 PB. Belle's ... More
Presented by N. KATAYAMA on 27 Sep 2004 at 11:00
Type: oral presentation Session: Plenary
Track: Plenary Sessions
The concepts and technologies applied in data acquisition systems have changed dramatically over the past 15 years. Generic DAQ components and standards such as CAMAC and VME have largely been replaced by dedicated FPGA and ASIC boards, and dedicated real-time operating systems like OS-9 or VxWorks have given way to Linux-based trigger processor and event building farms. We have also seen a ... More
Presented by M. PURSCHKE on 27 Sep 2004 at 12:00
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
Conditions Databases are beginning to be widely used in the ATLAS experiment. Conditions data are time-varying data describing the state of the detector used to reconstruct the event data. This includes all sorts of slowly evolving data like detector alignment, calibration, monitoring and data from Detector Control System (DCS). In this paper we'll present the interfaces between the Condit ... More
Presented by D. KLOSE on 30 Sep 2004 at 10:00
Type: oral presentation Session: Plenary
Track: Plenary Sessions
Presented by Wolfgang VON RUEDEN on 1 Oct 2004 at 12:25
Type: oral presentation Session: Plenary
Track: Plenary Sessions
Presented by Lothar BAUERDICK on 1 Oct 2004 at 11:55
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
The unprecedented size and complexity of the ATLAS TDAQ system require a comprehensive and flexible control system. Its role ranges from the so-called run control, e.g. starting and stopping the data taking, to error handling and fault tolerance. It also includes initialisation and verification of the overall system. Following the traditional approach, a hierarchical system of customizable ... More
Presented by D. LIKO on 29 Sep 2004 at 17:10
Type: poster Session: Poster Session 1
Track: Track 1 - Online Computing
The PHENIX DAQ system is managed by a control system responsible for the configuration and monitoring of the PHENIX detector hardware and readout software. At its core, the control system, called Runcontrol, is a set of processes that manages the various components by way of a distributed architecture using CORBA; it manages virtually all ... More
Presented by Martin PURSCHKE on 28 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
Building on several years of success with the MCRunjob projects at DZero and CMS, the Fermilab-sponsored joint Runjob project aims to provide a workflow description language common to three experiments: DZero, CMS and CDF. This project will encapsulate the remote processing experiences of the three experiments in an extensible software architecture using web services as a communication ... More
Presented by P. LOVE on 29 Sep 2004 at 10:00
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
This paper describes the evolution of fabric management at CERN's T0/T1 Computing Center, from the selection and adoption of prototypes produced by the European DataGrid (EDG) project[1] to enhancements made to them. In the last year of the EDG project, developers and service managers have been working to understand and solve operational and scalability issues. CERN has adopted and stren ... More
Presented by G. CANCIO on 27 Sep 2004 at 14:00
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
The D0 experiment at the Tevatron is collecting some 100 terabytes of data each year and has a very high need of computing resources for the various parts of the physics program. D0 meets these demands through a worldwide computing effort, increasingly based on Grid technologies. Distributed resources are used for D0 MC production and data reprocessing of 1 billion events, requiring 250 TB to be transp ... More
Presented by T. HARENBERG on 29 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
DIRAC is the LHCb distributed computing grid infrastructure for MC production and analysis. Its architecture is based on a set of distributed collaborating services. The service decomposition broadly follows the ARDA project proposal, allowing for the possibility of interchanging the EGEE/ARDA and DIRAC components in the future. Some components developed outside the DIRAC project are alread ... More
Presented by A. TSAREGORODTSEV on 30 Sep 2004 at 18:10
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
The DIRAC system developed for the CERN LHCb experiment is a grid infrastructure for managing generic simulation and analysis jobs. It enables jobs to be distributed across a variety of computing resources, such as PBS, LSF, BQS, Condor, Globus, LCG, and individual workstations. A key challenge of distributed service architectures is that there is no single point of control over all c ... More
Presented by I. STOKES-REES on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
The Workload Management System (WMS) is the core component of the DIRAC distributed MC production and analysis grid of the LHCb experiment. It uses a central Task database which is accessed via a set of central Services with Agents running on each of the LHCb sites. DIRAC uses a 'pull' paradigm where Agents request tasks whenever they detect their local resources are available. The collabora ... More
Presented by V. GARONNE on 29 Sep 2004 at 10:00
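The "pull" paradigm in the abstract above can be illustrated with a small sketch (hypothetical Python, not DIRAC code): a central task queue hands out work only when a site agent asks for it, so an agent never receives more tasks than its free local slots.

```python
import queue

class TaskQueue:
    """Stand-in for the central Task database: holds pending tasks."""
    def __init__(self, tasks):
        self._q = queue.Queue()
        for t in tasks:
            self._q.put(t)

    def request_task(self):
        """Called by an agent; returns a task, or None if none are pending."""
        try:
            return self._q.get_nowait()
        except queue.Empty:
            return None

class Agent:
    """Site agent: pulls work only while local resources are free."""
    def __init__(self, name, free_slots):
        self.name, self.free_slots, self.done = name, free_slots, []

    def poll(self, central):
        while self.free_slots > 0:
            task = central.request_task()
            if task is None:
                break                  # nothing left to pull
            self.free_slots -= 1
            self.done.append(task)     # stand-in for actually running the job

central = TaskQueue(["sim-1", "sim-2", "sim-3"])
a1, a2 = Agent("siteA", free_slots=2), Agent("siteB", free_slots=2)
a1.poll(central)
a2.poll(central)
```

The key property of the pull model shows up in the result: siteB is left with a free slot because the queue ran dry, rather than being handed work it cannot run.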
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
The DZERO collider experiment logs much of its data acquisition monitoring information in long-term storage. This information is most frequently used to understand shift history and efficiency. Approximately two kilobytes of information are stored every 15 seconds. We describe this system and the web interface provided. The current system is distributed, running on Linux for the back end and ... More
Presented by G. WATTS on 29 Sep 2004 at 17:30
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
Data management is one of the cornerstones in the distributed production computing environment that the EGEE project aims to provide for a European e-Science infrastructure. We have designed a set of services based on previous experience in other Grid projects, trying to address the requirements of our user communities. In this paper we summarize the most fundamental requirements and cons ... More
Presented by K. NIENARTOWICZ on 27 Sep 2004 at 17:50
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
The D0 experiment faces many challenges enabling access to large datasets for physicists on 4 continents. The strategy of solving these problems on worldwide distributed computing clusters is followed. Since the beginning of Tevatron Run II (March 2001), all Monte Carlo simulations have been produced outside of Fermilab at remote systems. For analyses, a system of regional analysis c ... More
Presented by D. WICKE on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
In common grid installations, the services responsible for storing big data chunks, replicating those data and indexing their availability are usually completely decoupled. The task of synchronizing the data is left to either user-level tools or separate services (like spiders), which are themselves subject to failure and usually cannot perform properly if one of the underlying services fails as well. The Nordu ... More
Presented by O. SMIRNOVA on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
The Run II experiments at Fermilab, CDF and D0, have extensive database needs covering many areas of their online and offline operations. Delivery of the data to users and processing farms based around the world has represented major challenges to both experiments. The range of applications employing databases includes data management, calibration (conditions), trigger information, run configu ... More
Presented by L. LUEKING on 29 Sep 2004 at 10:00
Type: oral presentation Session: BOF : Semantic Web applications in HEP
During a recent visit to SLAC, Tim Berners-Lee challenged the High Energy Physics community to identify and implement HEP resources to which Semantic Web technologies could be applied. This challenge comes at a time when a number of other scientific disciplines (for example, bioinformatics and chemistry) have taken a strong initiative in making information resources compatible with Sema ... More
Presented by B. WHITE on 30 Sep 2004 at 14:00
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
LCG2 is a large-scale production grid formed by more than 40 worldwide distributed sites. The aggregated number of CPUs exceeds 3000, and several MSS systems are integrated in the system. Almost all sites form an independent administrative domain. On most of the larger sites the local computing resources have been integrated into the grid. The system has been used for large-scale production b ... More
Presented by M. SCHULZ on 27 Sep 2004 at 16:50
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
CDF is an experiment at the Tevatron at Fermilab. One dominating factor of the experiment's computing model is the high volume of raw, reconstructed and generated data. The distributed data handling services within SAM move these data to physics analysis applications. The SAM system was already in use at the D-Zero experiment. Due to differences in the computing models of the two experiments, s ... More
Presented by S. STONJEK on 29 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
GridICE is a monitoring service for the Grid: it measures significant Grid-related resource parameters in order to analyze the usage, behavior and performance of the Grid and/or to detect and notify about fault situations, contract violations, and user-defined events. In its first implementation, the notification service relies on a simple model based on a pre-defined set of events. The growing int ... More
Presented by N. DE BORTOLI on 30 Sep 2004 at 16:50
Type: poster Session: Poster Session 1
Track: Track 6 - Computer Fabrics
Email is an essential part of daily work. The FNAL gateways process in excess of 700,000 messages per week. Among those messages are many containing viruses and unwanted spam. This paper outlines the FNAL email system configuration. We will discuss how we have configured our systems to provide optimum uptime as well as protection against viruses, spam and unauthorized users.
Presented by J. SCHMIDT on 28 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
A proposal is made for the design and implementation of a detector-independent vertex reconstruction toolkit and interface to generic objects (VERTIGO). The first stage aims at re-using existing state-of-the-art algorithms for geometric vertex finding and fitting by both linear (Kalman filter) and robust estimation methods. Prototype candidates for the latter are a wide range of adaptive f ... More
Presented by Mr. W. WALTENBERGER on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
CMS and other LHC experiments pose a new challenge for vertex reconstruction: the development of efficient algorithms for high-luminosity beam collisions. We present here a new algorithm in the field of vertex finding: Deterministic Annealing (DA). This algorithm comes from information theory, by analogy to statistical physics, and has already been used in clustering and classification proble ... More
Presented by Dr. E. CHABANAT on 30 Sep 2004 at 10:00
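As a rough illustration of the deterministic-annealing idea behind the contribution above (a hypothetical Python sketch, not the CMS implementation): points are softly assigned to candidate vertex positions at a temperature T, the positions are updated as weighted means, and T is cooled so the assignments gradually harden.

```python
import math

def da_cluster(points, k, t_start=10.0, t_stop=0.01, cool=0.9, sweeps=10):
    """Deterministic-annealing clustering in 1D: Gibbs-weighted soft
    assignments at temperature t, cooled geometrically, so k prototypes
    settle onto the cluster (here: vertex) positions."""
    lo, hi = min(points), max(points)
    # spread the initial prototypes across the data range
    protos = [lo + (hi - lo) * (i + 1) / (k + 1) for i in range(k)]
    t = t_start
    while t > t_stop:
        for _ in range(sweeps):
            num = [0.0] * k   # weighted sums of point positions
            den = [0.0] * k   # sums of membership weights
            for x in points:
                # soft membership of x in each prototype at temperature t
                w = [math.exp(-((x - p) ** 2) / t) for p in protos]
                s = sum(w) or 1.0
                for i in range(k):
                    num[i] += (w[i] / s) * x
                    den[i] += w[i] / s
            protos = [num[i] / den[i] if den[i] > 0 else protos[i]
                      for i in range(k)]
        t *= cool             # cooling step: assignments sharpen
    return sorted(protos)
```

At high temperature the assignments are diffuse and robust to outliers; as t drops, the prototypes converge to the hard cluster means, which is the appeal of DA in dense, high-pile-up events.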
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
This presentation describes the experiences and the lessons learned by the RHIC/ATLAS Computing Facility (RACF) in building and managing its 2,700+ CPU (and growing) Linux Farm over the past 6+ years. We describe how hardware cost, end-user needs, infrastructure, footprint, hardware configuration, vendor selection, software support and other considerations have played a role in the p ... More
Presented by Tomasz WLODEK on 27 Sep 2004 at 14:20
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
As a PPDG cross-team joint project, we proposed to study, develop, implement and evaluate a set of tools that allow Meta-Schedulers to take advantage of consistent information (such as information needed for complex decision making mechanisms) across both local and/or Grid Resource Management Systems (RMS). We will present and define the requirements and schema by which one can consi ... More
Presented by E. EFSTATHIADIS on 30 Sep 2004 at 17:50
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
A simultaneous track finding / fitting procedure based on Kalman filtering approach has been developed for the forward muon spectrometer of ALICE experiment. In order to improve the performance of the method in high-background conditions of the heavy ion collisions the "canonical" Kalman filter has been modified and supplemented by a "smoother" part. It is shown that the resulting "ex ... More
on 30 Sep 2004 at 10:00
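The filter-plus-smoother combination described in the abstract above can be illustrated in one dimension (a hypothetical Python sketch, not the ALICE code): a forward Kalman pass followed by a Rauch-Tung-Striebel backward pass that refines earlier estimates using later measurements.

```python
def kalman_smooth(zs, q=0.01, r=1.0, x0=0.0, p0=100.0):
    """Scalar Kalman filter plus Rauch-Tung-Striebel smoother.
    State x is a single track parameter; random-walk process noise q
    stands in for perturbations such as multiple scattering, and r is
    the measurement noise variance."""
    xs, ps, xps, pps = [], [], [], []
    x, p = x0, p0
    for z in zs:                        # forward (filter) pass
        xp, pp = x, p + q               # predict to the next layer
        k = pp / (pp + r)               # Kalman gain
        x = xp + k * (z - xp)           # update with measurement z
        p = (1 - k) * pp
        xs.append(x); ps.append(p); xps.append(xp); pps.append(pp)
    sm = xs[:]                          # backward (smoother) pass
    for i in range(len(zs) - 2, -1, -1):
        c = ps[i] / pps[i + 1]          # smoother gain
        sm[i] = xs[i] + c * (sm[i + 1] - xps[i + 1])
    return sm
```

The filter alone leaves early estimates dominated by the poor initial guess; the smoother propagates the later measurements backwards, which is why an "extended" filter-plus-smoother improves performance in high-background conditions.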
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
The main objective of the MathLib project is to give expertise and support to the LHC experiments on mathematical and statistical computational methods. The aim is to provide a coherent set of mathematical libraries. Users of this set of libraries are developers of experiment reconstruction and simulation software, of analysis tools frameworks, such as ROOT, and physicists performing data an ... More
Presented by L. MONETA on 30 Sep 2004 at 15:20
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
By 2008, the T0/T1 centre for the LHC at CERN is estimated to use about 5000 TB of disk storage. This is a very significant increase over the roughly 250 TB in production now. In order to be affordable, the chosen technology must provide the required performance and at the same time be cost-effective and easy to operate and use. We will present an analysis of the cost (both in terms of material an ... More
Presented by H. MEINHARD on 29 Sep 2004 at 14:40
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
In March-April 2004 the CMS experiment undertook a Data Challenge (DC04). During the previous 8 months CMS carried out a large simulated-event production. The goal of the challenge was to run the CMS reconstruction for a sustained period at a 25 Hz input rate, distribute the data to the CMS Tier-1 centers and analyze them at remote sites. Grid environments developed in Europe by the LHC Computing Gr ... More
Presented by A. FANFANI on 29 Sep 2004 at 15:00
Type: poster Session: Poster Session 1
Track: Track 6 - Computer Fabrics
The scalable serving of shared filesystems across large clusters of computing resources continues to be a difficult problem in high energy physics computing. The US CMS group at Fermilab has performed a detailed evaluation of hardware and software solutions to allow filesystem access to data from computing systems. The goal of the evaluation was to arrive at a solution that was able to m ... More
Presented by L. LISA GIACCHETTI on 28 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
Extensive and thorough testing of the EGEE middleware is essential to ensure that a production-quality Grid can be deployed on a large scale as well as across the broad range of heterogeneous resources that make up the hundreds of Grid computing centres both in Europe and worldwide. Testing of the EGEE middleware encompasses the tasks of both verification and validation. In addition, we te ... More
Presented by L. GUY on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
The Logging and Bookkeeping service tracks jobs passing through the Grid. It collects important events generated by both the grid middleware components and applications, and processes them at a chosen L&B server to provide the job state. The events are transported through secure, reliable channels. Job tracking is fully distributed and does not depend on a single information source; the robust ... More
Presented by L. MATYSKA on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
We show how it is nowadays possible to achieve both accuracy and fast computation response in radiotherapy dosimetry using Monte Carlo methods together with a distributed computing model. Monte Carlo methods have never been used in clinical practice because, even though they are more accurate than the available commercial software, the calculation time needed to accumulate sufficient statis ... More
Presented by M.G. PIA on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
There is a permanent quest for user-friendliness in HEP analysis. This growing need scales directly with the complexity of the analysis frameworks' interfaces. In fact, the user is provided with an analysis framework that makes use of a general-purpose language to program the query algorithms. The user usually finds this overwhelming, since he or she is presented with the complexity of the in ... More
Presented by V M. MOREIRA DO AMARAL on 30 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
As part of the ATLAS Data Challenges 2 (DC2), an automatic production system was introduced and with it a new data management component. The data management tools used for previous Data Challenges were built as components separate from the existing Grid middleware. These tools relied on a database of their own, which acted as a replica catalog. With the extensive use of Grid technology expec ... More
Presented by M. BRANCO on 27 Sep 2004 at 14:00
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
The algorithms for the detection of gravitational waves are usually very complex due to the low signal to noise ratio. In particular the search for signals coming from coalescing binary systems can be very demanding in terms of computing power, like in the case of the classical Standard Matched Filter Technique. To overcome this problem, we tested a Dynamic Matched Filter Technique, still ... More
Presented by Dr. S. PARDI on 30 Sep 2004 at 10:00
Type: oral presentation Session: Plenary
Track: Plenary Sessions
The European Grid Research vision as set out in the Information Society Technologies Work Programmes of the EU's Sixth Research Framework Programme is to advance, consolidate and mature Grid technologies for widespread e-science, industrial, business and societal use. A batch of Grid research projects with 52 Million EUR EU support was launched during the European Grid Technology Days 15 - 17 ... More
Presented by Max LEMKE on 28 Sep 2004 at 12:30
Type: oral presentation Session: Plenary
Track: Plenary Sessions
Today and in the future businesses need an intelligent network. And Enterasys has the smarter solution. Our active network uses a combination of context-based and embedded security technologies - as well as the industry’s first automated response capability - so it can manage who is using your network. Our solution also protects the entire enterprise - from the edge, through the distribu ... More
Presented by J. ROESE on 29 Sep 2004 at 11:30
Type: oral presentation Session: Grid Security
Track: Track 4 - Distributed Computing Services
In the evolution of computational grids, security threats were overlooked in the desire to implement a high performance distributed computational system. But now the growing size and profile of the grid require comprehensive security solutions as they are critical to the success of the endeavour. A comprehensive security system, capable of responding to any attack on grid resources, is ind ... More
Presented by S. NAQVI on 29 Sep 2004 at 14:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
The event data model (EDM) of the ATLAS experiment is presented. For large collaborations like the ATLAS experiment, common interfaces and data objects are a necessity to ensure easy maintenance and coherence of the experiment's software platform over a long period of time. The ATLAS EDM improves commonality across the detector subsystems and subgroups such as trigger, test beam reconstru ... More
Presented by Edward MOYSE on 29 Sep 2004 at 16:50
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
HEP analysis is an iterative process. It is critical that in each iteration the physicist's analysis job accesses the same information as previous iterations (unless explicitly told to do otherwise). This becomes problematic after the data has been reconstructed several times. In addition, when starting a new analysis, physicists normally want to use the most recent version of reconstruct ... More
Presented by C. JONES on 29 Sep 2004 at 16:30
Type: oral presentation Session: Plenary
Track: Plenary Sessions
Today's computers are roughly a factor of one billion less efficient at doing their job than the laws of fundamental physics state that they could be. How much of this efficiency gain will we actually be able to harvest? What are the biggest obstacles to achieving many orders of magnitude improvement in our computing hardware, rather than the roughly factor of two we are used to seeing w ... More
Presented by Stan WILLIAMS on 29 Sep 2004 at 11:00
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
LCG-2 is the collective name for the set of middleware released for use on the LHC Computing Grid in December 2003. This middleware, based on LCG-1, already included several improvements in the Data Management area. These included the introduction of the Grid File Access Library (GFAL), a POSIX-like I/O interface, along with MSS integration via the Storage Resource Manager (SRM) interface. L ... More
Presented by J-P. BAUD on 27 Sep 2004 at 15:40
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
In a Grid environment, access to information on system resources is a necessity in order to perform common tasks such as matching job requirements with available resources, accessing files or presenting monitoring information. Thus both middleware services, like workload and data management, and applications, like monitoring tools, require an interface to the Grid information service w ... More
Presented by P. MENDEZ LORENZO on 29 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
Most of the simulated events for the DZero experiment at Fermilab have been historically produced by the “remote” collaborating institutions. One of the principal challenges reported concerns the maintenance of the local software infrastructure, which is generally different from site to site. As the understanding of the community on distributed computing over distributively owned and share ... More
Presented by Rob KENNEDY on 29 Sep 2004 at 15:20
Type: poster Session: Poster Session 1
Track: Track 1 - Online Computing
As modern High Energy Physics (HEP) experiments require more distributed computing power to fulfill their demands, the need for efficient distributed online services for control, configuration and monitoring in such experiments becomes increasingly important. This paper describes the experience of using standard Common Object Request Broker Architecture (CORBA) middleware for providin ... More
Presented by S. KOLOS on 28 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
This paper describes the deployment and configuration of the production system for ATLAS Data Challenge 2 starting in May 2004, at Brookhaven National Laboratory, which is the Tier1 center in the United States for the International ATLAS experiment. We will discuss the installation of Windmill (supervisor) and Capone (executor) software packages on the submission host and the relevant securit ... More
Presented by X. ZHAO on 29 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
This presentation will summarise the deployment experience gained with POOL during the first large data challenges performed by the LHC experiments. In particular we discuss storage access performance and optimisations, integration issues with grid middleware services such as the LCG Replica Location Service (RLS) and the LCG Replica Manager, and experience with the POOL proposed way ... More
Presented by Maria GIRONE on 29 Sep 2004 at 14:00
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
A sizeable increase in the machine luminosity of the KEKB accelerator is expected in the coming years. This may result in a shortage of data storage resources for the Belle experiment in the near future, and it is desirable to reduce the data flow as much as possible before writing the data to the storage device. For this purpose, a realtime event reconstruction farm has been installed in the Belle ... More
Presented by R. ITOH on 29 Sep 2004 at 14:00
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
The adoption of a rigorous software process is well known to represent a key factor for the quality of the software product and the most effective usage of the human resources available to a software project. The Unified Process, in particular its commercial packaging known as the RUP (Rational Unified Process) has been one of the most widely used software process models in the software i ... More
Presented by M.G. PIA on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 1
Track: Track 6 - Computer Fabrics
The NGOP Monitoring Project at FNAL has developed a package which has demonstrated the capability to efficiently monitor tens of thousands of entities on thousands of hosts, and has been in operation for over 4 years. The project has met the majority of its initial requirements, and also the majority of the requirements discovered along the way. This paper will describe what worked, and wha ... More
Presented by J. FROMM on 28 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
The NorduGrid middleware, ARC, has integrated support for querying and registering to Data Indexing services such as the Globus Replica Catalog and Globus Replica Location Server. This support allows one to use these Data Indexing services for, for example, brokering during job submission, automatic registration of files and many other tasks. This integrated support is complemented by a set of ... More
Presented by O. SMIRNOVA on 27 Sep 2004 at 15:20
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
The ARDA project was started in April 2004 to support the four LHC experiments (ALICE, ATLAS, CMS and LHCb) in the implementation of individual production and analysis environments based on the EGEE middleware. The main goal of the project is to allow fast feedback between the experiment and the middleware development teams via the construction and the usage of end-to-end prototypes al ... More
Presented by Birger KOBLITZ on 29 Sep 2004 at 15:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
The management of Application and Experiment Software represents a very common issue in emerging grid-aware computing infrastructures. While the middleware is often installed by system administrators at a site via customized tools that serve also for the centralized management of the entire computing facility, the problem of installing, configuring and validating Gigabytes of Virtual Organiza ... More
Presented by R. SANTINELLI on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
The Electron Gamma Shower (EGS) Code System at SLAC is designed to simulate the flow of electrons, positrons and photons through matter at a wide range of energies. It has a large user base among the high-energy physics community and is often used as a teaching tool through a Web interface that allows program input and output. Our work aims to improve the user interaction and shower visual ... More
Presented by B. WHITE on 30 Sep 2004 at 10:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
An object-oriented FAst MOnte-Carlo Simulation (FAMOS) has recently been developed for CMS to allow rapid analyses of all final states envisioned at the LHC while keeping a high degree of accuracy for the detector material description and the related particle interactions. For example, the simulation of the material effects in the tracker layers includes charged particle energy loss by ionizat ... More
Presented by Dr. F. BEAUDETTE on 27 Sep 2004 at 14:20
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
A typical central Au-Au collision in the CBM experiment (GSI, Germany) will produce up to 700 tracks in the inner tracker. The large track multiplicity, together with the presence of a nonhomogeneous magnetic field, makes event reconstruction complicated. A cellular automaton method is used to reconstruct tracks in the inner tracker. The cellular automaton algorithm creates short track segments in n ... More
Presented by I. KISEL on 30 Sep 2004 at 14:20
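The segment-building and evolution steps sketched in the abstract above can be illustrated with a toy one-dimensional model. The hit coordinates, layer structure and slope cut below are invented for illustration only and do not reflect the actual CBM geometry or the authors' code; this is a minimal sketch of the cellular-automaton idea, not the real algorithm.

```python
# Toy hits as (layer, y): two straight tracks plus one noise hit (all invented).
hits = [(0, 0.0), (1, 1.0), (2, 2.0), (3, 3.0),   # track A, slope ~ +1
        (0, 5.0), (1, 4.0), (2, 3.1), (3, 2.0),   # track B, slope ~ -1
        (1, 9.0)]                                  # isolated noise hit

# Step 1: link hits on adjacent layers into short segments when the
# step |dy| is compatible with a track (here: close to 1 unit per layer).
segments = []
for i, (l1, y1) in enumerate(hits):
    for j, (l2, y2) in enumerate(hits):
        if l2 == l1 + 1 and abs(abs(y2 - y1) - 1.0) < 0.2:
            segments.append((i, j))

# Step 2: cellular-automaton evolution. Each segment's counter becomes
# 1 + the maximum counter of any segment ending where this one starts,
# so counters grow along continuous chains and stay small for noise.
counter = {s: 1 for s in segments}
changed = True
while changed:
    changed = False
    for seg in segments:
        feeders = [counter[s] for s in segments if s[1] == seg[0]]
        new = 1 + max(feeders, default=0)
        if new > counter[seg]:
            counter[seg] = new
            changed = True

# Step 3: track candidates are read out starting from the highest counters.
longest_chain = max(counter.values())
```

In this toy event, both three-segment tracks reach a counter of 3, while the noise hit links to nothing and contributes no segment at all.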
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
We present a set of algorithms for fast pattern recognition and track reconstruction using 3D space points, aimed at the High Level Triggers (HLT) of multi-collision hadron collider environments. At the LHC there are several interactions per bunch crossing, separated along the beam direction, z. The strategy we follow is to (a) identify the z-position of the interesting interaction prior t ... More
Presented by Dr. N. KONSTANTINIDIS on 29 Sep 2004 at 15:40
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
The BTeV experiment, a proton/antiproton collider experiment at the Fermi National Accelerator Laboratory, will have a trigger that will perform complex computations (to reconstruct vertices, for example) on every collision (as opposed to the more traditional approach of employing a first level hardware based trigger). This trigger requires large-scale fault adaptive embedded software: with ... More
Presented by P. SHELDON on 29 Sep 2004 at 14:40
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
A large number of Grids have been developed, motivated by geo-political or application requirements. Despite being mostly based on the same underlying middleware, the Globus Toolkit, they are generally not inter-operable for a variety of reasons. We present a method of federating those disparate grids which are based on the Globus Toolkit, together with a concrete example of interfacing the ... More
Presented by R. WALKER on 29 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
The LHCb experiment needs to store all information about the datasets and the processing history of data recorded from particle collisions at the LHC collider at CERN, as well as of simulated data. To achieve this functionality a design based on data warehousing techniques was chosen, where several user-services can be implemented and optimized individually without losing ... More
Presented by C. CIOFFI on 27 Sep 2004 at 17:30
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
A high performance system has been assembled using standard web components to deliver database information to a large number (thousands) of broadly distributed clients. The CDF experiment at Fermilab is building processing centers around the world, imposing a heavy load on their database repository. For delivering read-only data, such as calibrations, trigger information and run condit ... More
Presented by L. LUEKING on 27 Sep 2004 at 14:40
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
The STAR Collaboration is currently using simulation software based on Geant 3. The emergence of the new Monte Carlo simulation packages, coupled with evolution of both STAR detector and its software, requires a drastic change of the simulation framework. We see the Virtual Monte Carlo (VMC) approach as providing a layer of abstraction that facilitates such transition. The VMC platform is ... More
Presented by M. POTEKHIN on 29 Sep 2004 at 15:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
We describe a Java toolkit for full event reconstruction and analysis. The toolkit is currently being used for detector design and physics analysis for a future e+ e- linear collider. The components are fully modular and are available for tasks from digitization of tracking detector signals through to cluster finding, pattern recognition, fitting, jet finding, and analysis. We discus ... More
Presented by N. GRAF on 30 Sep 2004 at 14:40
Type: poster Session: Poster Session 1
Track: Track 6 - Computer Fabrics
In 1995 I predicted that the dual-processor PC would start invading HEP computing, and a couple of years later the x86-based PC was omnipresent in our computing facilities. Today, we cannot imagine HEP computing without thousands of PCs at its heart. This talk will look at some of the reasons why we may one day be forced to leave this sweet spot. This would not be because we (the HEP community) ... More
Presented by S. JARP on 28 Sep 2004 at 10:00
Type: oral presentation Session: Grid Security
Track: Track 4 - Distributed Computing Services
A key feature of Grid systems is the sharing of resources among multiple Virtual Organizations (VOs). The sharing process needs a policy framework to manage resource access and usage. Policy frameworks generally exist for farms or local systems only, but Grid environments now require a general, distributed policy system. Generally VOs and local systems have contr ... More
on 29 Sep 2004 at 17:50
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
Computational and data grids are now entering a more mature phase where experimental test-beds are turned into production-quality infrastructures operating around the clock. All this is becoming true both at the national level, where an example is the Italian INFN production grid (http://grid-it.cnaf.infn.it), and at the continental level, where the most striking example is the European Union EGE ... More
Presented by R. BARBERA on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
GROSS (GRidified Orca Submission System) has been developed to provide CMS end users with a single interface for running batch analysis tasks over the LCG-2 Grid. The main purpose of the tool is to carry out job splitting, preparation, submission, monitoring and archiving in a transparent way which is simple to use for the end user. Central to its design has been the requirement for allowing ... More
Presented by H. TALLINI on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
The study of the effects of space radiation on astronauts is an important concern of space missions for the exploration of the Solar System. The radiation hazard to the crew is critical to the feasibility of interplanetary manned missions. To protect the crew, shielding must be designed, the environment must be anticipated and monitored, and a warning system must be put in place. A Geant4 s ... More
Presented by S. GUATELLI on 30 Sep 2004 at 10:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
Geant4 is relied upon in production by an increasing number of HEP experiments and for applications in several other fields. Its capabilities continue to be extended, as its performance and modelling are enhanced. This presentation will give an overview of recent developments in diverse areas of the toolkit. These will include, amongst others, the optimisation for complex setups usi ... More
Presented by Dr. J. APOSTOLAKIS on 27 Sep 2004 at 15:20
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
Most HENP experiment software includes a logging or tracing API that allows important feedback from the core application to be displayed in a particular format. However, inserting log statements into the code is a low-tech method for tracing the program execution flow and often leads to a flood of messages in which the relevant ones are occluded. In a distributed computing environment, ... More
Presented by V. FINE on 29 Sep 2004 at 10:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
Genetic programming is a machine learning technique, popularized by Koza in 1992, in which computer programs that solve user-posed problems are automatically discovered. Populations of programs are evaluated for their fitness at solving a particular problem. New populations of ever-increasing fitness are generated by mimicking the biological processes underlying evolution. These processes ar ... More
Presented by E. VAANDERING on 30 Sep 2004 at 18:10
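The evolutionary loop described in the abstract above, evaluate fitness, select the fittest, apply variation, can be sketched in a few lines. This toy example evolves real numbers rather than Koza-style program trees, and every name, parameter and the target function are illustrative assumptions, not the authors' actual system.

```python
import random

random.seed(42)  # reproducible toy run

def fitness(x):
    # Toy problem (assumption): maximize -x^2, i.e. find x near 0.
    return -x * x

def evolve(pop_size=50, generations=100):
    # Random initial population of candidate solutions.
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter of two random candidates survives.
        parents = [max(random.sample(pop, 2), key=fitness)
                   for _ in range(pop_size)]
        # Variation: small Gaussian mutation mimics genetic change.
        pop = [p + random.gauss(0, 0.5) for p in parents]
    return max(pop, key=fitness)

best = evolve()
```

Genetic programming proper replaces the real-valued genome with an expression tree and adds crossover between trees, but the select-and-vary loop is the same.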
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
Gfarm v2 is designed for facilitating reliable file sharing and high-performance distributed and parallel data computing in a Grid across administrative domains by providing a Grid file system. A Grid file system is a virtual file system that federates multiple file systems. It is possible to share files or data by mounting the virtual file system. This paper discusses the design and im ... More
Presented by O. TATEBE on 27 Sep 2004 at 17:10
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
The ALICE experiment and the ROOT team have developed a Grid-enabled version of PROOF that allows efficient parallel processing of large and distributed data samples. This system has been integrated with the ALICE-developed AliEn middleware. Parallelism is implemented at the level of each local cluster for efficient processing and at the Grid level, for optimal workload management of distrib ... More
Presented by F. RADEMAKERS on 29 Sep 2004 at 15:20
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
For very large projects like the LHC Computing Grid Project (LCG) involving 8,000 scientists from all around the world, it is an indispensable requirement to have a well organized user support. The Institute for Scientific Computing at the Forschungszentrum Karlsruhe started implementing a Global Grid User Support (GGUS) after official assignment of the Grid Deployment Board in March 2003. ... More
Presented by T. ANTONI on 29 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
To maximize the physics potential of the data currently being taken, the CDF collaboration at Fermi National Accelerator Laboratory has started to deploy user analysis computing facilities at several locations throughout the world. Over 600 users are signed up and able to submit their physics analysis and simulation applications directly from their desktop or laptop computers to these facil ... More
Presented by A. SILL on 30 Sep 2004 at 18:10
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
The GSI online-offline analysis system Go4 is a ROOT based framework for medium energy ion- and nuclear physics experiments. Its main features are a multithreaded online mode with a non-blocking Qt GUI, and abstract user interface classes to set up the analysis process itself which is organised as a list of subsequent analysis steps. Each step has its own event objects and a processor inst ... More
Presented by H. ESSEL on 27 Sep 2004 at 15:40
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
The installation and configuration of LCG middleware, as it is currently being done, is complex and delicate. An “accurate” configuration of all the services of LCG middleware requires a deep knowledge of the inside dynamics and hundreds of parameters to be dealt with. On the other hand, the number of parameters and flags that are strictly needed in order to run a working “default ... More
Presented by A. RETICO on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
GraXML is the framework for manipulation and visualization of 3D geometrical objects in space. The full framework consists of the GraXML toolkit, libraries implementing Generic and Geometric Models and end-user interactive front-ends. GraXML Toolkit provides a foundation for operations on 3D objects (both detector elements and events). Each external source of 3D data is automatically translat ... More
Presented by J. HRIVNAC on 30 Sep 2004 at 10:00
Type: oral presentation Session: Plenary
Track: Plenary Sessions
In this talk, we will discuss the future of storage systems. In particular, we will focus on several big challenges which we are facing in storage, such as being able to build, manage and back up really massive storage systems, being able to find information of interest, being able to do long-term archival of data, and so on. We also present ideas and research being done to address these ch ... More
Presented by Jai MENON on 29 Sep 2004 at 09:30
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
Nuclear and High Energy Physics experiments such as STAR at BNL are generating millions of files with PetaBytes of data each year. In most cases, analysis programs have to read all events in a file in order to find the interesting ones. Since most analyses are only interested in some subsets of events in a number of files, a significant portion of the computer time is wasted on readi ... More
Presented by K. WU on 30 Sep 2004 at 17:10
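The idea in the abstract above, selecting the interesting subset of events without reading every event in a file, can be sketched with a toy bitmap index. The attribute, bin boundaries and data below are invented for illustration; this is not the authors' implementation.

```python
# Toy per-event attribute (invented): number of tracks in each event.
ntracks = [3, 12, 7, 12, 1, 9, 12, 4]

# Build one bitmap per value bin; a Python int serves as the bit vector.
# Bit i of a bin's bitmap is set when event i falls into that bin.
bitmaps = {"low": 0, "high": 0}
for i, n in enumerate(ntracks):
    bitmaps["high" if n >= 9 else "low"] |= 1 << i

# A query such as "events with many tracks" now touches only the compact
# bitmap; the bulky event records are read for the selected indices alone.
selected = [i for i in range(len(ntracks)) if (bitmaps["high"] >> i) & 1]
```

In a real system the bitmaps are built once when the file is written, so each analysis pays only for the events it actually selects.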
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
This paper reports on the deployment experience of the de facto grid information system, Globus MDS, in a large-scale production grid. The results of this experience led to the development of an information caching system based on a standard OpenLDAP database. The paper then describes how this caching system was developed further into a production-quality information system including a gen ... More
Presented by L. FIELD on 29 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
In this paper we report on the implementation of an early prototype of distributed high-level services supporting grid-enabled data analysis within the LHC physics community as part of the ARDA project within the context of the GAE (Grid Analysis Environment) and begin to investigate the associated complex behaviour of such an end-to-end system. In particular, the prototype integrates a ... More
Presented by F. VAN LINGEN on 30 Sep 2004 at 15:40
Type: oral presentation Session: Plenary
Track: Plenary Sessions
The aim of Grid computing is to enable the easy and open sharing of resources between large and highly distributed communities of scientists and institutes across many independent administrative domains. Convincing site security officers and computer centre managers to allow this to happen in view of today's ever-increasing Internet security problems is a major challenge. Convincing users ... More
Presented by David KELSEY on 28 Sep 2004 at 11:30
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
Grid computing involves the close coordination of many different sites which offer distinct computational and storage resources to the Grid user community. The resources at each site need to be monitored continuously. Static and dynamic site information needs to be presented to the user community in a simple and efficient manner. This paper will present both the design and implementation ... More
Presented by M. MAMBELLI, B K. KIM on 30 Sep 2004 at 15:40
Type: oral presentation Session: Plenary
Track: Plenary Sessions
The U.S. Trillium Grid projects, in collaboration with High Energy Experiment groups from the Large Hadron Collider (LHC) experiments ATLAS and CMS, Fermilab's BTeV, members of the LIGO and SDSS collaborations, and groups from other scientific disciplines and computational centers, have deployed a multi-VO, application-driven grid laboratory ("Grid3"). The grid laboratory has sustained for several months ... More
on 28 Sep 2004 at 09:30
Type: oral presentation Session: Plenary
Track: Plenary Sessions
This talk gives a brief overview of recent development of high performance computing and Grid initiatives in the Nordic region. Emphasis will be placed on the technology and policy demands posed by the integration of general purpose supercomputing centers into Grid environments. Some of the early experiences of bridging national eBorders in the Nordic region will also be presented. Rather t ... More
Presented by Bo Anders YNNERMAN on 30 Sep 2004 at 11:00
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
Designing a usable, visually attractive GUI is somewhat more difficult than it appears at first glance. The users, the GUI designers and the programmers are three important parties involved in this process, and each needs a comprehensive view of the application goals, as well as of the steps that have to be taken to meet the application requirements successfully. The fundamental G ... More
Presented by I. ANTCHEVA on 30 Sep 2004 at 14:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
During the years 2000 and 2001 the HERA machine and the H1 experiment performed substantial luminosity upgrades. To cope with the increased demands on data handling an effort was made to redesign and modernize the analysis software. Main goals were to lower turn-around time for physics analysis by providing a single framework for data storage, event selection, physics analysis and even ... More
Presented by Dr. J. KATZY on 29 Sep 2004 at 17:50
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
The European DataGrid (EDG) project ran from 2001 to 2004, with the aim of producing middleware which could form the basis of a production Grid, and of running a testbed to demonstrate the middleware. HEP experiments (initially the four LHC experiments and subsequently BaBar and D0) were involved from the start in specifying requirements, and subsequently in evaluating the performance of t ... More
Presented by S. BURKE on 27 Sep 2004 at 16:30
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
Project SETI@HOME has proven to be one of the biggest successes of distributed computing in recent years. With a quite simple approach, SETI manages to process huge amounts of data using a vast amount of distributed computing power. To extend the generic usage of these kinds of distributed computing tools, BOINC (Berkeley Open Infrastructure for Network Computing) is being developed. I ... More
on 30 Sep 2004 at 14:00
Type: oral presentation Session: Wide Area Networking
Track: Track 7 - Wide Area Networking
A High Energy Physics experiment has between 200 and 1000 collaborating physicists from nations spanning the entire globe. Each collaborator brings a unique combination of interests, and each has to search through the same huge heap of messages, research results, and other communication to find what is useful. Too much scientific information is as useless as too little. It is time consumi ... More
Presented by Mr. G. ROEDIGER on 30 Sep 2004 at 17:30
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
The migration of the Harp data and software from an Objectivity-based to an Oracle-based data storage solution is reviewed in this presentation. The project, which was successfully completed in January 2004, involved three distinct phases. In the first phase, which profited significantly from the previous COMPASS data migration project, 30 TB of Harp raw event data were migrated in two w ... More
Presented by A. VALASSI on 30 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
SAM was developed as a data handling system for Run II at Fermilab. SAM is a collection of services, each described by metadata. The metadata are modeled on a relational database, and implemented in ORACLE. SAM, originally deployed in production for the D0 Run II experiment, has now been also deployed at CDF and is being commissioned at MINOS. This illustrates that the metadata decompositi ... More
on 29 Sep 2004 at 16:30
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
In the past year, BaBar has shifted from using Objectivity to using ROOT I/O as the basis for our primary event store. This shift required a total reworking of Kanga, our ROOT-based data storage format. We took advantage of this opportunity to ease the use of the data by supporting multiple access modes that make use of many of the analysis tools available in ROOT. Specifically, our new e ... More
Presented by Dr. M. STEINKE on 29 Sep 2004 at 17:10
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services

Presented by Richard MOUNT on 29 Sep 2004 at 17:10
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
This paper describes recent developments in the IGUANA (Interactive Graphics for User ANAlysis) project. IGUANA is a generic framework and toolkit, used by CMS and D0, to build a variety of interactive applications such as detector and event visualisation and interactive GEANT3 and GEANT4 browsers. IGUANA is a freely available toolkit based on open-source components including Qt, OpenInvent ... More
Presented by S. MUZAFFAR on 30 Sep 2004 at 15:00
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
The CHEP 2004 conference is using the Integrated Digital Conferencing product to manage part of its web site and the processes required to run the conference. This software has been built in the framework of the InDiCo European Project. It is designed to be generic and extensible, with the goal of supporting single seminars as well as the management of large conferences. Partly developed at CERN within the ... More
Presented by T. BARON on 30 Sep 2004 at 10:00
Session: INTAS
INTAS (http://www.intas.be): International Association for the promotion of co-operation with scientists from the New Independent States of the former Soviet Union (NIS). INTAS encourages joint activities between its INTAS Members and the NIS in all exact and natural sciences, economics, human and social sciences. INTAS supports a number of NIS participants to attend the 2004 Computi ... More
on 30 Sep 2004 at 16:30
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
IceCube is a cubic kilometer-scale neutrino telescope under construction at the South Pole. The minimalistic nature of the instrument poses several challenges for the software framework. Events occur at random times, and frequently overlap, requiring some modifications of the standard event-based processing paradigm. Computational requirements related to modeling the detector medium necessi ... More
Presented by T. DEYOUNG on 27 Sep 2004 at 17:50
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
A fundamental part of software development is to detect and analyse weak spots of the programs to guide optimisation efforts. We present a brief overview and usage experience of some of the most valuable open-source tools, such as valgrind and oprofile. We describe their main strengths and weaknesses as experienced by the CMS experiment. As we have found that these tools do not satisfy all ... More
Presented by G. EULISSE on 30 Sep 2004 at 18:10
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
The ATLAS experiment at the Large Hadron Collider (LHC) will face the challenge of efficiently selecting interesting candidate events in pp collisions at 14 TeV center-of-mass energy, whilst rejecting the enormous number of background events stemming from an interaction rate of about 10^9 Hz. The Level-1 trigger will reduce the incoming rate to O(100 kHz). Subsequently, the High-L ... More
Presented by Manuel DIAS-GOMEZ on 29 Sep 2004 at 16:30
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
The HEP experiments that use the regional center GridKa will handle large amounts of data. Traditional access methods via local disks or large network storage servers show limitations in size, throughput or data management flexibility. High speed interconnects like Fibre Channel, iSCSI or Infiniband as well as parallel file systems are becoming increasingly important in large cluster insta ... More
Presented by J. VANWEZEL on 27 Sep 2004 at 16:50
Type: oral presentation Session: Plenary
Track: Plenary Sessions
As Fermilab's representatives to the C++ standardization effort, we have been promoting directions of special interest to the physics community. We here report on selected recent developments toward the next revision of the C++ Standard. Topics will include standardization of random number and special function libraries, as well as core language issues promoting improved run-time performance. ... More
Presented by M. PATERNO on 30 Sep 2004 at 08:30
Type: poster Session: Poster Session 1
Track: Track 6 - Computer Fabrics
The "gridification" of a computing farm is usually a complex and time-consuming task. Operating system installation, grid-specific software, and configuration file customization can turn into a large problem for site managers. This poster introduces InGRID, a solution used to install and maintain grid software on small/medium-size computing farms. Grid element installation with InGRID consists ... More
Presented by F.M. TAURINO on 28 Sep 2004 at 10:00
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
Distributed physics analysis techniques as provided by the rootd and proofd concepts require a fast and efficient interconnect between the nodes. Apart from the required bandwidth, the latency of message transfers is important, in particular in environments with many nodes. Ethernet is known to have large latencies, between 30 and 60 microseconds for common Gigabit Ethernet. The Inf ... More
Presented by A. HEISS on 29 Sep 2004 at 15:40
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
The R-GMA (Relational Grid Monitoring Architecture) was developed within the EU DataGrid project, to bring the power of SQL to an information and monitoring system for the grid. It provides producer and consumer services to both publish and retrieve information from anywhere within a grid environment. Users within a Virtual Organization may define their own tables dynamically into which ... More
on 30 Sep 2004 at 15:00
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
DESY is one of the world-wide leading centers for research with particle accelerators and a center for research with synchrotron light. The hadron-electron collider HERA houses four experiments which are taking data and will be operated until at least 2006. The computer center manages data volumes of order 1 PB and is home to around 1000 CPUs. In 2003 DESY started to set up a Gri ... More
Presented by A. GELLRICH on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 1
Track: Track 6 - Computer Fabrics
There are several on-going experiments at IHEP, such as BES and YBJ, as well as the CMS collaboration with CERN. Each experiment has its own computing system, and these computing systems run separately. This leads to very low CPU utilization, due to the different usage periods of each experiment. Grid technology is a very good candidate for integrating these separate computing systems into a "single image ... More
Presented by G. SUN on 28 Sep 2004 at 10:00
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
The ATLAS collaboration had a Combined Beam Test from May until October 2004. Collection and analysis of data required integration of several software systems that are developed as prototypes for the ATLAS experiment, due to start in 2007. Eleven different detector technologies were integrated with the Data Acquisition system and were taking data synchronously. The DAQ was integrated with ... More
Presented by M. DOBSON on 27 Sep 2004 at 17:50
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
The aim of the service is to allow fully distributed analysis of large volumes of data while maintaining true (sub-second) interactivity. All the Grid-related components are based on OGSA-style Grid services, and use, to the maximum extent, existing Globus Toolkit 3.0 (GT3) services. All transactions are authenticated and authorized using the GSI (Grid Security Infrastructure) mechanism - part ... More
Presented by T. JOHNSON on 30 Sep 2004 at 17:30
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
The transportation of ions in matter is a subject of much interest not only in high-energy ion-ion collider experiments such as RHIC and the LHC, but also in many other fields of science, engineering and medical applications. Geant4 is a toolkit for the simulation of the passage of particles through matter, and its OO design makes it easy to extend its capabilities to ion transport. To simulate ions inter ... More
Presented by Dr. T. KOI on 27 Sep 2004 at 17:10
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
JASSimApp is a joint project of SLAC, KEK, and Naruto University to create an integrated GUI for Geant4, based on the JAS3 framework, with the ability to interactively: - Edit Geant4 geometry, materials, and physics processes - Control Geant4 execution, locally and remotely: pass commands and receive output, control the event loop - Access AIDA histograms defined in Geant4 - Show generated Gea ... More
Presented by V. SERBO on 30 Sep 2004 at 17:50
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
JIM (Job and Information Management) is a grid extension to the mature data handling system SAM (Sequential Access via Metadata) used by the CDF, DZero and Minos experiments based at Fermilab. JIM uses a thin client to allow job submission from any computer with Internet access, provided the user has a valid certificate or Kerberos ticket. On completion the job output can be download ... More
Presented by M. BURGON-LYON on 29 Sep 2004 at 10:00
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
A general overview of the Jefferson Lab data acquisition run control system is presented. This run control system is designed to operate the configuration, control, and monitoring of all Jefferson Lab experiments. It controls data-taking activities by coordinating the operation of DAQ sub-systems, online software components and third-party software such as external slow control systems. The m ... More
Presented by V. GYURJYAN on 29 Sep 2004 at 16:30
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
In the context of the Interactive Grid-Enabled Analysis Environment (GAE), physicists desire bi-directional interaction with the jobs they submit. In one direction, monitoring information about the job, and hence a “progress bar”, should be provided to them. In the other direction, physicists should be able to control their jobs. Before submission, they may direct the job to some specified re ... More
Presented by A. ANJUM on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
The Grid is emerging as a great computational resource, but its dynamic behaviour makes the Grid environment unpredictable: system or network failures can occur, or system performance can degrade. So once a job has been submitted, monitoring becomes essential for the user to ensure that the job is completed in an efficient way. In current environments, once a user submits a job he lose ... More
Presented by A. ANJUM on 29 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
The PHENIX collaboration records large volumes of data for each experimental run (now about 1/4 PB/year). Efficient and timely analysis of this data can benefit from a framework for distributed analysis via a growing number of remote computing facilities in the collaboration. The grid architecture has been, or is being, deployed at most of these facilities. The experience being obtained ... More
Presented by A. SHEVEL on 27 Sep 2004 at 18:10
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
In a wide-area, distributed and heterogeneous grid environment, monitoring represents an important and crucial task. It includes system status checking, performance tuning, bottleneck detection, troubleshooting, and fault notification. In particular, a good monitoring infrastructure must provide the information needed to track down the current status of a job in order to locate any problems. Job monitoring ... More
Presented by G. DONVITO, G. TORTONE on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
The infn.it AFS cell has been providing a useful single file-space and authentication mechanism for the whole of INFN, but the lack of a distributed management system has led several INFN sections and labs to set up local AFS cells. The hierarchical transitive cross-realm authentication introduced in the Kerberos 5 protocol, and the new versions of the OpenAFS and MIT implementations of Kerberos ... More
Presented by E.M.V. FASANELLI on 29 Sep 2004 at 10:00
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
The Conditions Database project has been launched to implement a common persistency solution for experiment conditions data in the context of the LHC Computing Grid (LCG) Persistency Framework. Conditions data, such as calibration, alignment or slow control data, are non-event experiment data characterized by the fact that they vary in time and may have different versions. The LCG project ... More
Presented by A. VALASSI on 29 Sep 2004 at 18:10
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
In the framework of the LCG Simulation Project, we present the Generator Services Sub-project, launched in 2003 under the oversight of the LHC Monte Carlo steering group (MC4LHC). The goal of the Generator Services Sub-project is to guarantee physics generator support for the LHC experiments. Work is divided into four work packages: Generator library; Storage, event interfaces and particle ... More
Presented by Dr. P. BARTALINI on 27 Sep 2004 at 14:00
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
LCIO is a persistency framework and data model for the next linear collider. Its original implementation, as presented at CHEP 2003, was focused on simulation studies. Since then the data model has been extended to also incorporate prototype test beam data, reconstruction and analysis. The design of the interface has also been simplified. LCIO defines a common abstract user interface ... More
Presented by F. GAEDE on 29 Sep 2004 at 14:00
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
In this paper we present an overview of the implementation of the LCG interface for the ATLAS production system. In order to profit from the features provided by the DataGRID software, on which LCG is based, we implemented a Python module, seamlessly integrated into the Workload Management System, which can be used as an object-oriented API to the submission services. On top of it we impl ... More
Presented by D. REBATTO on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
Experiments frequently produce many small data files for reasons beyond their control, such as output splitting into physics data streams, parallel processing on large farms, database technology incapable of concurrent writes into a single file, and the constraints of running farms reliably. The resulting data file sizes are often far from ideal for network transfer and mass storage performance. Pro ... More
Presented by L. TUURA on 29 Sep 2004 at 10:00
Type: oral presentation Session: Wide Area Networking
Track: Track 7 - Wide Area Networking
Advanced optical-based networks have the capacity and capability to meet the extremely large data movement requirements of particle physics collaborations. To date, research efforts in the advanced network area have primarily been focused on provisioning, dynamically configuring, and monitoring the wide area optical network infrastructure itself. Application use of these facilities ... More
Presented by P. DEMAR on 30 Sep 2004 at 16:50
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
As part of the DOE SciDAC "National Infrastructure for Lattice Gauge Computing" project, Fermilab builds and operates production clusters for lattice QCD simulations. We currently operate three clusters: a 128-node dual Xeon Myrinet cluster, a 128-node Pentium 4E Myrinet cluster, and a 32-node dual Xeon Infiniband cluster. We will discuss the operation of these systems and examine their per ... More
Presented by Don PETRAVICK on 27 Sep 2004 at 14:40
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
The lattice gauge theory community produces large volumes of data. Because the data produced by completed computations form the basis for future work, the maintenance of archives of existing data and metadata describing the provenance, generation parameters, and derived characteristics of that data is essential not only as a reference, but also as a basis for future work. Development of these ... More
Presented by E. NEILSEN on 29 Sep 2004 at 16:50
Type: poster Session: Poster Session 1
Track: Track 1 - Online Computing
The CLEO collaboration at the Cornell electron-positron storage ring CESR has completed its transition to the CLEO-c experiment. This new program contains a wide array of physics studies of $e^+e^-$ collisions at center-of-mass energies between 3 GeV and 5 GeV. New challenges await the CLEO-c Online computing system, as the trigger rates are expected to rise from < 100 Hz to around 300 ... More
Presented by H. SCHWARTHOFF on 28 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
The design and optimization of the Computing Models for the future LHC experiments, based on the Grid technologies, requires a realistic and effective modeling and simulation of the data access patterns, the data flow across the local and wide area networks, and the scheduling and workflow created by many concurrent, data intensive jobs on large scale distributed systems. This paper present ... More
Presented by I. LEGRAND on 29 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
The LHC needs to achieve reliable high performance access to vastly distributed storage resources across the network. USCMS has worked with Fermilab-CD and DESY-IT on a storage service that was deployed at several sites. It provides Grid access to heterogeneous mass storage systems and synchronization between them. It increases resiliency by insulating clients from storage and network failu ... More
Presented by M. ERNST on 27 Sep 2004 at 14:20
Type: poster Session: Poster Session 1
Track: Track 6 - Computer Fabrics
The Product Support (PS) group of the IT department at CERN distributes and supports more than one hundred different software packages, ranging from tools for computer aided design, field calculations, mathematical and structural analysis to software development. Most of these tools, which are used on a variety of Unix and Windows platforms by different user populations, are commercial p ... More
Presented by N. HOEIMYR on 28 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
The External Software Service of the LCG SPI project provides open source and public domain packages required by the LCG projects and experiments. Presently, more than 50 libraries and tools are provided for a set of platforms decided by the architect forum. All packages are installed following a standard procedure and are documented on the web. A set of scripts has been developed to e ... More
Presented by E. POINSIGNON on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
The CMS Geant4-based Simulation Framework, Mantis, is a specialization of the COBRA framework, which implements the CMS OO architecture. Mantis, which is the basis for the CMS-specific simulation program OSCAR, provides the infrastructure for the selection, configuration and tuning of all essential simulation elements: geometry construction, sensitive detector and magnetic field management, ev ... More
Presented by M. STAVRIANAKOU on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
The University of Edinburgh has a significant interest in mass storage systems as it is one of the core groups tasked with the roll-out of storage software for the UK's particle physics grid, GridPP. We present the results of a development project to provide software interfaces between the SDSC Storage Resource Broker, the EU DataGrid and the Storage Resource Manager. This project was undert ... More
Presented by S. THORN on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 1
Track: Track 7 - Wide Area Networking
Network flow data gathered on border routers and core network switch/routers is used at Fermilab for statistical analysis of traffic patterns, passive network monitoring, and estimation of network performance characteristics. Flow data is also a critical tool in the investigation of computer security incidents. Development and enhancement of flow-based tools is an on-going effort. The current s ... More
Presented by A. BOBYSHEV on 28 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
The aim of the EGEE project (Enabling Grids for E-Science in Europe) is to create a reliable and dependable European Grid infrastructure for e-Science. The objective of the Middleware Re-engineering and Integration Research Activity is to provide robust middleware components, deployable on several platforms and operating systems, corresponding to the core Grid services for resource access, data manag ... More
Presented by E. LAURE on 29 Sep 2004 at 14:20
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
To benefit from substantial advancements in Open Source database technology and ease deployment and development concerns with Objectivity/DB, the Phenix experiment at RHIC is migrating its principal databases from Objectivity to a relational database management system (RDBMS). The challenge of designing a relational DB schema to store a wide variety of calibration classes was solved b ... More
Presented by I. SOURIKOVA on 27 Sep 2004 at 16:30
Type: oral presentation Session: Grid Security
Track: Track 4 - Distributed Computing Services
There have been a number of efforts to develop use cases for the Grid to guide development and usability testing. This talk examines the value of "mis-use cases" for guiding the development of operational controls and error handling. A couple of the more common current network attack patterns will be extrapolated to a global Grid environment. The talk will walk through the various activitie ... More
Presented by D. SKOW on 29 Sep 2004 at 14:20
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
The MonALISA (MONitoring Agents in A Large Integrated Services Architecture) system is a scalable Dynamic Distributed Services Architecture based on the mobile code paradigm. An essential part of managing a global system such as the Grid is a monitoring system able to monitor and track, in real time, the many site facilities, networks, and tasks in progress. MonALISA ... More
Presented by I. LEGRAND on 30 Sep 2004 at 16:30
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
The complexity of the CMS Tracker (more than 50 million channels to monitor), now under construction in ten laboratories worldwide by hundreds of people, will require new tools for monitoring both the hardware and the software. In our approach we use both visualization tools and Grid services to make this monitoring possible. The use of visualization enables us to represent in a ... More
Presented by G. ZITO on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
Fermilab operates a petabyte-scale storage system, Enstore, which is the primary data store for experiments' large data sets. The Enstore system regularly transfers more than 15 terabytes of data each day. It is designed using a client-server architecture providing sufficient modularity to allow easy addition and replacement of hardware and software components. Monitoring of this system ... More
Presented by E. BERMAN on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 1
Track: Track 6 - Computer Fabrics
CDF is deploying a version of its analysis facility (CAF) at several globally distributed sites. On top of the hardware at each of these sites is either an FBSNG or Condor batch manager and a SAM data handling system which in some cases also makes use of dCache. The jobs which run at these sites also make use of a central database located at Fermilab. Each of these systems has its own mon ... More
Presented by I. SFILIGOI on 28 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
We discuss techniques used to access legacy event generators from modern simulation environments. Examples will be given of our experience within the linear collider community accessing various FORTRAN-based generators from within a Java environment. Coding to a standard interface and use of shared object libraries enables runtime selection of generators, and allows for extension of the su ... More
Presented by N. GRAF on 30 Sep 2004 at 10:00
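The "standard interface plus runtime selection" idea described in this abstract can be sketched as follows, in Python rather than Java for brevity. This is a hypothetical illustration, not the linear collider software: the class and registry names are invented, and a dictionary stands in for shared-object lookup.

```python
# Sketch (hypothetical names): generators coded to one standard
# interface, selected by name at run time.
from abc import ABC, abstractmethod

class EventGenerator(ABC):
    """The standard interface every generator implements."""
    @abstractmethod
    def generate(self, n_events):
        ...

class ConstantGenerator(EventGenerator):
    """Stand-in for a wrapped legacy FORTRAN generator."""
    def generate(self, n_events):
        return [{"e": 91.2}] * n_events

class EmptyGenerator(EventGenerator):
    def generate(self, n_events):
        return [{} for _ in range(n_events)]

# The registry plays the role of shared-object lookup: the framework
# resolves a name chosen at run time to a concrete implementation,
# so new generators can be added without changing client code.
REGISTRY = {"constant": ConstantGenerator, "empty": EmptyGenerator}

def make_generator(name):
    return REGISTRY[name]()

events = make_generator("constant").generate(3)
```

Because clients only see the `EventGenerator` interface, swapping generators is a one-string change at run time.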
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent develop ... More
Presented by D. SANDERS on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
At the LHC, the 40 MHz bunch-crossing rate dictates a high selectivity of the ATLAS trigger system, which has to preserve the full physics potential of the experiment in spite of a limited storage capability. The level-1 trigger, implemented in custom hardware, will reduce the initial rate to 75 kHz and is followed by the software-based level-2 and Event Filter, usually referred to as the High Level Tri ... More
Presented by Dr. M. BIGLIETTI on 30 Sep 2004 at 10:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
The CMS detector has a sophisticated four-station muon system made up of tracking chambers (Drift Tubes, Cathode Strip Chambers) and dedicated trigger chambers. Muon reconstruction software based on Kalman filter techniques has been developed; it reconstructs muons in the standalone muon system, using information from all three types of muon detectors, and links the resulting muon tracks ... More
Presented by N. NEUMEISTER on 30 Sep 2004 at 14:00
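The Kalman filter technique mentioned in this abstract can be illustrated with a toy one-dimensional example. This is a hedged sketch, not the CMS reconstruction code: the function, state, and noise values below are all hypothetical and show only the generic predict/update cycle that track fitters iterate over detector layers.

```python
# Toy 1-D Kalman filter step (hypothetical values, not CMS code).
def kalman_step(x, P, z, Q, R):
    """One predict/update cycle for a static 1-D state.

    x: state estimate, P: its variance,
    z: new measurement, Q: process noise, R: measurement noise.
    """
    # Predict: state unchanged, variance grows by process noise
    # (the analogue of multiple scattering between layers).
    P = P + Q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1.0 - K) * P
    return x, P

# Fuse a series of noisy measurements of a true value 5.0,
# starting from a deliberately vague prior.
x, P = 0.0, 1e6
for z in [5.2, 4.9, 5.1, 5.0]:
    x, P = kalman_step(x, P, z, Q=0.01, R=0.1)
```

Each measurement tightens the estimate: the variance P shrinks and x converges toward the true value.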
Type: oral presentation Session: Plenary
Track: Plenary Sessions
The Architectural Principles of the Internet have dominated the past decade. Orthogonal to the telecommunications industry principles, they dramatically changed the networking landscape because they relied on iconoclastic ideas. First, the Internet end-to-end principle, which stipulates that the network should intervene minimally on the end-to-end traffic, pushing the complexity to the end ... More
Presented by F. FLUCKIGER on 30 Sep 2004 at 12:00
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
Management of a large site network such as the FNAL LAN presents many technical and organizational challenges. This highly dynamic network consists of around ten thousand nodes. The nature of the activities FNAL is involved in, and its computing policy, require that the network remain as open as reasonably possible, both in terms of connectivity to outside networks and with respect to ... More
Presented by P. DEMAR on 27 Sep 2004 at 16:30
Type: oral presentation Session: Wide Area Networking
Track: Track 7 - Wide Area Networking
Wide area networks of sufficient, and rapidly increasing end-to-end capability are vital for every phase of high energy physicists' work. Our bandwidth usage, and the typical capacity of the major national backbones and intercontinental links used by our field have progressed by a factor of more than 1000 over the past decade, and the outlook is for a similar increase over the next decade ... More
Presented by H. NEWMAN on 30 Sep 2004 at 14:00
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
At CHEP03 we introduced "Physics Analysis eXpert" (PAX), a C++ toolkit for advanced physics analyses in High Energy Physics (HEP) experiments. PAX introduces a new level of abstraction beyond detector reconstruction and provides a general, persistent container model for HEP events. Physics objects such as four-vectors, vertices and collisions can easily be stored, accessed and manipulated. Bookk ... More
Presented by A. SCHMIDT on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 1
Track: Track 6 - Computer Fabrics
The Belle experiment has accumulated an integrated luminosity of more than 240 fb-1 so far, and the daily logged luminosity now exceeds 800 pb-1. These numbers correspond to more than 1 PB of raw and processed data stored on tape, and an accumulation of raw data at the rate of 1 TB/day. To meet these storage demands, a new cost-effective, compact hierarchical mass storage system has been co ... More
Presented by N. KATAYAMA on 28 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
The Belle experiment has accumulated an integrated luminosity of more than 240 fb-1 so far, and the daily logged luminosity has exceeded 800 pb-1. This requires a more efficient and reliable way of event processing. To meet this requirement, a new offline processing scheme has been constructed, based upon techniques employed for the Belle online reconstruction farm. Event processing is performed at ... More
Presented by I. ADACHI on 29 Sep 2004 at 10:00
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
The Alice High Level Trigger (HLT) is foreseen to consist of a cluster of 400 to 500 dual-SMP PCs at the start-up of the experiment. Its input data rate can be up to 25 GB/s. This has to be reduced to at most 1.2 GB/s through event selection, filtering, and data compression before the data is sent to the DAQ. For these processing purposes, the data is passed through the cluster in several ... More
Presented by T.M. STEINBECK on 27 Sep 2004 at 14:00
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
Twisted trapezoids are important components in the LAr end-cap calorimeter of the ATLAS detector. A similar solid, the so-called twisted tubs, consists of two end planes, inner and outer hyperboloidal surfaces, and twisted surfaces, and is an indispensable component for cylindrical drift chambers (see K. Hoshina et al., Computer Physics Communications 153 (2003) 373-391). In Geant3 exists ... More
Presented by O. LINK on 30 Sep 2004 at 10:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
A full slice of the barrel detector of the ATLAS experiment at the LHC is being tested this year with beams of pions, muons, electrons and photons in the energy range 1-300 GeV in the H8 area of the CERN SPS. It is a challenging exercise since, for the first time, the complete software suite developed for the full ATLAS experiment has been extended for use with real detector data, including ... More
Presented by A. FARILLA on 30 Sep 2004 at 16:30
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
While there are differences among the LHC experiments in their views of the role of databases and their deployment, there is relatively widespread agreement on a number of principles: 1. Physics codes will need access to database-resident data. The need for database access is not confined to middleware and services: physics-related data will reside in databases. 2. Database-resi ... More
Presented by Dirk DUELLMANN on 27 Sep 2004 at 15:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
The scope of this work is the study of the scalability limits of a Certification Authority (CA) operating in large-scale Grid environments. The operation of the Certification Authority is analyzed in terms of the rate of incoming requests, the complexity of authentication procedures, LCG security restrictions and other limiting factors. It is shown that the standard CA operational model ... More
Presented by E. BERDNIKOV on 29 Sep 2004 at 10:00
Type: oral presentation Session: Wide Area Networking
Track: Track 7 - Wide Area Networking
The problem of finding the best match between jobs and computing resources is critical for efficient workload distribution in Grids. Very often jobs are preferably run on the Computing Elements (CEs) that can retrieve a copy of the input files from a local Storage Element (SE). This requires that multiple file copies be generated and managed by a data replication system. We propos ... More
Presented by E. RONCHIERI on 30 Sep 2004 at 16:30
Type: poster Session: Poster Session 1
Track: Track 1 - Online Computing
The PHENIX experiment consists of many different detectors and detector types, each one with its own needs concerning the monitoring of data quality and calibration. To ease the task of the shift crew in monitoring the performance and status of each subsystem in PHENIX, we developed a general client-server based framework which delivers events at a rate in excess of 100 Hz. This ... More
Presented by Martin PURSCHKE on 28 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
OpenPAW is for people who definitely do not want to quit the PAW command prompt, but nevertheless seek an implementation based on more modern technologies. We shall present the OpenScientist/Lab/opaw program that offers a PAW command prompt by using the OpenScientist tools (C++, Inventor for graphics, Rio for the I/O, OnX for the GUI, etc.). The OpenScientist/Lab packa ... More
Presented by G B. BARRAND on 30 Sep 2004 at 10:00
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
We want to present the status of this project. After briefly recalling the basic choices around GUI, visualization and scripting, we would like to describe what has been done in order to have an AIDA-3.2.1 compliant system, to visualize Geant4 data (G4Lab module), to visualize ROOT data (Mangrove module), to have a hippodraw module, and what has been done in order to run on MacOSX by ... More
Presented by G B. BARRAND on 27 Sep 2004 at 16:30
Type: oral presentation Session: Plenary
Track: Plenary Sessions
In September 2003 the first LCG-1 service was put into production at most of the large Tier 1 sites and was quickly expanded up to 30 Tier 1 and Tier 2 sites by the end of the year. Several software upgrades were made and the LCG-2 service was put into production in time for the experiment data challenges that began in February 2004 and continued for several months. In particular LCG-2 i ... More
Presented by I. BIRD on 28 Sep 2004 at 09:00
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
This paper discusses the challenges in maintaining a stable Managed Storage Service for users built upon dynamic underlying disk and tape layers. Early in 2004 the tools and techniques used to manage disk, tape and stage servers were refreshed by adopting the QUATTOR tool set. This has markedly increased the coherency and efficiency of the configuration of data servers. The LEMON moni ... More
Presented by T. SMITH on 29 Sep 2004 at 14:00
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
Bitmap indices have gained wide acceptance in data warehouse applications handling large amounts of read-only data. High-dimensional ad hoc queries can be performed efficiently by utilizing bitmap indices, especially if the queries cover only a subset of the attributes stored in the database. Such access patterns are common in HEP analysis. Bitmap indices have been implemented by sev ... More
Presented by Vincenzo INNOCENTE on 30 Sep 2004 at 15:00
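The query pattern described in this abstract can be sketched in a few lines of Python (an illustrative toy, not any of the implementations compared in the talk): each distinct attribute value gets a bitmask of the rows holding it, and a multi-attribute ad hoc query reduces to bitwise AND of the relevant bitmaps, touching only the queried attributes.

```python
def build_bitmap_index(values):
    """Map each distinct value to a bitmask of the rows holding it."""
    index = {}
    for row, v in enumerate(values):
        index[v] = index.get(v, 0) | (1 << row)
    return index

def rows_matching(mask):
    """Decode a bitmask back into row numbers."""
    return [r for r in range(mask.bit_length()) if mask >> r & 1]

# Toy event table: two attributes over six "events".
ntrack = [2, 5, 2, 7, 5, 2]
trigger = ["mu", "e", "mu", "mu", "e", "e"]

idx_ntrack = build_bitmap_index(ntrack)
idx_trigger = build_bitmap_index(trigger)

# Ad hoc query over a subset of attributes: ntrack == 2 AND trigger == "mu"
hits = idx_ntrack[2] & idx_trigger["mu"]
print(rows_matching(hits))  # [0, 2]
```

Real bitmap-index libraries add compression and binning on top of this idea, but the bitwise-AND query plan is the same.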
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
In large-scale Grids, the replication of files to different sites is an important data management mechanism which can reduce access latencies and improve the usage of resources such as network bandwidth, storage and computing power. In the search for an optimal data replication strategy, the Grid simulator OptorSim was developed as part of the European DataGrid project. Simulations of variou ... More
Presented by C. NICHOLSON on 29 Sep 2004 at 10:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
We will summarize the recent and current activities of the Geant4 working group responsible for the standard package of electromagnetic physics. The major recent activities include a design iteration in the energy loss and multiple scattering domain providing a "process versus models" approach, and the development of the following physics models: multiple scattering, ultra-relativistic muon physic ... More
Presented by Prof. V. IVANTCHENKO on 27 Sep 2004 at 16:30
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
The LCG POOL project is now entering its third year of active development. The basic functionality of the project is provided, but some functional extensions will move into the POOL system this year. This presentation will give a summary of the main functionality provided by POOL, which is used in physics productions today. We will then present the design and implementation of the main new inter ... More
Presented by D. DUELLMANN on 29 Sep 2004 at 14:20
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
The POOL software package has been successfully integrated with the three large experiment software frameworks of ATLAS, CMS and LHCb. This presentation will summarise the experience gained during these integration efforts and will try to highlight the commonalities and the main differences between the integration approaches. In particular we’ll discuss the role of the POOL object cache, t ... More
Presented by Giacomo GOVI on 29 Sep 2004 at 14:40
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
Panoramix is an event display for LHCb. LaJoconde is an interactive environment over DaVinci, the analysis software layer for LHCb. We shall present the global technological choices behind these two pieces of software: GUI, graphics, scripting, plotting. We shall present the connection to the framework (Gaudi), and how we can integrate other tools like hippodraw. We shall present the overall capabili ... More
Presented by Dr. G B. BARRAND on 30 Sep 2004 at 15:40
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
LHC experiments have large amounts of software to build. CMS has studied ways to shorten project build times using parallel and distributed builds as well as improved ways to decide what to rebuild. We have experimented with making idle desktop and server machines easily available as a virtual build cluster using distcc and zeroconf. We have also tested variations of ccache and more tradition ... More
Presented by S. SCHMID on 30 Sep 2004 at 10:00
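The rebuild-avoidance idea behind tools like ccache, mentioned above, can be sketched as follows (a hypothetical toy, not CMS's actual build setup): compilation results are cached under a hash of the input, so recompiling an unchanged source costs only a lookup.

```python
import hashlib

# Toy compilation cache: digest of the source -> previously built artifact.
cache = {}

def compile_cached(source, compile_fn):
    """Compile `source` with `compile_fn`, reusing a cached result if the
    input is byte-identical to a previous compilation."""
    digest = hashlib.sha1(source.encode()).hexdigest()
    if digest in cache:
        return cache[digest], True       # cache hit: no recompilation
    artifact = compile_fn(source)        # cache miss: do the real work
    cache[digest] = artifact
    return artifact, False

# A stand-in "compiler" for illustration.
fake_compiler = lambda s: "obj:" + s

obj1, hit1 = compile_cached("int f(){return 1;}", fake_compiler)
obj2, hit2 = compile_cached("int f(){return 1;}", fake_compiler)
print(hit1, hit2)  # False True
```

ccache itself hashes the preprocessed translation unit plus compiler flags, which is what makes the scheme safe across header changes; distcc then distributes the remaining cache misses to idle machines.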
Type: poster Session: Poster Session 1
Track: Track 6 - Computer Fabrics
We report the results of parallelization and tests of the Parton String Model event generator at the parallel cluster of the St. Petersburg State University Telecommunication Center. Two schemes of parallelization were studied. In the first approach, a master process coordinates the work of slave processes, and gathers and analyzes data. Results of the MC calculations are saved in local files. Local files a ... More
Presented by S. NEMNYUGIN on 28 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
The report presents an analysis of the Alice Data Challenge 2004. This Data Challenge has been performed on two different distributed computing environments. The first one is the Alice Environment for distributed computing (AliEn) used standalone. Presently this environment allows ALICE physicists to obtain results on simulation, reconstruction and analysis of data in ESD format for AA a ... More
Presented by G. SHABRATOVA on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 1
Track: Track 6 - Computer Fabrics
FNAL has over 5000 PCs running either Linux or Windows software. Protecting these systems efficiently against the latest vulnerabilities that arise has prompted FNAL to take a more central approach to patching systems. We outline the lab support structure for each OS and how we have provided a central solution that works within existing support boundaries. The paper will cover how we ident ... More
Presented by J. SCHMIDT on 28 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
A common task for a reconstruction/analysis system is to be able to output different sets of events to different permanent data stores (e.g. files). This allows multiple related logical jobs to be grouped into one process and run using the same input data (read from a permanent data store and/or created from an algorithm). In our system, physicists can specify multiple output 'paths', where ... More
Presented by C. JONES on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
PATRIOT is a project that aims to provide better predictions of physics events for the high-Pt physics program of Run2 at the Tevatron collider. Central to Patriot is an enstore or mass storage repository for files describing the high-Pt physics predictions. These are typically stored as StdHep files which can be handled by CDF and D0 and run through detector and triggering simulatio ... More
Presented by S. MRENNA on 29 Sep 2004 at 10:00
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
With the development of Linux and the improvement of PC performance, PC clusters used as high-performance computing systems are becoming popular. The performance of the I/O subsystem and the cluster file system is critical to a high-performance computing system. In this work the basic characteristics of cluster file systems and their performance are reviewed. The performance of four distributed cl ... More
Presented by Y. CHENG on 27 Sep 2004 at 17:30
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
The D0 experiment at Fermilab's Tevatron will record several petabytes of data over the next five years in pursuing the goals of understanding nature and searching for the origin of mass. Computing resources required to analyze these data far exceed the capabilities of any one institution. Moreover, the widely scattered geographical distribution of collaborators poses further serious dif ... More
Presented by B. QUINN on 29 Sep 2004 at 10:00
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
The ATLAS Trigger and DAQ system is designed to use the Region of Interest (RoI) mechanism to reduce the initial Level 1 trigger rate of 100 kHz down to an Event Building rate of about 3.3 kHz. The DataFlow component of the ATLAS TDAQ system is responsible for reading the detector-specific electronics via 1600 point-to-point readout links, and for the collection and provision of RoI to the Level ... More
Presented by G. UNEL on 27 Sep 2004 at 18:10
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
This talk describes the various stages of ATLAS Data Challenge 2 (DC2) in what concerns usage of resources deployed via NorduGrid's Advanced Resource Connector (ARC). It also describes the integration of these resources with the ATLAS production system using the Dulcinea executor. ATLAS Data Challenge 2 (DC2), run in 2004, was designed to be a step forward in the distributed data processin ... More
on 29 Sep 2004 at 17:10
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
There are two kinds of analysis objects with respect to their persistency requirements: * Objects which need direct access to the persistency service only for their I/O operations (read/write/update/...): histograms, clouds, profiles, ... All persistency requirements for these objects can be implemented by standard Transient-Persistent Separation techniques like JDO, Serialization, etc ... More
Presented by J. HRIVNAC on 30 Sep 2004 at 10:00
Type: oral presentation Session: Plenary
Track: Plenary Sessions
The LHC software will be confronted with unprecedented challenges as soon as the LHC turns on. We summarize the main software requirements coming from the LHC detectors, triggers and physics, and we discuss several examples of software components developed by the experiments and the LCG project (simulation, reconstruction, etc.), their validation, and their adequacy for LHC phy ... More
Presented by Fabiola GIANOTTI on 30 Sep 2004 at 09:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
In the framework of the LCG Simulation Physics Validation Project, we present comparison studies between the GEANT4 and FLUKA shower packages and LHC sub-detector test-beam data. Emphasis is given to the response of LHC calorimeters to electrons, photons, muons and pions. Results of "simple-benchmark" studies, where the above simulation packages are compared to data from nuclear facilities ... More
Presented by Alberto RIBON on 27 Sep 2004 at 15:40
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
The Pixel Detector is the innermost detector in the tracking system of the Compact Muon Solenoid (CMS) experiment. It provides the most precise measurements, not only supporting the full track reconstruction but also allowing a standalone reconstruction that is especially useful for online event selection in the High-Level Trigger (HLT). The performance of the Pixel Detector is given. The HLT algorith ... More
Presented by Dr. S. CUCCIARELLI on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 1
Track: Track 1 - Online Computing
During the runtime of any experiment, a central monitoring system that detects problems as soon as they appear has an essential role. In a large experiment, like Atlas, the online data acquisition system is distributed across the nodes of large farms, each of them running several processes that analyse a fraction of the events. In this architecture, it is necessary to have a central process t ... More
Presented by P. CONDE MUINO on 28 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
The Linux operating system has become the platform of choice in the HEP community. However, the migration process from another operating system to Linux can be a tremendous effort for developers and system administrators. The ultimate goal of such a transition is to maximize agreement between the final results of identical calculations on the different platforms. Apart from the fine-tuning of the ... More
Presented by V. KUZNETSOV on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
Our goal is twofold. On the one hand we wanted to address the interest of CMS users in having the LCG physics analysis environment on Solaris. On the other hand we wanted to assess the difficulty of porting code written on Linux, without particular attention to portability, to other Unix implementations. Our initial assumption was that the difficulty would be manageable even for a very small team. This ... More
Presented by I. REGUERO, J A. LOPEZ-PEREZ on 30 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
Resource management and scheduling of distributed, data-driven applications in a Grid environment are challenging problems. Although significant results have been achieved in the past few years, the development and proper deployment of generic, reliable, standard components present issues that still need to be completely solved. Domains of interest include workload management, resource discove ... More
Presented by M. SGARAVATTO on 30 Sep 2004 at 15:20
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
Various experimental configurations, such as some gaseous detectors, require a high-precision simulation of electromagnetic physics processes, accounting not only for the primary interactions of particles with matter, but also capable of describing the secondary effects deriving from the de-excitation of atoms where primary collisions have created vacancies. The Gea ... More
Presented by M.G. PIA on 27 Sep 2004 at 16:50
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
The Geant4 Toolkit provides an ample set of alternative and complementary physics models to handle the electromagnetic interactions of leptons, photons, charged hadrons and ions. Because of the critical role often played by simulation in the experimental design and physics analysis, an accurate validation of the physics models implemented in Geant4 is essential, down to the quantitative ... More
Presented by M.G. PIA on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
Grid computing provides key infrastructure for distributed problem solving in dynamic virtual organizations. However, Grids are still the domain of a few highly trained programmers with expertise in networking, high-performance computing, and operating systems. One of the big issues in the full-scale usage of a grid is the matching of the resource requirements of a job submission to avai ... More
Presented by A. ANJUM on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
We describe the production experience gained from implementing and exclusively using the Storage Resource Broker (SRB), developed at the San Diego Supercomputer Center, to distribute the BaBar experiment's production event data, stored in ROOT files, from the experiment center at SLAC, California, USA to a Tier A computing center at CC-IN2P3, Lyon, France. In addition we outline how the system can be re ... More
Presented by A. HASAN on 29 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
One of the goals of CMS Data Challenge in March-April 2004 (DC04) was to run reconstruction for sustained period at 25 Hz input rate with distribution of the produced data to CMS T1 centers for further analysis. The reconstruction was run at the T0 using CMS production software, of which the main components are RefDB (CMS Monte Carlo 'Reference Database' with Web interface) and McRunjob (a ... More
Presented by J. ANDREEVA on 27 Sep 2004 at 14:20
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
For the BaBar Computing Group. BaBar has recently moved away from using Objectivity/DB for its event store towards a ROOT-based event store. Data in the new format is produced at about 20 institutions worldwide as well as at SLAC. Among the new challenges are the organization of data export from remote institutions, archival at SLAC, and making the data visible to users for analysis and impo ... More
on 29 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
The STAR experiment utilizes two major computing facilities for its data processing needs - the RCF at Brookhaven and the PDSF at LBNL/NERSC. The sharing of data between these facilities utilizes data grid services for file replication, and the deployment of these services was accomplished in conjunction with the Particle Physics Data Grid (PPDG). For STAR's 2004 run it will be necessary ... More
Presented by E. HJORT on 27 Sep 2004 at 16:50
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
The BaBar experiment has been taking data since 1999. In 2001 the computing group started to evaluate the possibility of evolving toward a distributed computing model in a Grid environment. In 2003, a new computing model, described in other talks, was implemented, and ROOT I/O is now being used as the Event Store. We implemented a system, based on the LHC Computing Grid (LCG) tools, to submit ful ... More
Presented by D. ANDREOTTI on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
A software bus, just like its hardware equivalent, allows for the discovery, installation, configuration, loading, unloading, and run-time replacement of software components, as well as the channeling of inter-component communication. Python, a popular open-source programming language, encourages a modular design in software written in it, but it offers little or no component functionality. Howeve ... More
Presented by W. LAVRIJSEN on 30 Sep 2004 at 10:00
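As a minimal illustration of the run-time replacement this abstract mentions (this uses only plain Python import machinery, not the authors' package): a component registered under a name in sys.modules can be swapped for a new implementation while the process keeps running, and later lookups see the replacement.

```python
import importlib
import sys
import types

def load_component(name, source):
    """Create, or hot-replace, a named component module from source text."""
    mod = types.ModuleType(name)
    exec(source, mod.__dict__)
    sys.modules[name] = mod   # subsequent imports resolve to this object
    return mod

# Hypothetical component name and API, invented for illustration.
load_component("tracker", "def fit(): return 'v1'")
print(importlib.import_module("tracker").fit())   # v1

# Run-time replacement: same bus "address", new implementation.
load_component("tracker", "def fit(): return 'v2'")
print(importlib.import_module("tracker").fit())   # v2
```

A real software bus adds discovery, configuration and communication channels on top; the point here is only that Python's module table already gives dynamic load/replace for free.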
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
Bender, the Python-based physics analysis application for LHCb, combines the best features of the underlying Gaudi C++ software architecture with the flexibility of the Python scripting language, and provides end-users with a friendly, physics-analysis-oriented environment. It is based, on the one hand, on the generic Python bindings for the Gaudi framework, called GaudiPython, and on the other hand on an effic ... More
Presented by Dr. P. MATO on 30 Sep 2004 at 17:10
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
Software Quality Assurance is an integral part of the software development process of the LCG Project and includes several activities such as automatic testing, test coverage reports, static software metrics reports, a bug tracker, usage statistics and compliance with build, code and release policies. As part of the QA activity all levels of software testing should be run as a ... More
Presented by M. GALLAS on 30 Sep 2004 at 17:50
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
The RDBC (ROOT DataBase Connectivity) library is a C++ implementation of the Java Database Connectivity Application Programming Interface. It provides a DBMS-independent interface to relational databases from ROOT as well as a generic SQL database access framework. RDBC also extends the ROOT TSQL abstract interface. Currently it is used in two large experiments: - in Minos as inter ... More
Presented by V. ONUCHIN on 30 Sep 2004 at 10:00
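For readers unfamiliar with the JDBC model, Python's DB-API plays the same role that RDBC plays for ROOT: one calling convention (connect/cursor/execute/fetch) regardless of which relational backend sits underneath. A sketch with sqlite3 as a stand-in backend (the table and column names are invented for illustration):

```python
import sqlite3

# The same connect/cursor/execute/fetch pattern would work against any
# DB-API driver; only the connect() call is backend-specific.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE runs (run INTEGER, events INTEGER)")
cur.executemany("INSERT INTO runs VALUES (?, ?)", [(1, 100), (2, 250)])
conn.commit()

cur.execute("SELECT SUM(events) FROM runs")
print(cur.fetchone()[0])  # 350
```

This backend independence is exactly what lets an experiment switch its bookkeeping database without touching the analysis-side code.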
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
The ROOT geometry package is a tool designed for building, browsing, tracking and visualizing a detector geometry. The code is independent of external simulation MCs, and therefore does not contain any constraints related to physics. However, the package defines a number of hooks for tracking, such as media, materials, magnetic field or track state flags, in order to allow inter ... More
on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
The GUI is a very important component of the ROOT framework. Its main purpose is to improve usability and the end-user experience. In this paper, we present two main projects in this direction: the ROOT graphics editor and the ROOT GUI builder. The ROOT graphics editor is a recent addition to the framework. It provides a state-of-the-art, intuitive way to create or edit objects in ... More
Presented by I. ANTCHEVA on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 1
Track: Track 1 - Online Computing
In this paper we examine the performance of the raw Ethernet protocol in deterministic, low-cost, real-time communication. Very few applications have been reported until now, and they focus on the use of the TCP and UDP protocols, which however add a considerable overhead to the communication and reduce the useful bandwidth. We show how low-level Ethernet access can be used for peer-to-peer ... More
Presented by A. ELEUTERI on 28 Sep 2004 at 10:00
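The overhead argument above is easy to see in numbers: a raw Ethernet frame carries only a 14-byte header (destination and source MAC addresses plus an EtherType) ahead of the payload, versus the additional IP and TCP/UDP headers of the higher-level protocols. A sketch of building such a frame (addresses here are arbitrary; actually transmitting it would need a raw socket and privileges):

```python
import struct

def ethernet_frame(dst, src, ethertype, payload):
    """Prepend the 14-byte Ethernet II header to a payload."""
    header = struct.pack("!6s6sH", dst, src, ethertype)  # 6+6+2 bytes
    return header + payload

frame = ethernet_frame(b"\x01\x02\x03\x04\x05\x06",   # destination MAC
                       b"\xaa\xbb\xcc\xdd\xee\xff",   # source MAC
                       0x88B5,        # IEEE local experimental EtherType
                       b"trigger data")
print(len(frame))  # 14-byte header + 12-byte payload = 26
```

Compare this with the 20-byte IP plus 20-byte TCP headers that every TCP segment adds on top of the same Ethernet header.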
Type: oral presentation Session: Wide Area Networking
Track: Track 7 - Wide Area Networking
To achieve a stable, high-performance network flow in high bandwidth-delay-product networks, it is important that the total bandwidth of the multiple streams does not exceed the network bandwidth. Software control of the bandwidth of each stream sometimes exceeds the specified bandwidth. We propose a hardware control technique for the total bandwidth of multiple streams with high ac ... More
Presented by Dr. Y. KODAMA on 30 Sep 2004 at 15:20
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
Since version 3.05/02, the ROOT I/O System has gone through significant enhancements. In particular, the STL container I/O has been upgraded to support splitting, reading without existing libraries and using directly from TTreeFormula (TTree queries). This upgrade to the I/O system is such that it can be easily extended (even by the users) to support the splitting and querying of almost ... More
Presented by P. CANAL on 29 Sep 2004 at 15:00
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
Since its introduction in 1999, CMT has become a production tool in many large software projects for physics research (ATLAS, LHCb, Virgo, Auger, Planck). Although its basic concepts have remained unchanged since the beginning, proving their viability, it is still improving and increasing its coverage of configuration management mechanisms. Two important evolutions have recently been introdu ... More
Presented by C. ARNAULT on 30 Sep 2004 at 10:00
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
Python is a flexible, powerful, high-level language with excellent interactive and introspective capabilities and a very clean syntax. As such it can be a very effective tool for driving physics analysis. Python is designed to be extensible in low-level C-like languages, and its use as a scientific steering language has become quite widespread. To this end, existing and custom-written ... More
Presented by W. LAVRIJSEN on 27 Sep 2004 at 15:00
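The low-level extensibility referred to above can be tasted with ctypes from the Python standard library, which calls into compiled C code directly (a trivial example on a POSIX system; the talk's subject, automatically generated C++ bindings, goes much further than this):

```python
import ctypes

# On POSIX, CDLL(None) opens the running process itself, exposing the
# C library symbols Python is already linked against.
libc = ctypes.CDLL(None)
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

# Python steering, C execution: the call crosses into compiled code.
print(libc.strlen(b"physics"))  # 7
```

Declaring argtypes/restype is the manual step that binding generators automate, and for C++ (overloads, templates, object lifetimes) that automation is the hard part.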
Type: poster Session: Poster Session 1
Track: Track 7 - Wide Area Networking
The CLEO III data acquisition system was designed from the beginning, in the late 1990s, to allow remote operation and monitoring of the experiment. Since changes in the coordination and operation of the CLEO experiment two years ago enabled us to separate the tasks of the shift crew into an operational and a physics task, the existing remote capabilities have been revisited. In 2002/03 CLEO started to deplo ... More
on 28 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
The ATLAS experiment uses a tiered data Grid architecture that enables possibly overlapping subsets, or replicas, of the original set to be located across the ATLAS collaboration. The full set of experiment data is located at a single Tier 0 site, and then subsets of the data are located at national Tier 1 sites, smaller subsets at smaller regional Tier 2 sites, and so on. In order to underst ... More
on 30 Sep 2004 at 14:00
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
The LHCb experiment performed its latest Data Challenge (DC) in May-July 2004. The main goal was to demonstrate the ability of the LHCb grid system to carry out massive production and efficient distributed analysis of the simulation data. The LHCb production system called DIRAC provided all the necessary services for the DC: Production and Bookkeeping Databases, File catalogs, Workload and ... More
Presented by J. CLOSIER on 29 Sep 2004 at 16:30
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
Rio (for ROOT IO) is a rewriting of the file I/O system of ROOT. We shall present our strong motivations for doing this tedious work. We shall present the main choices made in the Rio implementation (thus, by opposition, what we don't like in ROOT). For example, we shall say why we believe that an I/O package is not a drawing package (no TClass::Draw); why someone should use pure ... More
Presented by G B. BARRAND on 30 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
The CMS 2004 Data Challenge (DC04) was devised to test several key aspects of the CMS Computing Model in three ways: by trying to sustain a 25 Hz reconstruction rate at the Tier-0; by distributing the reconstructed data to six Tier-1 Regional Centers (FNAL in US, FZK in Germany, Lyon in France, CNAF in Italy, PIC in Spain, RAL in UK) and handling catalogue issues; by redistributing data t ... More
on 29 Sep 2004 at 14:20
Type: oral presentation Session: Plenary
Track: Plenary Sessions
In support of the Tevatron physics program, the Run II experiments have developed computing models and hardware facilities to support data sets at the petabyte scale, currently corresponding to 500 pb-1 of data and over 2 years of production operations. The systems are complete from online data collection to user analysis, and make extensive use of central services and common solutions dev ... More
Presented by A. BOEHNLEIN on 27 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
SAMGrid is a globally distributed system for data handling and job management, developed at Fermilab for the D0 and CDF experiments in Run II. The Condor system is being developed at the University of Wisconsin for management of distributed resources, computational and otherwise. We briefly review the SAMGrid architecture and its interaction with Condor, which was presented earlier. We then p ... More
Presented by I. TEREKHOV on 29 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
SAMGrid is the shared data handling framework of the two large Fermilab Run II collider experiments: DZero and CDF. In production since 1999 at D0, and since mid-2004 at CDF, the SAMGrid framework has been adapted over time to accommodate a variety of storage solutions and configurations, as well as the differing data processing models of these two experiments. This has been very successful f ... More
Presented by R. KENNEDY on 27 Sep 2004 at 18:10
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
The SAMGrid team is in the process of implementing a monitoring and information service, which fulfills several important roles in the operation of the SAMGrid system, and will replace the first generation of monitoring tools in the current deployments. The first generation tools are in general based on text logfiles and represent solutions which are not scalable or maintainable. The role ... More
Presented by A. LYON on 29 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
A grid consists of high-end computational, storage, and network resources that, while known a priori, are dynamic with respect to activity and availability. Efficient co-scheduling of requests to use grid resources must adapt to this dynamic environment while meeting administrative policies. We discuss the necessary requirements of such a scheduler and introduce a distributed framewo ... More
Presented by R. CAVANAUGH on 30 Sep 2004 at 14:40
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
The Belle experiment has accumulated an integrated luminosity of more than 240 fb-1 so far, and the daily logged luminosity now exceeds 800 pb-1. These numbers correspond to more than 1 PB of raw and processed data stored on tape and an accumulation of raw data at the rate of 1 TB/day. The processed, compactified data, together with Monte Carlo simulation data for the final physics analyses ... More
Presented by Y. IIDA on 29 Sep 2004 at 17:30
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
Storage Resource Manager (SRM) and Grid File Access Library (GFAL) are GRID middleware components used for transparent access to Storage Elements. SRM provides a common interface (a web service) to backend systems, offering dynamic space allocation and file management. GFAL provides a mechanism whereby application software can access a file at a site without having to know which transport mechan ... More
Presented by E. SLABOSPITSKAYA on 29 Sep 2004 at 10:00
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
ScotGrid is a prototype regional computing centre formed as a collaboration between the universities of Durham, Edinburgh and Glasgow as part of the UK's national particle physics grid, GridPP. We outline the resources available at the three core sites and our optimisation efforts for our user communities. We discuss the work which has been conducted in extending the centre to embrace new pro ... More
Presented by S. THORN on 27 Sep 2004 at 15:00
Type: oral presentation Session: Grid Security
Track: Track 4 - Distributed Computing Services
In a resource-sharing environment on the grid both grid users and grid production managers call for security and data protection from unauthorized access. To secure data management several novel grid technologies were introduced in ATLAS data management. Our presentation will review new grid technologies introduced in HEP production environment for database access through the Grid Security In ... More
Presented by M. BRANCO on 29 Sep 2004 at 15:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
Analyses in high-energy physics often involve the filling of large numbers of histograms from n-tuple-like data structures, e.g. ROOT trees. Even when using an object-oriented framework like ROOT, the user code often follows a functional programming approach, where booking, application of cuts, calculation of weights and histogrammed quantities and finally the filling of the histogram ... More
Presented by Dr. J. LIST on 30 Sep 2004 at 16:50
Type: poster Session: Poster Session 1
Track: Track 6 - Computer Fabrics
The clusters using DataGrid middleware are usually installed and managed by means of an "LCFG" server. Originally developed by the University of Edinburgh and extended by DataGrid, this is a complex piece of software. It allows for automated installation and configuration of a complete grid site. However, installation of the "LCFG" server itself takes most of the time, thus hindering widespread use. ... More
Presented by A. GARCIA on 28 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
The ATLAS detector is a sophisticated multi-purpose detector with over 10 million electronics channels designed to study high-pT physics at the LHC. Due to their high multiplicity, reaching almost a hundred thousand particles per event, heavy ion collisions pose a formidable computational challenge. A set of tools has been created to realistically simulate and fully reconstruct the most diffi ... More
Presented by P. NEVSKI on 30 Sep 2004 at 10:00
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
The Level 1 and High Level triggers for the LHCb experiment are software triggers which will be implemented on a farm of about 1800 CPUs, connected to the detector read-out system by a large Gigabit Ethernet LAN with a capacity of 8 Gigabyte/s and some 500 Gigabit Ethernet links. The architecture of the readout network must be designed to maximise data throughput, control data flow, all ... More
Presented by T. SHEARS on 29 Sep 2004 at 15:40
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
This paper discusses some key points in the organization of the HARP software. In particular it describes the configuration of the packages, data and code management, testing and release procedures. Development of the HARP software is based on incremental releases with strict respect of the design structure. This poses serious challenges to the software management, which has gone through e ... More
Presented by E. TCHERNIAEV on 30 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
CMS currently uses a number of tools to transfer data which, taken together, form the basis of a heterogeneous datagrid. The range of tools used, and the directed, rather than optimised, nature of CMS's recent large-scale data challenge required the creation of a simple infrastructure that allowed a range of tools to operate in a complementary way. The system created comprises a hierarchy o ... More
Presented by T. BARRASS on 29 Sep 2004 at 15:40
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
In the context of the SPI project in the LCG Application Area, a centralized software management infrastructure has been deployed. It comprises a suite of scripts handling the building and validation of the releases of the various projects, as well as providing customized packaging of the released software. Emphasis was put on the flexibility of the packaging and distribution solution, as it should ... More
Presented by A. PFEIFFER on 30 Sep 2004 at 17:30
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
Generic programming as exemplified by the C++ standard library makes use of functions or function objects (objects that accept function syntax) to specialize generic algorithms for particular uses. Such separation improves code reuse without sacrificing efficiency. We employed this same technique in our combinatoric engine: DChain. In DChain, physicists combine lists of child particles to ... More
Presented by C. JONES on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
In the context of the LHC Computing Grid (LCG) project, the Applications Area develops and maintains that part of the physics applications software and associated infrastructure that is shared among the LHC experiments. The Physicist Interface (PI) project of the LCG Application Area encompasses the interfaces and tools by which physicists will directly use the software. In collaboration wi ... More
Presented by A. PFEIFFER on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 1
Track: Track 1 - Online Computing
ATLAS is a particle detector being built at CERN in Geneva. The muon detection system is made up, among other things, of 600 chambers measuring 2 to 6 m2 in area and 30 cm in thickness. The chambers' position must be known with an accuracy of +/-30 μm for translations and +/-100 μrad for rotations, over a range of +/-5 mm and +/-5 mrad. In order to fulfill these requirements, we have designed ... More
Presented by V. GAUTARD on 28 Sep 2004 at 10:00
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
Within a Grid the possibility of managing storage space is fundamental, in particular, before and during application execution. On the other hand, the increasing availability of highly performant computing resources raises the need for fast and efficient I/O operations and drives the development of parallel distributed file systems able to satisfy these needs granting access to distributed sto ... More
Presented by L. MAGNONI on 29 Sep 2004 at 17:50
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
Storage Resource Managers (SRMs) are middleware components whose function is to provide dynamic space allocation and file management on shared storage components on the Grid. SRMs support protocol negotiation and reliable replication mechanisms. The SRM standard allows independent institutions to implement their own SRMs, thus allowing uniform access to heterogeneous storage elements. S ... More
Presented by T. PERELMUTOV on 29 Sep 2004 at 17:10
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
Providing Grid applications with effective access to large volumes of data residing on a multitude of storage systems with very different characteristics prompted the introduction of storage resource managers (SRM). Their purpose is to provide consistent and efficient wide-area access to storage resources unconstrained by their particular implementation (tape, large disk arrays, dispersed ... More
Presented by Ofer RIND on 27 Sep 2004 at 17:10
Type: oral presentation Session: Plenary
Track: Plenary Sessions
Presented by Tim SMITH on 1 Oct 2004 at 11:05
Type: oral presentation Session: Plenary
Track: Plenary Sessions
Presented by Philippe CANAL on 1 Oct 2004 at 09:20
Type: oral presentation Session: Plenary
Track: Plenary Sessions
Presented by Massimo LAMANNA on 1 Oct 2004 at 09:45
Type: oral presentation Session: Plenary
Track: Plenary Sessions
Presented by Douglas OLSON on 1 Oct 2004 at 10:40
Type: oral presentation Session: Plenary
Track: Plenary Sessions
Presented by Stephen GOWDY on 1 Oct 2004 at 08:55
Type: oral presentation Session: Plenary
Track: Plenary Sessions
Presented by Dr. Pierre VANDE VYVRE on 1 Oct 2004 at 08:30
Type: oral presentation Session: Plenary
Track: Plenary Sessions
Presented by Peter CLARKE on 1 Oct 2004 at 11:30
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
The Parallel ROOT Facility, PROOF, enables a physicist to analyze and understand very large data sets on an interactive time scale. It makes use of the inherent parallelism in event data and implements an architecture that optimizes I/O and CPU utilization in heterogeneous clusters with distributed storage. Scaling to many hundreds of servers is essential to process tens or hundreds of gigaby ... More
Presented by M. BALLINTIJN on 30 Sep 2004 at 15:00
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
We describe the process for handling software builds and releases for the Workload Management package of the DataGrid project. The software development in the project was shared among nine contractual partners, in seven different countries, and was organized in work-packages covering different areas. In this paper, we discuss how a combination of the Concurrent Versions System, GNU autotools a ... More
Presented by E. RONCHIERI on 30 Sep 2004 at 17:10
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
Computer simulations play a crucial role in both the design and operation of particle accelerators. General tools for modeling single-particle accelerator dynamics have been in wide use for many years. Multi-particle dynamics are much more computationally demanding than single-particle dynamics, requiring supercomputers or parallel clusters of PCs. Because of this, simulations of multi- pa ... More
Presented by Dr. P. SPENTZOURIS on 27 Sep 2004 at 18:10
Type: oral presentation Session: Wide Area Networking
Track: Track 7 - Wide Area Networking
We have measured the performance of data transfer between CERN and our laboratory, ICEPP, at the University of Tokyo in Japan. ICEPP will be one of the so-called regional centers for handling the data from the ATLAS experiment, which will start data taking in 2007. Petabytes of data are expected to be generated by the experiment each year. It is therefore essential to achieve a ... More
Presented by Dr. J. TANAKA on 30 Sep 2004 at 15:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
The Athena software framework for event reconstruction in ATLAS will be employed to analyse the data from the 2004 combined test beam. In this combined test beam, a slice of the ATLAS detector is operated and read out under conditions similar to future LHC running, thus providing a test-bed for the complete reconstruction chain. First results for the ATLAS Inner Detector will be presented. ... More
Presented by W. LIEBIG on 30 Sep 2004 at 15:00
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
The talk presents the experience gathered during the administration of the testbed (~100 PCs and 15+ switches) for the ATLAS Experiment at CERN. It covers the techniques used to resolve HW/SW conflicts and network-related problems, automatic installation and configuration of the cluster nodes, as well as system/service monitoring in the heterogeneous, dynamically changing cluster ... More
Presented by M. ZUREK on 27 Sep 2004 at 17:30
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
To distribute computing for CDF (the Collider Detector at Fermilab), a system managing local compute and storage resources is needed. For this purpose CDF will use the DCAF (Decentralized CDF Analysis Farms) system, which is already in use at Fermilab. DCAF has to work with the data handling system SAM (Sequential Access to data via Metadata). However, both DCAF and SAM are mature systems which have no ... More
Presented by V. BARTSCH on 29 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
During the first half of 2004 the ALICE experiment performed a large distributed computing exercise with two major objectives: to test the ALICE computing model, including distributed analysis, and to provide a data sample for a refinement of the ALICE jet physics Monte Carlo studies. Simulation, reconstruction and analysis of several hundred thousand events were performed, using the heter ... More
Presented by A. PETERS on 29 Sep 2004 at 15:40
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
The Experiment Control System (ECS) is the top level of control of the ALICE experiment. Running an experiment implies performing a set of activities on the online systems that control the operation of the detectors. In ALICE, online systems are the Trigger, the Detector Control Systems (DCS), the Data-Acquisition System (DAQ) and the High-Level Trigger (HLT). The ECS provides a framewor ... More
Presented by F. CARENA on 29 Sep 2004 at 16:50
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
The ALICE experiment at the LHC will implement a High Level Trigger System, in which the information from all major detectors is combined, including the TPC, TRD, DIMUON, ITS, etc. The largest computing challenge is posed by the TPC, which requires real-time pattern recognition. The main task is to reconstruct the tracks in the TPC, and in a final stage to combine the tracking information from all d ... More
Presented by M. RICHTER on 29 Sep 2004 at 14:20
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
The ARDA project was started in April 2004 to support the four LHC experiments (ALICE, ATLAS, CMS and LHCb) in the implementation of individual production and analysis environments based on the EGEE middleware. The main goal of the project is to allow a fast feedback between the experiment and the middleware development teams via the construction and the usage of end-to-end prototypes al ... More
Presented by Julia ANDREEVA on 30 Sep 2004 at 14:40
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
The ATLAS Computing Model is under continuous active development. Previous exercises focussed on the Tier-0/Tier-1 interactions, with an emphasis on the resource implications and only a high-level view of the data and workflow. The work presented here considerably revises the resource implications, and attempts to describe in some detail the data and control flow from the High Level Trig ... More
Presented by R. JONES on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 1
Track: Track 1 - Online Computing
The 40 MHz collision rate at the LHC produces ~25 interactions per bunch crossing within the ATLAS detector, resulting in terabytes of data per second to be handled by the detector electronics and the trigger and DAQ system. A Level 1 trigger system based on custom designed and built electronics will reduce the event rate to 100 kHz. The DAQ system is responsible for the readout of the det ... More
Presented by G. UNEL on 28 Sep 2004 at 10:00
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
The ALICE collaboration at the LHC has been developing an OO offline framework, written entirely in C++, since 1998. In 2001 a GRID system (AliEn - ALICE Environment) was added and successfully integrated with ROOT and the offline framework. The resulting combination allows ALICE to do most of the design of the detector and test the validity of its computing model by performing large scale Data Challeng ... More
Presented by F. CARMINATI on 27 Sep 2004 at 17:10
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
The architecture and performance of the ZEUS Global Track Trigger (GTT) are described. Data from the ZEUS silicon Micro Vertex Detector's HELIX readout chips, corresponding to 200k channels, are digitized by 3 crates of ADCs, and PowerPC VME board computers push cluster data for second level trigger processing and strip data for event building via Fast and Gigabit Ethernet network connections. A ... More
Presented by M. SUTTON on 27 Sep 2004 at 14:20
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
Athena is the Atlas Control Framework, based on the common Gaudi architecture, originally developed by LHCb. In 2004 two major production efforts, the Data Challenge 2 and the Combined Test-beam reconstruction and analysis were structured as Athena applications. To support the production work we have added new features to both Athena and Gaudi: an "Interval of Validity" service to manage time- ... More
Presented by P. CALAFIURA on 27 Sep 2004 at 16:50
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
We describe the philosophy and design of Atlantis, an event visualisation program for the ATLAS experiment at CERN. Written in Java, it employs the Swing API to provide an easily configurable Graphical User Interface. Atlantis implements a collection of intuitive, data-orientated 2D projections, which enable the user to quickly understand and visually investigate complete ATLAS events. Even ... More
Presented by J. DROHAN on 30 Sep 2004 at 15:20
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
The new BaBar bookkeeping system comes with tools to directly support data analysis tasks. This Task Manager system acts as an interface between datasets defined in the bookkeeping system, which are used as input to analyses, and the offline analysis framework. The Task Manager organizes the processing of the data by creating specific jobs to be either submitted to a batch system, or run in ... More
Presented by Douglas SMITH on 29 Sep 2004 at 10:00
Type: oral presentation Session: Plenary
Track: Plenary Sessions
The grand goal in neuroscience research is to understand how the interplay of structural, chemical and electrical signals in nervous tissue gives rise to behavior. Experimental advances of the past decades have given the individual neuroscientist an increasingly powerful arsenal for obtaining data, from the level of molecules to nervous systems. Scientists have begun the arduous and chall ... More
Presented by M. ELLISMAN on 28 Sep 2004 at 11:00
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
Geant4 is a toolkit for the simulation of the passage of particles through matter. Amongst its applications are hadronic calorimeters of LHC detectors and simulation of radiation environments. For these types of simulation, a good description of secondaries generated by inelastic interactions of primary nucleons and pions is particularly important. The Geant4 Binary Cascade is a hybr ... More
Presented by Dr. G. FOLGER on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
We will describe the plans and objectives of the recently funded PPARC (UK) e-science project, the Combined E-Science Data Analysis Resource for High Energy Physics (CEDAR). It will combine the strengths of the well established and widely used HEPDATA library of HEP data and the innovative JETWEB data/Monte Carlo comparison facility, built on the HZTOOL package, and exploits developing g ... More
Presented by Dr. M. WHALLEY on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 1
Track: Track 6 - Computer Fabrics
US-CMS is building up expertise at regional centers in preparation for analysis of LHC data. The User Analysis Farm (UAF) is part of the Tier 1 facility at Fermilab. The UAF is being developed to support the efforts of the Fermilab LHC Physics Center (LPC) and to enable efficient analysis of CMS data in the US. The support, infrastructure, and services to enable a local analysis community at ... More
Presented by Ian FISK on 28 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
Clarens enables distributed, secure and high-performance access to the worldwide data storage, compute, and information Grids being constructed in anticipation of the needs of the Large Hadron Collider at CERN. We report on the rapid progress in the development of a second server implementation in the Java language, the evolution of a peer-to-peer network of Clarens servers, and general impro ... More
Presented by C. STEENBERG on 29 Sep 2004 at 14:40
Type: poster Session: Poster Session 1
Track: Track 5 - Distributed Computing Systems and Experiences
The CDF Analysis Facility (CAF) has been in use since April 2002 and has successfully served 100s of users on 1000s of CPUs. The original CAF used FBSNG as a batch manager. In the current trend toward multisite deployment, FBSNG was found to be a limiting factor, so the CAF has been reimplemented to use Condor instead. Condor is a more widely used batch system and is well integrated wit ... More
on 28 Sep 2004 at 10:00
Type: poster Session: Poster Session 1
Track: Track 1 - Online Computing
The ATLAS data acquisition system uses the database to describe configurations for different types of data taking runs and different sub-detectors. Such configurations are composed of complex data objects with many inter-relations. During the DAQ system initialisation phase the configurations database is simultaneously accessed by a large number of processes. It is also required that such pro ... More
Presented by I. SOLOVIEV on 28 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
The D0 experiment relies on large scale computing systems to achieve its physics goals. As the experiment's lifetime spans multiple generations of computing hardware, it is fundamental to make projective models in order to use available resources to meet the anticipated needs. In addition, computing resources can be supplied as in-kind contributions by collaborating institutions and countries, ho ... More
Presented by A. BOEHNLEIN on 29 Sep 2004 at 10:00
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
S. Argirò (1), A. Kopmann (2), O. Martineau (2), H.-J. Mathes (2) for the Pierre Auger Collaboration; (1) INFN, Sezione Torino; (2) Forschungszentrum Karlsruhe. The Pierre Auger Observatory, currently under construction in Argentina, will investigate extensive air showers at energies above 10^18 eV. It consists of a ground array of 1600 Cherenkov water detectors and 24 fluorescence telescope ... More
Presented by H-J. MATHES on 27 Sep 2004 at 15:40
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
The DZERO Level 3 Trigger and Data Acquisition (L3DAQ) system has been running continuously since Spring 2002. DZERO is located at one of the two interaction points in the Fermilab Tevatron Collider. The L3DAQ moves front-end readout data from VME crates to a trigger processor farm. It is built upon a Cisco 6509 Ethernet switch, standard PCs, and commodity VME single board computers. We will ... More
Presented by D CHAPIN on 27 Sep 2004 at 17:10
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
The ATLAS Detector consists of several major subsystems: an inner detector composed of pixels, microstrip detectors and a transition radiation tracker; electromagnetic and hadronic calorimetry; and a muon spectrometer. Over the last year, these systems have been described in terms of a set of geometrical primitives known as GeoModel. Software components for detector description interpret struct ... More
Presented by Vakhtang TSULAIA on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 1
Track: Track 6 - Computer Fabrics
We describe our experience in building a cost efficient High Throughput Cluster (HTC) using commodity hardware and free software within a university environment. Our HTC has a modular system architecture and is designed to be upgradable. The current, second phase configuration, consists of 344 processors and 20 Tbyte of RAID storage. In order to rapidly install and upgrade software, we ha ... More
Presented by A. MARTIN on 28 Sep 2004 at 10:00
Type: oral presentation Session: Plenary
Track: Plenary Sessions
Dr Sutherland will review the evolution of computing over the past decade, focusing particularly on the development of the database and middleware from client server to Internet computing. But what are the next steps from the perspective of a software company? Dr Sutherland will discuss the development of Grid as well as the future applications revolving around collaborative working, ... More
Presented by Andrew SUTHERLAND on 29 Sep 2004 at 09:00
Type: oral presentation Session: Plenary
Track: Plenary Sessions
The global network is more than ever taking its role as the great "enabler" for many branches of science and research. Foremost amongst such science drivers is of course the LHC/LCG programme, although there are several other sectors with growing demands on the network. Common to all of these is the realisation that a straightforward over-provisioned best-efforts wide-area IP service ... More
Presented by Peter CLARKE on 30 Sep 2004 at 11:30
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
OPERA is a massive lead/emulsion target for a long-baseline neutrino oscillation search. More than 90% of the useful experimental data in OPERA will be produced by the scanning of emulsion plates with automatic microscopes. The main goal of the data processing in OPERA will be the search for, analysis and identification of primary and secondary vertices produced by neutrinos in lead-emuls ... More
Presented by Dr. V. TIOUKOV on 30 Sep 2004 at 10:00
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
The FreeHEP Java library contains a complete implementation of Root IO for Java. The library uses the "Streamer Info" embedded in files created by Root 3.x to dynamically create high performance Java proxies for Root objects, making it possible to read any Root file, including files with user defined objects. In this presentation we will discuss the status of this code, explain its imple ... More
Presented by T. JOHNSON on 29 Sep 2004 at 15:40
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
Any physicist who will analyse data from the LHC experiments will have to deal with data and computing resources which are distributed across multiple locations and with different access methods. GANGA helps the end user by tying in specifically to the solutions for a given experiment ranging from specification of data to retrieval and post-processing of produced output. For LHCb and ATLAS the ... More
on 30 Sep 2004 at 16:30
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
The GeoModel toolkit is a library of geometrical primitives that can be used to describe detector geometries. The toolkit is designed as a data layer, and especially optimized in order to be able to describe large and complex detector systems with minimum memory consumption. Some of the techniques used to minimize the memory consumption are: shared instancing with reference counting, comp ... More
Presented by V. TSULAIA on 30 Sep 2004 at 14:40
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
The Pierre Auger Observatory consists of two sites with several semi-autonomous detection systems. Each component, and in some cases each event, provides a preferred coordinate system for simulation and analysis. To avoid a proliferation of coordinate systems in the offline software of the Pierre Auger Observatory, we have developed a geometry package that allows the treatment of fundamental ... More
Presented by L. NELLEN on 30 Sep 2004 at 10:00
Type: oral presentation Session: Grid Security
Track: Track 4 - Distributed Computing Services
We describe the GridSite authorization system, developed by GridPP and the EU DataGrid project for access control in High Energy Physics grid environments with distributed virtual organizations. This system provides a general toolkit of common functions, including the evaluation of access policies (in GACL or XACML), the manipulation of digital credentials (X.509, GSI Proxies or VOMS attribut ... More
Presented by A. MCNAB on 29 Sep 2004 at 15:20
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
We present the scheme in use for online high level filtering, event reconstruction and classification in the H1 experiment at HERA since 2001. The Data Flow framework (presented at CHEP2001) will be reviewed. It is based on CORBA for all data transfer, multi-threaded C++ code to handle the data flow and synchronisation, and Fortran code for reconstruction and event selection. A control ... More
Presented by A. CAMPBELL on 29 Sep 2004 at 15:20
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
The observation of Higgs bosons predicted in supersymmetric theories will be a challenging task for the CMS experiment at the LHC, in particular for its High Level Trigger (HLT). A prototype of the High Level Trigger software to be used in the filter farm of the CMS experiment and for the filtering of Monte Carlo samples will be presented. The implemented prototype heavily uses recursive ... More
Presented by O. VAN DER AA on 29 Sep 2004 at 15:20
Type: oral presentation Session: Plenary
Track: Plenary Sessions
The Global Technology Outlook (GTO) is IBM Research’s projection of the future for information technology (IT). The GTO identifies progress and trends in key indicators such as raw computing speed, bandwidth, storage, software technology, and business modeling. These new technologies have the potential to radically transform the performance and utility of tomorrow's information processing s ... More
Presented by Dave MCQUEENEY on 29 Sep 2004 at 12:00
Type: oral presentation Session: Plenary
Track: Plenary Sessions
The talk will cover briefly the current status of the LHC Computing Grid project and will discuss the main challenges facing us as we prepare for the startup of LHC.
Presented by Les ROBERTSON on 28 Sep 2004 at 08:30
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
A web portal has been developed, in the context of the LCG/SPI project, in order to coordinate workflow and manage information in large software projects. It is a development of the GNU Savannah package and offers a range of services to every hosted project: Bug / support / patch trackers, a simple task planning system, news threads, and a download area for software releases. Features and ... More
Presented by Y. PERRIN on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
AliEn (ALICE Environment) is a GRID middleware developed and used in the context of ALICE, the CERN LHC heavy-ion experiment. In order to run Data Challenges exploiting both AliEn “native” resources and any infrastructure based on EDG-derived middleware (such as the LCG and the Italian GRID.IT), an interface system was designed and implemented; some details of a prototype were already pr ... More
Presented by S. BAGNASCO on 29 Sep 2004 at 10:00
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
The aim of the LHCb configuration database is to store all the controllable devices of the detector. The experiment’s control system (that uses PVSS) will configure, start up and monitor the detector from the information in the configuration database. The database will contain devices with their properties, connectivity and hierarchy. The ability to rapidly store and retrieve huge amounts o ... More
Presented by L. ABADIE on 29 Sep 2004 at 17:50
Type: oral presentation Session: Wide Area Networking
Track: Track 7 - Wide Area Networking
Network security at IHEP is becoming one of the most important issues of the computing environment. To protect its computing and network resources against attacks and viruses from outside the institute, security measures have been implemented. To enforce the security policy, the network infrastructure was re-configured into one intranet and two DMZ areas. New rules to control the acces ... More
Presented by Mrs. L. MA on 30 Sep 2004 at 18:10
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
As the BaBar experiment shifted its computing model to a ROOT-based framework, we undertook the development of a high-performance file server as the basis for a fault-tolerant storage environment whose ultimate goal was to minimize job failures due to server failures. Capitalizing on our five years of experience with extending Objectivity's Advanced Multithreaded Server (AMS), elements were a ... More
Presented by A. HANUSHEVSKY on 27 Sep 2004 at 16:30
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
The Pierre Auger Observatory is designed to unveil the nature and the origin of the highest energy cosmic rays. Two sites, one currently under construction in Argentina, and another pending in the Northern hemisphere, will observe extensive air showers using a hybrid detector comprising a ground array of 1600 water Cerenkov tanks overlooked by four atmospheric fluorescence detectors. Though ... More
Presented by L. NELLEN on 27 Sep 2004 at 18:10
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
The U.S. LHC Tier-1 and Tier-2 laboratories and universities are developing production Grids to support LHC applications running across a worldwide Grid computing system. Together with partners in computer science, physics grid projects and running experiments, we will build a common national production grid infrastructure which is open in its architecture, implementation and use. The OSG ... More
Presented by R. PORDES on 27 Sep 2004 at 17:10
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
The PHENIX detector consists of 14 detector subsystems. It is designed such that individual subsystems can be read out independently in parallel or as a single unit. The DAQ used to read the detector is a highly-pipelined parallel system. Because PHENIX is interested in rare physics events, the DAQ is required to have a fast trigger, deep buffering, and very high bandwidth. The PHEN ... More
Presented by D. WINTER on 27 Sep 2004 at 16:50
Type: poster Session: Poster Session 1
Track: Track 6 - Computer Fabrics
A central idea of Grid Computing is the virtualization of heterogeneous resources. To meet this challenge the Institute for Scientific Computing, IWR, has started the project CampusGrid. Its medium term goal is to provide a seamless IT environment supporting the on-site research activities in physics, bioinformatics, nanotechnology and meteorology. The environment will include all kind ... More
Presented by O. SCHNEIDER on 28 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
The ROOT linear algebra package has been invigorated. The hierarchical structure has been improved, allowing different flavors of matrices, like dense and symmetric. A fairly complete set of matrix decompositions has been added to support matrix inversions and solving linear equations. The package has been extensively compared to other algorithms for its accuracy and performance. In th ... More
Presented by R. BRUN on 30 Sep 2004 at 10:00
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
The SAMGrid Database Server encapsulates several important services, such as accessing file metadata and replica catalog, keeping track of the processing information, as well as providing the runtime support for SAMGrid station services. Recent deployment of the SAMGrid system for CDF has resulted in unification of the database schema used by CDF and D0, and the complexity of changes requi ... More
Presented by S. VESELI on 29 Sep 2004 at 18:10
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
The C++ programming language has very limited capabilities for reflection, i.e. for obtaining information about its objects at run time. In this paper a new reflection system will be presented, which allows complete introspection of C++ objects and has been developed in the context of the CERN/LCG/SEAL project in collaboration with the ROOT project. The reflection system consists of two different parts. The first part is ... More
Presented by S. ROISER on 27 Sep 2004 at 14:40
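The kind of introspection such a system provides for C++ can be sketched in Python (an illustrative toy only: Python's built-in reflection stands in for the generated dictionaries, and all names such as `ClassDescription` are hypothetical, not the SEAL/ROOT API):

```python
# Hypothetical sketch of a reflection dictionary: it describes a class's
# members and allows construction and member access by string name,
# mimicking what a C++ reflection system provides.

class ClassDescription:
    def __init__(self, name, cls, members):
        self.name = name          # fully qualified class name
        self._cls = cls           # the concrete type being described
        self.members = members    # member name -> type name

    def construct(self, *args):
        """Instantiate the described class via the dictionary."""
        return self._cls(*args)

    @staticmethod
    def get_member(obj, member):
        """Read a data member of an instance by its string name."""
        return getattr(obj, member)

class Track:
    def __init__(self, pt, eta):
        self.pt, self.eta = pt, eta

# Register an entry for Track, as a dictionary generator would.
registry = {"Track": ClassDescription("Track", Track,
                                      {"pt": "double", "eta": "double"})}

desc = registry["Track"]
t = desc.construct(2.5, -0.7)
print(desc.get_member(t, "pt"))   # -> 2.5
```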
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
This paper describes the component model that has been developed in the context of the LCG/SEAL project. This component model is an attempt to handle the increasing complexity of the current data processing applications of the LHC experiments. In addition, it should facilitate software re-use by integrating software components from LCG and non-LCG projects into the experiments' applications. ... More
Presented by R. CHYTRACEK on 27 Sep 2004 at 14:20
Type: oral presentation Session: Distributed Computing Services
Track: Track 4 - Distributed Computing Services
While many success stories can be told as a product of the Grid middleware developments, most of the existing systems relying on workflow and job execution are based on the integration of self-contained production systems interfacing with a given scheduling component or portal, or directly use the base components of the Grid middleware (globus-job-run, globus-job-submit). However, such systems usu ... More
Presented by J. LAURET on 30 Sep 2004 at 14:20
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
The Tag Collector is a web interfaced database application for release management. The tool is tightly coupled to CVS, and also to CMT, the configuration management tool. Developers can interactively select the CVS tags to be included in a build, and the complete build commands are produced automatically. Other features are provided such as verification of package CMT requirements files, and d ... More
Presented by S. ALBRAND on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
The ATLAS reconstruction software requires extrapolation to arbitrarily oriented surfaces of different types inside a non-uniform magnetic field. In addition, multiple scattering and energy loss effects along the propagated trajectories have to be taken into account. Good performance in terms of computing time is crucial due to the hit and track multiplicity in high luminosity events a ... More
Presented by A. SALZBURGER on 30 Sep 2004 at 10:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
In order for physicists to easily benefit from the different existing geometry tools used within the community, the Virtual Geometry Model (VGM) has been designed. In the VGM we introduce abstract interfaces to geometry objects and an abstract factory for geometry construction, import and export. The interfaces to geometry objects were defined to be suitable to describe "geant-like" geom ... More
Presented by I. HRIVNACOVA on 30 Sep 2004 at 14:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
The current major detector simulation programs, i.e. GEANT3, GEANT4 and FLUKA, have largely incompatible environments. This forces physicists wishing to make comparisons between the different transport Monte Carlos to develop entirely different programs. Moreover, migration from one program to another is usually very expensive, in manpower and time, for an experiment offline envir ... More
Presented by A. GHEATA on 29 Sep 2004 at 14:40
Type: oral presentation Session: Grid Security
Track: Track 4 - Distributed Computing Services
Current grid development projects are being designed such that they require end users to be authenticated under the auspices of a "recognized" organization, called a Virtual Organization (VO). A VO must establish resource-usage agreements with grid resource providers. The VO is responsible for authorizing its members for grid computing privileges. The individual sites and resources typic ... More
Presented by Ian FISK on 29 Sep 2004 at 17:30
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
A common LCG architecture for the Conditions Database for time-evolving data makes it possible to separate the interval-of-validity (IOV) information from the conditions data payload. The two approaches can be beneficial in different cases, and the separation presents challenges for efficient knowledge discovery, navigation and data visualization. In our paper we describe the conditions ... More
Presented by D. KLOSE on 30 Sep 2004 at 10:00
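The separation of IOV bookkeeping from the conditions payload can be sketched with a small Python toy (hedged illustration only; names such as `ConditionsFolder` are hypothetical, not the LCG conditions API):

```python
# Toy conditions folder: the IOV index (sorted start times) is kept
# separate from the payload, which is referenced only by an identifier.
import bisect

class ConditionsFolder:
    def __init__(self):
        self._starts = []        # sorted IOV start times
        self._payload_ids = []   # parallel list of payload references

    def store(self, since, payload_id):
        """Register a payload valid from time 'since' onwards."""
        i = bisect.bisect_left(self._starts, since)
        self._starts.insert(i, since)
        self._payload_ids.insert(i, payload_id)

    def lookup(self, time):
        """Return the payload valid at 'time' (latest IOV starting <= time)."""
        i = bisect.bisect_right(self._starts, time) - 1
        if i < 0:
            raise KeyError("no conditions valid at %r" % time)
        return self._payload_ids[i]

folder = ConditionsFolder()
folder.store(0, "align-v1")
folder.store(1000, "align-v2")
print(folder.lookup(500))    # -> align-v1
print(folder.lookup(1500))   # -> align-v2
```

The point of the split is that the IOV index stays small and navigable while payloads can live in a separate store and be shared between intervals.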
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
The design, implementation and performance of the ZEUS Global Tracking Trigger (GTT) Forward Algorithm are described. The ZEUS GTT Forward Algorithm integrates track information from the ZEUS Micro Vertex Detector (MVD) and forward Straw Tube Tracker (STT) to provide a picture of the event topology in the forward direction ($1.5<\eta<3$) of the ZEUS detector. This region is particul ... More
Presented by Dimitri GLADKOV on 30 Sep 2004 at 10:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
The current design, implementation and performance of the ZEUS global tracking trigger barrel algorithm are described. The ZEUS global tracking trigger integrates track information from the ZEUS central tracking chamber (CTD) and micro vertex detector (MVD) to obtain a global picture of the track topology in the ZEUS detector at the second level trigger stage. Algorithm processing is perfor ... More
Presented by M. SUTTON on 30 Sep 2004 at 14:20
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
A new object-oriented Minimization package is available via the ZOOM cvs repository. This package, designed for use in HEP applications, has all the capabilities of Minuit, but is a re-write from scratch, adhering to modern C++ design principles. A primary goal of this package is extensibility in several directions, so that its capabilities can be kept fresh with as little mainte ... More
Presented by M. FISCHLER on 30 Sep 2004 at 15:40
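The extensible, object-oriented design described above can be illustrated with a minimal Python sketch (hedged: Python stands in for C++, simple gradient descent stands in for Minuit-style strategies, and all names are hypothetical):

```python
# Toy OO minimizer: the objective is a pluggable object, so new function
# types or minimization strategies can be added without touching the core.

class Objective:
    def value(self, x):
        raise NotImplementedError
    def gradient(self, x, eps=1e-6):
        # default forward-difference gradient; subclasses may override
        return [(self.value(x[:i] + [xi + eps] + x[i+1:]) - self.value(x)) / eps
                for i, xi in enumerate(x)]

class GradientDescent:
    def __init__(self, rate=0.1, steps=500):
        self.rate, self.steps = rate, steps
    def minimize(self, objective, x0):
        x = list(x0)
        for _ in range(self.steps):
            g = objective.gradient(x)
            x = [xi - self.rate * gi for xi, gi in zip(x, g)]
        return x

class Paraboloid(Objective):
    def value(self, x):
        return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

xmin = GradientDescent().minimize(Paraboloid(), [0.0, 0.0])
print([round(v, 3) for v in xmin])   # -> [1.0, -2.0]
```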
Type: poster Session: Poster Session 3
Track: Track 1 - Online Computing
This article describes the simulation of the read-out subsystem of the BESIII data acquisition system. Given the purpose of BESIII, the event rate will be about 4000 Hz, and the data rate up to 50 Mbytes/sec after the Level 1 trigger. The read-out subsystem consists of read-out crates and a read-out computer whose principal function is to collect event data ... More
Presented by Mei YE on 30 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
AliEn (ALICE Environment) is a Grid framework developed by the Alice Collaboration and used in production for almost 3 years. From the beginning, the system was constructed using Web Services, standard network protocols and Open Source components. The main thrust of the development was on the design and implementation of an open and modular architecture. A large part of the component cam ... More
Presented by P. BUNCIC on 27 Sep 2004 at 15:20
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
One of the most important problems in the software management of a very large and complex project such as Atlas is how to deploy the software on the running sites. Running sites range from computing centers in the usual sense down to individual laptops, and also include the computing elements of a grid organization. The deployment activity consists in constructing a ... More
Presented by C. ARNAULT on 29 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
The Event Reconstruction Control System of the BaBar experiment was redesigned in 2002 to satisfy two major requirements: flexibility and scalability. Because of its very nature, this system is continuously maintained to implement the changing policies typical of a complex, distributed production environment. In 2003, a major revolution in the BaBar computing model, the Computing ... More
Presented by A. CESERACCIU on 27 Sep 2004 at 14:00
Type: oral presentation Session: Plenary
Track: Plenary Sessions
Just as the development of the World Wide Web has had its greatest impact outside particle physics, so it will be with the development of the Grid. E-science, of which the Grid is just a small part, is already making a big impact upon many scientific disciplines, and facilitating new scientific discoveries that would be difficult to achieve in any other way. Key to this is the definitio ... More
Presented by Ken PEACH on 28 Sep 2004 at 12:00
Type: oral presentation Session: Online Computing
Track: Track 1 - Online Computing
BES is an experiment at the Beijing Electron-Positron Collider (BEPC). The BES computing environment consists of a PC/Linux cluster and relies mainly on free software. OpenPBS and Ganglia are used as the job scheduling and monitoring systems. With help from the CERN IT Division, CASTOR was implemented as the storage management system. BEPC is being upgraded and the luminosity will increase one hundred times comparin ... More
Presented by G. CHEN on 27 Sep 2004 at 15:20
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
This talk will describe the new analysis computing model deployed by BaBar over the past year. The new model was designed to better support the current and future needs of physicists analyzing data, and to improve BaBar's analysis computing efficiency. The use of RootIO in the new model is described in other talks. BaBar's new analysis data content format contains both high and low level ... More
Presented by D. BROWN on 30 Sep 2004 at 15:40
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
This paper presents an overview of the legacy interface provided for the ATLAS DC2 production system. The term legacy refers to any non-grid system which may be deployed for use within DC2. The reasoning behind providing such a service for DC2 is twofold. Firstly, the legacy interface provides a backup solution should unforeseen problems occur while developing the grid-based interf ... More
Presented by J. KENNEDY on 29 Sep 2004 at 10:00
Type: oral presentation Session: Plenary
Track: Plenary Sessions
In the 18 months since the CHEP03 meeting in San Diego, the HEP community deployed the current generation of grid technologies in a variety of settings. Legacy software as well as recently developed applications were interfaced with middleware tools to deliver end-to-end capabilities to HEP experiments in different stages of their life cycles. In a series of data challenges, reprocessing ... More
Presented by Miron LIVNY on 29 Sep 2004 at 08:30
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
The simulation for the ATLAS experiment is presently operational in a full OO environment, and it is presented here in terms of successful solutions to problems dealing with its application in a wide community using a common framework. The ATLAS experiment is an ideal scenario in which to test applications able to satisfy the different needs of a big community. Following a well-stated strategy ... More
Presented by Prof. A. RIMOLDI on 29 Sep 2004 at 14:00
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
Fermilab has developed and successfully operates the Enstore Data Storage System. It is the primary data store for the Run II Collider Experiments, as well as for others. It provides data storage in robotic tape libraries according to the requirements of the experiments. High fault tolerance and availability, as well as multilevel priority-based request processing, allow experiments to effectively sto ... More
Presented by A. MOIBENKO on 29 Sep 2004 at 14:20
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
During the CMS Data Challenge 2004 a real-time analysis was attempted at the INFN and PIC Tier-1 and Tier-2 sites in order to test the ability of the instrumented methods to quickly process the data. Several agents and automatic procedures were implemented to perform the analysis at the Tier-1/2 sites synchronously with the data transfer from the Tier-0 at CERN. The system was implemented in the Grid LCG-2 ... More
Presented by N. DE FILIPPIS on 30 Sep 2004 at 16:50
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
The Fermilab CDF Run-II experiment is now providing official support for remote computing, expanding this to about 1/4 of the total CDF computing during the Summer of 2004. I will discuss in detail the extensions to CDF software distribution and configuration tools and procedures, in support of CDF GRID/DCAF computing for Summer 2004. We face the challenge of unreliable networks, time diff ... More
Presented by A. KREYMER on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
In the High Energy Physics (HEP) community, Grid technologies have been accepted as solutions to the distributed computing problem. Several Grid projects have provided software in recent years. Among them, the LCG - aimed especially at HEP applications - provides a set of services and respective client interfaces, both as command line tools and as programming lang ... More
on 29 Sep 2004 at 10:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
Track finding and fitting algorithms for the ALICE Time Projection Chamber (TPC) and Inner Tracking System (ITS), based on Kalman filtering, are presented. The filtering algorithm is able to cope with non-Gaussian noise and ambiguous measurements in high-density environments. The tracking algorithm consists of two parts: one for the TPC and one for the prolongation into the ITS. The occupancy i ... More
Presented by Mr. M. IVANOV on 30 Sep 2004 at 15:20
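The Kalman-filter fit at the heart of such tracking can be illustrated with a one-dimensional toy (hedged sketch only, not the ALICE code: a single track parameter is updated measurement by measurement):

```python
# Minimal 1D Kalman filter: combine noisy measurements of one parameter.
# With a vague prior, the result approaches the weighted mean.

def kalman_fit(measurements, sigma_meas, x0=0.0, p0=1e6):
    """Return the filtered estimate and its variance."""
    x, p = x0, p0             # state estimate and its variance (vague prior)
    r = sigma_meas ** 2       # measurement variance
    for m in measurements:
        k = p / (p + r)       # Kalman gain
        x = x + k * (m - x)   # filtered estimate
        p = (1.0 - k) * p     # updated variance (always shrinks)
    return x, p

x, p = kalman_fit([1.2, 0.9, 1.1, 1.0], sigma_meas=0.1)
print(round(x, 3))   # -> 1.05, the mean (all measurements equally weighted)
```

In a real track fit the state is a vector of track parameters propagated between detector layers, with multiple scattering and energy loss entering as process noise at each step.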
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
Long-lived charged hyperons, $\Xi$ and $\Omega$, are capable of travelling significant distances, producing hits in the silicon detector before decaying into $\Lambda^0 \pi$ and $\Lambda^0 K$ pairs, respectively. This gives a unique opportunity to reconstruct hyperon tracks. We have developed a dedicated "outside-in" tracking algorithm that is seeded by the 4-momentum and decay vertex of the lo ... More
Presented by Dr. E. GERCHTEIN on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 3 - Core Software
It is essential to provide users transparent access to time-varying data, such as detector misalignments, calibration parameters and the like. These data should be automatically updated, without user intervention, whenever they change. Furthermore, the user should be able to request notification whenever a particular datum is updated, so as to perform actions such as re-caching of compound results, or p ... More
Presented by C. LEGGETT on 30 Sep 2004 at 10:00
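The update-notification scheme described above can be sketched as a simple observer pattern (hypothetical Python names, not the actual framework interface):

```python
# Toy conditions handle: clients register callbacks and are notified
# whenever the datum changes, e.g. to re-cache a derived result.

class ConditionHandle:
    def __init__(self, value=None):
        self._value = value
        self._callbacks = []

    def register(self, callback):
        """Ask to be notified when this datum is updated."""
        self._callbacks.append(callback)

    def update(self, new_value):
        """Called by the framework when new conditions arrive."""
        self._value = new_value
        for cb in self._callbacks:
            cb(new_value)

    def get(self):
        return self._value

cache = {}
alignment = ConditionHandle()
# Re-cache a compound result (here: the total shift) on every update.
alignment.register(lambda v: cache.update(total_shift=sum(v)))

alignment.update([0.1, -0.2, 0.05])
print(round(cache["total_shift"], 2))   # -> -0.05
```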
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. Here we review the progress achieved in the last year on the physics models. From the point of view of hadronic physics, most of the effort is still in the field of nucleus-nucleus interactio ... More
Presented by L. PINSKY on 27 Sep 2004 at 15:00
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
Breast cancer screening programs require managing and accessing a huge amount of data, intrinsically distributed, as they are collected in different Hospitals. The development of an application based on Computer Assisted Detection algorithms for the analysis of digitised mammograms in a distributed environment is a typical GRID use case. In particular, AliEn (ALICE Environment) services, ... More
Presented by P. CERELLO on 29 Sep 2004 at 10:00
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
The Nordic Grid facility (NorduGrid) came into production operation during the summer of 2002 when the Scandinavian Atlas HEP group started to use the Grid for the Atlas Data Challenges and was thus the first Grid ever contributing to an Atlas production. Since then, the Grid facility has been in continuous 24/7 operation offering an increasing number of resources to a growing set of active u ... More
Presented by O. SMIRNOVA on 29 Sep 2004 at 10:00
Type: oral presentation Session: Distributed Computing Systems and Experiences
Track: Track 5 - Distributed Computing Systems and Experiences
The University of Wisconsin distributed computing research groups developed a software system called Condor for high-throughput computing using commodity hardware. An adaptation of this software, Condor-G, is part of the Globus grid computing toolkit. However, the original Condor has additional features that allow the building of an enterprise-level grid. Several UW departments have Condor computing poo ... More
Presented by S. DASU on 27 Sep 2004 at 17:30
Type: poster Session: Poster Session 1
Track: Track 6 - Computer Fabrics
Protein analysis, imaging, and DNA sequencing are some of the branches of biology where growth has been enabled by the availability of computational resources. With this growth, biologists face an associated need for reliable, flexible storage systems. For decades the HEP community has been driving the development of such storage systems to meet their own needs. Two of these systems - the ... More
Presented by Alan TACKETT on 28 Sep 2004 at 10:00
Type: oral presentation Session: Grid Security
Track: Track 4 - Distributed Computing Services
Implementing strategies for secured access to widely accessible clusters is a basic requirement for these services, in particular if GRID integration is sought. This issue has two complementary lines to be considered: perimeter security and intrusion detection systems. In this paper we address aspects of the second. Compared to classical intrusion detection mechanisms, close monitor ... More
Presented by M. CARDENAS MONTES on 29 Sep 2004 at 14:40
Type: poster Session: Poster Session 2
Track: Track 4 - Distributed Computing Services
The expansion of large computing fabrics/clusters throughout the world creates a need for stricter security. Otherwise any system could suffer damage such as data loss, data falsification or misuse. Perimeter security and intrusion detection systems (IDS) are the two main aspects that must be taken into account in order to achieve system security. The main target of an intrusion detec ... More
Presented by E. PEREZ-CALLE on 29 Sep 2004 at 10:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
We report on the software for Object-oriented Reconstruction for CMS Analysis, ORCA. It is based on the Coherent Object-oriented Base for Reconstruction, Analysis and simulation (COBRA) and used for digitization and reconstruction of simulated Monte-Carlo events as well as testbeam data. For the 2004 data challenge the functionality of the software has been extended to store collec ... More
Presented by Dr. S. WYNHOFF on 29 Sep 2004 at 17:30
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
Validation of hadronic physics processes of the Geant4 simulation toolkit is a very important task to ensure adequate physics results for the experiments being built at the Large Hadron Collider. We report on simulation results obtained using the Geant4 Bertini cascade double-differential production cross-sections for various target materials and incident hadron kinetic energies between 0.1-1 ... More
on 30 Sep 2004 at 10:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
For physics analysis in ATLAS, reliable vertex finding and fitting algorithms are important. In the harsh environment of the LHC (~23 inelastic collisions every 25 ns) this task turns out to be particularly challenging. One of the guiding principles in developing the vertexing packages is a strong focus on modularity and well-defined interfaces, using the advantages of object-oriented C++. The b ... More
Presented by A. WILDAUER on 30 Sep 2004 at 17:30
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
Using modern 3D visualization software and hardware to represent the object models of HEP detectors creates impressive pictures of events and detailed views of the detectors, facilitating the design, simulation, data analysis and representation of the huge amount of information flooding modern HEP experiments. In this paper we present the work made by members of S ... More
Presented by Mr. A. KULIKOV on 30 Sep 2004 at 10:00
Type: poster Session: Poster Session 3
Track: Track 2 - Event processing
The access of the simulation, reconstruction and analysis software to the magnetic field has a large impact both on CPU performance and on accuracy. An approach based on a volume geometry is described. The volumes are constructed in such a way that their boundaries correspond to field discontinuities, which are due to changes in the magnetic permeability of the materials. The field in each volum ... More
Presented by T. TODOROV on 30 Sep 2004 at 10:00
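The volume-based field access described above can be sketched as follows (an illustrative Python toy with hypothetical names and a made-up field map, not the actual implementation):

```python
# Toy volume-based field: each volume owns a boundary test and a local
# field parameterization; a global query dispatches to the volume
# containing the point, so boundaries can match field discontinuities.

class FieldVolume:
    def __init__(self, rmin, rmax, field_func):
        self.rmin, self.rmax = rmin, rmax   # radial boundaries (cm)
        self.field_func = field_func        # local parameterization

    def contains(self, r):
        return self.rmin <= r < self.rmax

class VolumeBasedField:
    def __init__(self, volumes):
        self.volumes = volumes

    def field_at(self, r):
        for v in self.volumes:
            if v.contains(r):
                return v.field_func(r)
        return 0.0                          # outside all volumes

# Made-up example map: uniform inner field, hypothetical return field.
solenoid = VolumeBasedField([
    FieldVolume(0.0, 300.0, lambda r: 4.0),
    FieldVolume(300.0, 700.0, lambda r: -4.0 * 300.0 / r),
])
print(solenoid.field_at(100.0))   # -> 4.0
```

A real implementation would index the volumes for fast point location instead of scanning them linearly.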
Type: poster Session: Poster Session 1
Track: Track 7 - Wide Area Networking
The Compact Muon Solenoid (CMS) experiment at CERN's Large Hadron Collider (LHC) is scheduled to come on-line in 2007. Fermilab will act as the CMS Tier-1 center for the US and make experiment data available to more than 400 researchers in the US participating in the CMS experiment. The US CMS Users Facility group, based at Fermilab, has initiated a project to develop a model for optimizing m ... More
Presented by A. BOBYSHEV on 28 Sep 2004 at 10:00
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
WIRED 4 is an experiment-independent event display plug-in module for the JAS 3 (Java Analysis Studio) generic analysis framework. Both WIRED and JAS are written in Java. WIRED, which uses HepRep (HEP Representables for Event Display) as its input format, supports viewing of events using conventional 3D projections as well as specialized projections such as a fish-eye or a rho-Z projecti ... More
Presented by M. DONSZELMANN on 30 Sep 2004 at 16:30
Type: oral presentation Session: Plenary
Track: Plenary Sessions
Presented by Wolfgang VON RUEDEN on 27 Sep 2004 at 09:00
Type: oral presentation Session: Wide Area Networking
Track: Track 7 - Wide Area Networking
Large, distributed HEP collaborations, such as D0, CDF and US-CMS, depend on stable and robust network paths between major world research centers. The evolving emphasis on data and compute Grids increases the reliance on network performance. Fermilab's experimental groups and network support personnel identified a critical need for WAN monitoring to ensure the quality and efficient utiliza ... More
Presented by Mr. M. GRIGORIEV on 30 Sep 2004 at 15:40
Type: oral presentation Session: Event Processing
Track: Track 2 - Event processing
JAS3 is a general-purpose, experiment-independent, open-source data analysis tool. JAS3 includes a variety of features, including histogramming, plotting, fitting, data access, tuple analysis, spreadsheet and event display capabilities. More complex analyses can be performed using several scripting languages (pnuts, jython, etc.), or by writing Java analysis classes. All of these features ... More
Presented by Mark DONSZELMANN on 30 Sep 2004 at 18:10
Type: oral presentation Session: Core Software
Track: Track 3 - Core Software
Until now, ROOT objects could be stored only in a binary ROOT-specific file format. Without the ROOT environment the data stored in such files are not directly accessible. Storing objects in XML format makes it easy to view and edit (with some restrictions) the object data directly. It is also possible to use XML as an exchange format with other applications. Therefore XML streaming has been imp ... More
Presented by S. LINEV on 29 Sep 2004 at 15:20
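The idea of streaming an object's data members to readable XML and back can be sketched in Python (hedged toy only; the real ROOT implementation streams via its class dictionaries, and the names here are hypothetical):

```python
# Toy object <-> XML streaming: each data member becomes a readable XML
# element carrying its value and a type tag used for restoration.
import xml.etree.ElementTree as ET

class Histogram:
    def __init__(self, name="", entries=0):
        self.name, self.entries = name, entries

def to_xml(obj):
    root = ET.Element(type(obj).__name__)
    for key, value in vars(obj).items():
        child = ET.SubElement(root, key)
        child.text = str(value)
        child.set("type", type(value).__name__)
    return ET.tostring(root, encoding="unicode")

def from_xml(text, cls):
    obj = cls()
    for child in ET.fromstring(text):
        value = child.text if child.get("type") == "str" else int(child.text)
        setattr(obj, child.tag, value)
    return obj

xml_text = to_xml(Histogram("hpt", 1024))
h = from_xml(xml_text, Histogram)
print(h.name, h.entries)   # -> hpt 1024
```

Unlike the binary format, the XML text can be inspected and (carefully) edited with any ordinary editor or XML tool, which is exactly the benefit the abstract describes.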
Type: poster Session: Poster Session 2
Track: Track 5 - Distributed Computing Systems and Experiences
This paper describes XTNetFile, the client side of a project conceived to address the high demand data access needs of modern physics experiments such as BaBar using the ROOT framework. In this context, a highly scalable and fault tolerant client/server architecture for data access has been designed and deployed which allows thousands of batch jobs and interactive sessions to effective ... More
Presented by F. FURANO on 29 Sep 2004 at 10:00
Type: oral presentation Session: Computer Fabrics
Track: Track 6 - Computer Fabrics
The dCache software system has been designed to manage a huge number of individual disk storage nodes and let them appear under a single file system root. Besides a variety of other features, it supports the GridFtp dialect, implements the Storage Resource Manager interface (SRM V1) and can be linked against the CERN GFAL software layer. These abilities make dCache a perfect Storage Elemen ... More
Presented by P. FUHRMANN on 29 Sep 2004 at 16:50