# CHEP 06

13-17 February 2006
Tata Institute of Fundamental Research
Europe/Zurich timezone
Displaying 441 contributions out of 441
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
HEP programs commonly have very flat execution profiles, implying that the execution time is spread over many routines/methods. Consequently, compiler optimization should be applied to the whole program and not just a few inner loops. In this talk I, nevertheless, discuss the value of extracting some of the most solicited routines (relatively speaking) and using them to gauge overall performanc ... More
Presented by Mr. Sverre JARP on 13 Feb 2006 at 15:00
Type: oral presentation Track: Event processing applications
Over the past 3 years the ATLAS Inner Detector reconstruction software has undergone a major redesign based on the recommendations of an internal review in spring 2003. The new track reconstruction infrastructure is characterized by: - a standardized ATLAS geometry model - a common track reconstruction data model - a suite of common extrapolation, track fitting, vertexing and pattern recogni ... More
Presented by Markus ELSING
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
CRA is a multi-layered system with a web-based front end providing centralized management and rules enforcement in a complex, distributed computing environment such as CERN's. Much like an orchestra conductor, CRA's role is essential and multi-functional. Account management, resource usage and consistency controls for every central computing service at CERN with about 75000 active accounts is o ... More
Presented by Mr. Nick ZIOGAS, Mr. Wim VAN LEERSUM, Mr. Bartlomiej PAWLOWSKI on 15 Feb 2006 at 16:20
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
This paper discusses an architectural approach to enhance job scheduling in data intensive applications in HEP computing. First, a brief introduction to the current grid system based on LCG/gLite is given, current bottlenecks are identified and possible extensions to the system are described. We will propose an extended scheduling architecture, which adds a scheduling framework on top of e ... More
Presented by Mr. Lars SCHLEY on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Computing Facilities and Networking
The INFN-GRID project allows experimenting and testing many different and innovative solutions in the GRID environment. In this research and development it is important to find the most useful solutions to simplify the management of and access to the resources. In the VIRGO laboratory in Napoli we have tested a non-standard implementation based on LCG 2.6.0 by using a diskless solution in or ... More
Presented by Dr. Silvio PARDI on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Distributed Event production and processing
The LHC experiments at CERN will collect data at a rate of several petabytes per year and produce several hundred files per second. Data has to be processed and transferred to many tier centres for distributed data analysis in different physics data formats increasing the amount of files to handle. All these files must be accounted for, reliably and securely tracked in a GRID environment, enab ... More
Presented by Andreas Joachim PETERS on 15 Feb 2006 at 09:00
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
The ATLAS experiment will deploy an event-level metadata system as a key component of support for data discovery, identification, selection, and retrieval in its multi-petabyte event store. ATLAS plans to use the LCG POOL collection infrastructure to implement this system, which must satisfy a wide range of use cases and must be usable in a widely distributed environment. The system requires ... More
Presented by Dr. David MALON, Caitriana NICHOLSON on 13 Feb 2006 at 14:40
Type: poster Session: Poster
Track: Online Computing
Traditionally, in the pre-LHC multi-purpose high-energy experiments the diversification of their physics programs has been largely decoupled from the process of data taking - physics groups could only influence the selection criteria of recorded events according to predefined trigger menus. In particular, the physics-oriented choice of subdetector data and the implementation of refined eve ... More
Presented by Mieczyslaw KRASNY on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Event processing applications
The design of a general jet tagging algorithm for the ATLAS detector reconstruction software is presented. For many physics analyses, reliable and efficient flavour identification, 'tagging', of jets is vital in the process of reconstructing the physics content of the event. To allow for a broad range of identification methods emphasis is put on the flexibility of the framework. A guiding d ... More
Presented by Mr. Andreas WILDAUER on 13 Feb 2006 at 11:00
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
The Condor-G meta-scheduling system has been used to create a single Grid of GT2 resources from LCG and GridX1, and ARC resources from NorduGrid. Condor-G provides the submission interfaces to GT2 and ARC gatekeepers, enabling transparent submission via the scheduler. Resource status from the native information systems is converted to the Condor ClassAd format and used for matchmaking to job R ... More
Presented by Dr. Rodney WALKER on 15 Feb 2006 at 14:00
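As a miniature illustration of the matchmaking described above, resource status can be flattened into attribute dictionaries (playing the role of ClassAds) and a job's requirements expression evaluated against each one. The following Python sketch is purely illustrative; the names and structure are invented and do not reflect the actual Condor ClassAd language or API.

```python
# Toy ClassAd-style matchmaking: each resource "ad" is a flat dict of
# attributes; a job carries a Requirements predicate evaluated per ad.
# All names below are invented for illustration.

def matchmake(job, resources):
    """Return the resource ads that satisfy the job's requirements."""
    return [ad for ad in resources if job["Requirements"](ad)]

resources = [
    {"Name": "lcg-ce-01", "Arch": "x86", "FreeCPUs": 12},
    {"Name": "nordugrid-ce", "Arch": "x86", "FreeCPUs": 0},
]
job = {"Requirements": lambda ad: ad["Arch"] == "x86" and ad["FreeCPUs"] > 0}

print([ad["Name"] for ad in matchmake(job, resources)])  # -> ['lcg-ce-01']
```

In the real system the native information providers (GT2, ARC) are translated into ClassAd attributes, and the requirements expression is written in the ClassAd language rather than as a Python lambda.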
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
The Inner Tracker of the CMS experiment consists of approximately 20,000 sensitive modules in order to cope with the bunch crossing rate and the high particle multiplicity expected in the environment of the Large Hadron Collider. For such a large number of modules, conventional methods for track-based alignment face serious difficulties because of the large number of alignment parameters and the ... More
Presented by Edmund Erich WIDL on 14 Feb 2006 at 17:40
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
Grid technology is attracting a lot of interest, involving hundreds of researchers and software engineers around the world. The characteristics of the Grid demand the development of suitable monitoring systems able to obtain the significant information needed to make management decisions and control system behaviour. In this paper we analyse a formal declarative interpreted lan ... More
Presented by Dr. Rosa PALMIERO on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Distributed Event production and processing
This paper addresses the growing usage of high-performance computing in modern computational fluid dynamics to simulate the flow-induced vibrations of cylindrical structures, necessary to enhance reactor safety in nuclear plants. The study is essential to prevent damage to steam tubes causing an accident due to the release of reactor coolant containing radioactive materials out of the ... More
Presented by Mr. Sankhadip SENGUPTA on 15 Feb 2006 at 09:00
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
The ATLAS detector currently under construction at CERN's Large Hadron Collider presents data handling requirements of an unprecedented scale. From 2008 the ATLAS distributed data management (DDM) system must manage tens of petabytes of event data per year, distributed around the world: the collaboration comprises 1800 physicists participating from more than 150 universities and laboratories i ... More
Presented by Dr. David CAMERON on 14 Feb 2006 at 16:20
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
The LCG is an operational Grid currently running at 136 sites in 36 countries, offering its users access to nearly 14,000 CPUs and approximately 8PB of storage [1]. Monitoring the state and performance of such a system is challenging but vital to successful operation. In this context the primary motivation for this research is to analyze LCG performance by doing a statistical analysis of the l ... More
Presented by Mrs. Mona AGGARWAL on 16 Feb 2006 at 14:20
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
The major challenges preventing the wide-scale generation of web lecture recordings include the compactness and price of the required hardware, the speed of the compression and posting operations, and the need for a human camera operator. We will report on efforts that have led to major progress in addressing each of these issues. We will describe the design, prototyping and pilot deployment o ... More
Presented by Mr. Jeremy HERR, Dr. Steven GOLDFARB on 15 Feb 2006 at 16:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
In 2005, the DZero Data Reconstruction project processed 250 terabytes of data on the Grid, using 1,600 CPU-years of computing cycles in 6 months. The large computational task required a high level of refinement of the SAM-Grid system, the integrated data, job, and information management infrastructure of the Run II experiments at Fermilab. The success of the project was in part due to the abi ... More
Presented by Gabriele GARZOGLIO on 13 Feb 2006 at 11:00
Type: oral presentation Session: Online Computing
Track: Online Computing
The ATLAS detector at CERN's LHC will be exposed to proton-proton collisions at a nominal rate of 1 GHz from beams crossing at 40 MHz. A three-level trigger system will select potentially interesting events in order to reduce this rate to about 200 Hz. The first trigger level is implemented in custom-built electronics and firmware, whereas the higher trigger levels are based on software. A sys ... More
Presented by Hans VON DER SCHMITT on 14 Feb 2006 at 16:20
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
Job tracking, i.e. monitoring bundles of jobs or individual job behavior from submission to completion, is becoming very complicated in the heterogeneous Grid environment. This paper presents the principles of an integrated tracking solution based on components already deployed at STAR, none of which are experiment specific: a generic logging layer and the STAR Unified Meta-Scheduler (SUM ... More
Presented by Dr. Valeri FINE on 14 Feb 2006 at 14:20
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
In current, widely deployed management schemes, intensive computing farms are locally managed by batch systems (e.g. Platform LSF, PBS/Torque, BQS, etc.). When approached from the outside, at the global (or 'grid') level, these local resource managers (LRMS) are seen as services providing at least a basic set of job operations, namely submission, status retrieval, cancellation and security cre ... More
Presented by Mr. Davide REBATTO on 15 Feb 2006 at 09:00
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
The International Linear Collider project ILC is in a very active R&D phase where currently three different detector concepts are being developed by international working groups. In order to investigate and optimize the different detector concepts and their physics potential it is highly desirable to have flexible and easy-to-use software tools. In this talk we present Marlin, a modular C++ applicat ... More
Presented by Frank GAEDE on 14 Feb 2006 at 16:36
Type: oral presentation Session: Online Computing
Track: Online Computing
In order to meet the requirements of ATLAS data taking, the ATLAS Trigger-DAQ system is composed of O(1000) applications running on more than 2000 networked computers. With a system of this size, software and hardware failures are quite frequent. To minimize system downtime, the Trigger-DAQ control system shall include advanced verification and diagnostics facilities. The operator should use tests and exp ... More
Presented by Andrei KAZAROV on 15 Feb 2006 at 16:00
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
The CDF experiment has a new trigger which selects events depending on the significance of the track impact parameters. With this trigger a sample of events enriched in b and c mesons has been selected; it is used for several important physics analyses such as the Bs mixing. The size of the dataset is about 20 TBytes, corresponding to an integrated luminosity of 1 fb-1 collected by CDF. CDF ... More
Presented by Dr. Donatella LUCCHESI, Dr. Francesco DELLI PAOLI on 13 Feb 2006 at 16:20
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
ALICE Event Visualization Environment (AEVE) is a general framework for visualization of detector geometry and event-related data being developed for the ALICE experiment. Its design is guided by the large raw event size (80 MBytes) and an even larger footprint of a full simulation--reconstruction pass (1.5 TBytes). An extensible pre-processing mechanism needed to reduce the data volume, colle ... More
Presented by Matevz TADEL on 15 Feb 2006 at 17:20
Type: oral presentation Session: Online Computing
Track: Online Computing
The HLT, integrating all major detectors of ALICE, is designed to analyse LHC events online. A cluster of 400 to 500 dual SMP PCs will constitute the heart of the HLT system. To synchronize the HLT with the other online systems of ALICE (Data Acquisition (DAQ), Detector Control System (DCS), Trigger (TRG)) the Experiment Control System (ECS) has to be interfaced. In order to do so, the impleme ... More
Presented by Sebastian Robert BABLOK on 14 Feb 2006 at 14:45
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
The ARDA project focuses on delivering analysis prototypes together with the LHC experiments. Each experiment prototype is in principle independent, but commonalities have been observed. The first level of commonality is represented by mature projects which can be effectively shared across different users. The best example is GANGA, providing a toolkit to organize users’ activity, shielding ... More
Presented by Dr. Massimo LAMANNA on 14 Feb 2006 at 14:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
The data taking of the ARGO-YBJ experiment in Tibet is operational with 54 RPC clusters installed and is moving rapidly to a configuration of more than 100 clusters. The paper describes the processing of the experimental data from this phase, based on a local computer farm. The software developed for data management, job submission and information retrieval is described, together with the performance as ... More
Presented by Dr. Cristian STANESCU on 13 Feb 2006 at 11:00
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
In preparation for data taking, the ATLAS experiment has run a series of large-scale computational exercises to test and validate distributed data grid solutions under development. ATLAS experience with prototypes and production systems in the Data Challenges and the Combined Test Beam yielded various database connectivity requirements for applications: connection management, online-offline uniformit ... More
Presented by Dr. Alexandre VANIACHINE on 13 Feb 2006 at 14:20
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
The Large Hadron Collider at CERN will start data acquisition in 2007. The ATLAS (A Toroidal LHC ApparatuS) experiment is preparing for the data handling and analysis via a series of Data Challenges and production exercises to validate its computing model and to provide useful samples of data for detector and physics studies. DC1 was conducted during 2002-03; the main goals were to put in plac ... More
Presented by Dr. Gilbert POULARD on 13 Feb 2006 at 16:20
Type: oral presentation Session: Online Computing
Track: Online Computing
The ATLAS experiment at the LHC will start taking data in 2007. Event data from proton-proton collisions will be selected in a three-level trigger system which reduces the initial bunch crossing rate of 40 MHz at its first level trigger (LVL1) to 75 kHz with a fixed latency of 2.5 μs. The second level trigger (LVL2) collects and analyses Regions of Interest (RoI) identified by LVL1 and redu ... More
Presented by Kostas KORDAS on 14 Feb 2006 at 17:00
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
The physics program at the LHC includes precision tests of the Standard Model (SM), the search for the SM Higgs boson up to 1 TeV, the search for the MSSM Higgs bosons in the entire parameter space, the search for Super Symmetry, sensitivity to alternative scenarios such as compositeness, large extra dimensions, etc. This requires general purpose detectors with excellent performance. ATLAS ... More
Presented by Dr. Ketevi Adikle ASSAMAGAN, PAT ATLAS on 14 Feb 2006 at 15:00
Type: poster Session: Poster
Track: Online Computing
The ATLAS TDAQ system will be composed of 3000 processors with a few processes per processor. The Process Manager component of the TDAQ software is responsible for launching and controlling these processes. The main requirements are for robustness, availability and recoverability of the system, as well as the possibility of full launch, control and monitoring of the TDAQ processes. This paper w ... More
Presented by Dr. Marc DOBSON
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
To validate its computing model, ATLAS, one of the four LHC experiments, conducted in Q4 of 2005 a Tier-0 scaling test. The Tier-0 is responsible for prompt reconstruction of the data coming from the event filter, and for the distribution of this data and the results of prompt reconstruction to the tier-1s. Handling the unprecedented data rates and volumes will pose a huge challenge on the com ... More
Presented by Miguel BRANCO on 15 Feb 2006 at 16:40
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
In order to properly understand the data taken for an HEP Event, information external to the Event must be available. Such information includes geometry descriptions, calibration values, magnetic field readings and many more. CMS has chosen a unified approach to accessing such information via a data model based on the concept of an 'Interval of Validity', IOV. This data model is organized i ... More
Presented by Dr. Christopher JONES on 15 Feb 2006 at 16:36
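The 'Interval of Validity' idea can be sketched as a sorted lookup in which each conditions payload is valid from its first run until the next payload's first run. The following is a minimal illustrative sketch in Python; the class and method names are invented and are not the CMS framework API.

```python
import bisect

# Toy IOV sequence: payload i is valid for runs in
# [starts[i], starts[i+1]). Names are invented for illustration.

class IOVSequence:
    def __init__(self):
        self._starts = []    # first valid run of each payload, ascending
        self._payloads = []

    def append(self, first_run, payload):
        # entries are assumed to be appended in increasing run order
        self._starts.append(first_run)
        self._payloads.append(payload)

    def payload_for(self, run):
        """Return the payload whose validity interval contains `run`."""
        i = bisect.bisect_right(self._starts, run) - 1
        if i < 0:
            raise KeyError(f"no payload valid for run {run}")
        return self._payloads[i]

calib = IOVSequence()
calib.append(1, "pedestals_v1")
calib.append(100, "pedestals_v2")
print(calib.payload_for(57))   # run 57 falls in [1, 100) -> pedestals_v1
```

The binary search makes lookup O(log n) in the number of intervals, which matters when the same conditions sequence is queried once per event.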
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
The Solenoid Tracker At RHIC (STAR) experiment has observed luminosity fluctuations on time scales much shorter than expected during its design and construction. These operating conditions lead to rapid variations in distortions of data from the STAR TPC which are dependent upon the luminosity and planned techniques for calibrating these distortions became insufficient to provide high quality ... More
Presented by Dr. Gene VAN BUREN on 15 Feb 2006 at 14:18
Type: oral presentation Session: Plenary
Track: Plenary
on 17 Feb 2006 at 12:55
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Prof. Harvey B NEWMAN on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Computing Facilities and Networking
The collaboration between BARC and CERN is driving a series of enhancements to ELFms [1], the fabric management tool-suite developed with support from the HEP community under CERN's coordination. ELFms components are used in production at CERN and a large number of other HEP sites for automatically installing, configuring and monitoring hundreds of clusters comprising thousands of nodes. D ... More
Presented by Mr. William TOMLIN on 13 Feb 2006 at 11:00
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
We report on first experiences with building and operating an Edge Services Framework (ESF) based on Xen virtual machines instantiated via the Workspace Service available in Globus Toolkit, and developed as a joint project between EGEE, LCG, and OSG. Many computing facilities are architected with their compute and storage clusters behind firewalls. Edge Services are instantiated on a small set ... More
Presented by Abhishek Singh RANA on 15 Feb 2006 at 14:40
Type: poster Session: Poster
Track: Computing Facilities and Networking
The next generations of large colliders and their experiments will have the advantage that groups from all over the world will contribute their competence to meet the challenges of the future. Therefore it is necessary to become even more global than in the past, giving members the option of remote access to most control functions of these facilities. The experience of the past has shown ... More
Presented by Mr. Sven KARSTENSEN on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
gLite is the next generation middleware for grid computing. Born from the collaborative efforts of more than 80 people in 12 different academic and industrial research centers as part of the EGEE Project, gLite provides a bleeding-edge, best-of-breed framework for building grid applications tapping into the power of distributed computing and storage resources across the Internet. Curre ... More
Presented by Dr. Joachim FLAMMER on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
One of the most interesting challenges of the 'computing Grid' is how to administer grid resource allocation and data access, in order to obtain effective and optimized computing usage and secure data access. To reach this goal, a new entity has appeared, the Virtual Organization (VO), which represents a distributed community of users accessing a distributed computing environment. Thi ... More
Presented by Mr. Gian Luca RUBINI on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Event processing applications
The HERMES experiment at DESY has performed extensive measurements on diffractive production of light vector mesons (rho^0, omega, phi) in the intermediate energy region. Spin density matrix elements (SDMEs) were determined for exclusive diffractive rho^0 and phi mesons and compared with results of high-energy experiments. Several methods for the extraction of SDMEs have been applied to the same ... More
Presented by Dr. Alexander BORISSOV on 13 Feb 2006 at 11:00
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
Visualisation of data in particle physics currently involves event displays, histograms and scatterplots. Since 1975 there has been an explosion of techniques for data visualisation driven by highly interactive computer systems and ideas from statistical graphics. This field has been driven by demands for data mining of large databases and genomics. Two key areas are direct manipulation of vis ... More
Presented by Prof. Stephen WATTS on 16 Feb 2006 at 14:54
Type: poster Session: Poster
Track: Online Computing
The ATLAS DAQ and monitoring software are currently in common use for testing detectors during the commissioning phase. In this paper, their usage in MDT and RPC commissioning is described, both at the surface pre-commissioning and commissioning stations and in the ATLAS pit. Two main components are heavily used for detector tests. The ROD Crate DAQ software is based on the ATLAS ReadOut applic ... More
Presented by Dr. Enrico PASQUALUCCI on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Event processing applications
We have developed a package that trains and applies boosted classification trees, a technology long used by the statistics community, but only recently being explored by HEP. We will discuss its design (Object-Oriented C++), and show two examples of its use: to detect single top production in DZERO events, and for background rejection in GLAST.
Presented by Toby BURNETT on 15 Feb 2006 at 09:00
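The boosting technique referred to above can be illustrated with a textbook AdaBoost over single-threshold 'stumps'. This toy Python sketch (invented names, not the package being presented) shows the core loop: reweight misclassified events and combine weak classifiers with error-dependent weights.

```python
import math

# Toy AdaBoost with one-feature threshold "stumps" for illustration only;
# the package described in the abstract is a full C++ implementation.

def stump(threshold, sign):
    # weak classifier: +sign above threshold, -sign below
    return lambda x: sign if x > threshold else -sign

def adaboost(xs, ys, stumps, rounds=3):
    n = len(xs)
    w = [1.0 / n] * n            # per-event weights
    ensemble = []                # (alpha, classifier) pairs
    for _ in range(rounds):
        # pick the stump with the smallest weighted error
        errs = [(sum(wi for wi, x, y in zip(w, xs, ys) if s(x) != y), s)
                for s in stumps]
        err, best = min(errs, key=lambda t: t[0])
        err = max(err, 1e-12)    # avoid division by zero for a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, best))
        # reweight: misclassified events gain weight
        w = [wi * math.exp(-alpha * y * best(x)) for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
    return lambda x: 1 if sum(a * s(x) for a, s in ensemble) > 0 else -1

xs = [0.1, 0.4, 0.35, 0.8, 0.9, 0.7]        # one feature per event
ys = [-1, -1, -1, 1, 1, 1]                  # -1 = background, +1 = signal
clf = adaboost(xs, ys, [stump(t, 1) for t in (0.2, 0.5, 0.75)])
print([clf(x) for x in xs])  # -> [-1, -1, -1, 1, 1, 1]
```

Real HEP uses (as in the DZERO and GLAST examples) train thousands of multi-variable trees rather than three fixed stumps, but the reweighting logic is the same.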
Type: oral presentation Session: Online Computing
Track: Online Computing
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). A large-bandwidth and flexible Data Acquisition System (DAQ) is required to collect sufficient statistics in the short running time available per year for heavy ions and to accommodate very different re ... More
Presented by Mr. Sylvain CHAPELAND on 13 Feb 2006 at 17:00
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
SAM is a data handling system that provides the Fermilab HEP experiments D0, CDF and MINOS with the means to catalog, distribute and track the usage of their collected and analyzed data. Annually, SAM serves petabytes of data to physics groups performing data analysis, data reconstruction and simulation at various computing centers across the world. Given the volume of the detector data, a typ ... More
Presented by Valeria BARTSCH on 13 Feb 2006 at 16:40
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
Samples of data acquired by the STAR Experiment at RHIC are examined at various stages of processing for quality assurance (QA) purposes. As STAR continues to mature and utilize new hardware and software, it remains imperative to the experiment to work cohesively to ensure the quality of STAR data so that the collaboration may continue to produce many new physics results in the efficient and t ... More
Presented by Dr. Gene VAN BUREN on 16 Feb 2006 at 15:00
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
We describe two illustrative cases in which Grid middleware (GridFtp, dCache and SRM) was used successfully to transfer hundreds of terabytes of data between BNL and its remote RHIC and ATLAS collaborators. The first case involved PHENIX production data transfers to CCJ, a regional center in Japan, during the 2005 RHIC run. Approximately 270TB of data, representing 6.8 billion polarized proto ... More
Presented by Dr. Dantong YU, Dr. Xin ZHAO on 14 Feb 2006 at 17:00
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
For the BaBar Computing Group: Two years ago, the BaBar experiment changed its event store from an object-oriented database system to one based on ROOT files. A new bookkeeping system was developed to manage the meta-data of these files. This system has been in constant use since that time, and has successfully provided the needed meta-data information for users' analysis jobs, data ma ... More
Presented by Dr. Douglas SMITH on 15 Feb 2006 at 16:40
Type: poster Session: Poster
Track: Distributed Event production and processing
For the BaBar Computing Group: We describe enhancements to the BaBar Experiment's distributed Monte Carlo generation system to make use of European and North American GRID resources and present the results with regard to BaBar's latest cycle of Monte Carlo production. We compare job success rates and manageability issues between GRID and non-GRID production and present an investigation into ... More
Presented by Dr. Chris BREW, Dr. Alessandra FORTI on 13 Feb 2006 at 11:00
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
For the BaBar computing group: Two years ago BaBar changed from using a database event storage technology to the use of ROOT-files. This change drastically affected the simulation production within the experiment, as well as the bookkeeping and the distribution of the data. Despite these large changes to production, events were produced as needed and on time for analysis. In fact the chan ... More
Presented by Dr. Douglas SMITH on 15 Feb 2006 at 15:00
Type: poster Session: Poster
Track: Computing Facilities and Networking
Fermilab provides a primary and tertiary permanent storage facility for its High Energy Physics program and other world wide scientific endeavors. The lifetime of the files in this facility, which are maintained in automated robotic tape libraries, is typically many years. Currently the amount of data in the Fermilab permanent store facility is 3.3 PB and growing rapidly. The Fermilab " ... More
Presented by Dr. Gene OLEYNIK on 15 Feb 2006 at 09:00
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
In 2004 the Belle Experimental Collaboration reached a critical stage in their computing requirements. Due to an increased rate of data collection an extremely large amount of simulated (Monte Carlo) data was required to correctly analyse and understand the experimental data. The resulting simulation effort consumed more CPU power than was readily available to the experiment at the host insti ... More
Presented by Marco LA ROSA on 16 Feb 2006 at 14:20
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
We report on the ongoing evaluation of new 64-bit processors as they become available to us. We present the results of benchmarking these systems in various operating modes and of measuring their power consumption. To measure the performance we use HEP- and CMS-specific applications including: the analysis tool ROOT (C++), the Monte Carlo generator Pythia (FORTRAN), OSCAR (C++), the GEANT 4 based ... More
Presented by Dr. Hans WENZEL on 15 Feb 2006 at 16:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
The IT Group at DESY is involved in a variety of projects ranging from Analysis of High Energy Physics Data at the HERA Collider and Synchrotron Radiation facilities to cutting edge computer science experiments focused on grid computing. In support of these activities members of the IT group have developed and deployed a local computational facility which comprises many service nodes, computat ... More
Presented by Dr. Michael ERNST on 13 Feb 2006 at 11:00
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
The JLab Introspection Library (JIL) provides a level of introspection for C++ enabling object persistence with minimal user effort. Type information is extracted from an executable that has been compiled with debugging symbols. The compiler itself acts as a validator of the class definitions while enabling us to avoid implementing an alternate C++ preprocessor to generate dictionary informa ... More
Presented by Dr. David LAWRENCE on 14 Feb 2006 at 14:20
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
The simulation and analysis framework of the CBM collaboration will be presented. CBM (Compressed Baryonic Matter) is an experiment at the future FAIR (Facility for Antiproton and Ion Research) in Darmstadt. The goal of the experiment is to explore the phase diagram of strongly interacting matter in high-energy nucleus-nucleus collisions. The Virtual Monte Carlo concept allows performing ... More
Presented by Dr. Denis BERTINI on 14 Feb 2006 at 16:54
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
Ensuring personnel and equipment safety under all conditions while operating the complex CERN systems is vital to CERN's success. By applying accurate operating and maintenance procedures and carrying out regular safety inspections, CERN has maintained an excellent safety record. Regular safety inspections also permit the traceability of all important events that have occurred in th ... More
Presented by Mr. Stephan PETIT on 15 Feb 2006 at 16:40
Type: poster Session: Poster
Track: Online Computing
A software-agent-based control system has been implemented to control experiments running on the CLAS detector at Jefferson Lab. Within the CLAS experiments, the DAQ, trigger, detector and beam-line control systems are both logically and physically separated, and are implemented independently using a common software infrastructure. The CLAS experimental control system (ECS) was designed using earlier develo ... More
Presented by Vardan GYURJYAN on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Computing Facilities and Networking
CMD-3 is the general-purpose cryogenic magnetic detector for the VEPP-2000 electron-positron collider, which is being commissioned at the Budker Institute of Nuclear Physics (BINP, Novosibirsk, Russia). The main aspects of the physics program of the experiment are the study of known and the search for new vector mesons, the study of the ppbar and nnbar production cross sections in the vicinity of the threshold, and sea ... More
Presented by Mr. Alexei SIBIDANOV on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Event processing applications
CMD-3 is the general-purpose cryogenic magnetic detector for the VEPP-2000 electron-positron collider, which is being commissioned at the Budker Institute of Nuclear Physics (BINP, Novosibirsk, Russia). The main aspects of the physics program of the experiment are the study of known and the search for new vector mesons, the study of the ppbar and nnbar production cross sections in the vicinity of the threshold, and sea ... More
Presented by Mr. Alexander ZAYTSEV on 13 Feb 2006 at 11:00
Type: oral presentation Session: Online Computing
Track: Online Computing
The CMS Data Acquisition system is designed to build and filter events originating from approximately 500 data sources from the detector at a maximum Level 1 trigger rate of 100 kHz and with an aggregate throughput of 100 GByte/sec. For this purpose different architectures and switch technologies have been evaluated. Events will be built in two stages: the first stage, the FED Builder, will be ... More
Presented by Marco PIERI on 13 Feb 2006 at 14:20
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
The CMS simulation, based on the Geant4 toolkit and the CMS object-oriented framework, has been in production for almost two years and has delivered a total of more than 100 million physics events for the CMS Data Challenges and Physics Technical Design Report studies. The simulation software has recently been successfully ported to the new CMS Event-Data-Model-based software framework. In this pape ... More
Presented by Dr. Maya STAVRIANAKOU on 14 Feb 2006 at 14:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
CMS has chosen to adopt a distributed model for all computing in order to cope with the computing and storage requirements for processing and analysing the huge amount of data the experiment will provide from LHC startup. The architecture is a tier-organised structure of computing resources, with a Tier-0 centre at CERN, a small number of Tier-1 c ... More
Presented by Dr. Jose HERNANDEZ on 15 Feb 2006 at 09:00
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
In preparation for the start of the experiment, CMS must produce large quantities of detailed full-detector simulation. In this presentation we describe our experience with running official CMS Monte Carlo simulation on distributed computing resources. We present the implementation used to generate events on LHC Computing Grid (LCG-2) resources in Europe, as well as the imple ... More
Presented by Dr. Pablo GARCIA-ABIA on 14 Feb 2006 at 17:00
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
The Reconstruction Software for the CMS detector is designed to serve multiple use cases, from the online triggering of the High Level Trigger to the offline analysis. The software is based on the CMS Framework, and comprises reconstruction modules which can be scheduled independently. These produce and store event data ranging from low-level objects to objects useful for analysis on reduced D ... More
Presented by Dr. Tommaso BOCCALI on 14 Feb 2006 at 17:30
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
Packaging and distribution of experiment-specific software becomes a complicated task when the number of versions and external dependencies increases. With the advent of Grid computing, the distribution and update process must become a simple, robust and transparent step. Furthermore, one must take into account that running a particular application requires setup of the appropriate environment ... More
Presented by klaus RABBERTZ on 14 Feb 2006 at 17:00
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
We describe the various tools used by CMS to create and manage the packaging and distribution of software, including the various CMS software packages and the external components upon which CMS software depends. It is crucial to manage the environment to ensure that the configuration is correct, consistent, and reproducible at the many computing centres running CMS software. We describe the t ... More
Presented by Klaus RABBERTZ, Andreas NOWACK on 14 Feb 2006 at 17:20
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
The most significant data challenge for CMS in 2005 was LCG Service Challenge 3 (SC3). For CMS the main purpose of the challenge was to exercise a realistic LHC startup scenario using the complete experiment system for transferring and serving data, submitting jobs and collecting their data, employing the next-generation worldwide LHC computing service. A number of sign ... More
Presented by Lassi TUURA on 16 Feb 2006 at 14:40
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
The ARDA project focuses on delivering analysis prototypes together with the LHC experiments. The ARDA/CMS activity delivered a fully functional analysis prototype exposed to a pilot community of CMS users. The current work of integrating key components into the CMS system is described: the activity focuses on providing a coherent monitoring layer where information from diverse sources is a ... More
Presented by Dr. Julia ANDREEVA on 13 Feb 2006 at 17:20
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
We describe C++ software that reconstructs the positions, angular orientations and internal optical parameters of any optical system described by a seamless combination of many different types of optical objects. The program also handles the propagation of uncertainties, which makes it very useful for simulating the system in the design phase. The software is currently in use by the fo ... More
Presented by Pedro ARCE on 15 Feb 2006 at 17:30
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
Since October 2004, the LCG Conditions Database Project has focused on the development of COOL, a new software product for the handling of experiment conditions data. COOL merges and extends the functionalities of the two previous software implementations developed in the context of the LCG common project, which were based on Oracle and MySQL. COOL is designed to minimise the duplication o ... More
Presented by Dr. Andrea VALASSI on 13 Feb 2006 at 16:40
Type: poster Session: Poster
Track: Distributed Event production and processing
In April 2005, the LCG Conditions Database Project delivered the first production release of the COOL software, providing basic functionalities for the handling of conditions data. Since that time, several new production releases have extended the functionalities of the software. As the project is now moving into the deployment phase in Atlas and LHCb, its priorities are the optimization a ... More
Presented by Dr. Andrea VALASSI on 15 Feb 2006 at 09:00
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
The COmmon Relational Abstraction Layer (CORAL) is a C++ software system, developed within the context of the LCG persistency framework, which provides vendor-neutral software access to relational databases with defined semantics. The SQL-free public interfaces ensure the encapsulation of all the differences that one may find among the various RDBMS flavours in terms of SQL syntax and data type ... More
Presented by Dr. Ioannis PAPADOPOULOS on 13 Feb 2006 at 16:20
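CORAL itself is a C++ library, but the vendor-neutral pattern the abstract describes, hiding backend-specific SQL dialects behind a fixed interface, can be sketched in a few lines of Python. All class and method names below are illustrative (not CORAL's actual API), using the stdlib sqlite3 driver as the one concrete backend:

```python
import sqlite3

class RelationalSession:
    """Illustrative vendor-neutral session: callers never write
    backend-specific SQL; the backend adapter supplies the dialect."""
    def __init__(self, backend):
        self._backend = backend

    def create_table(self, name, columns):
        # The adapter maps abstract column types to the backend's SQL types.
        cols = ", ".join(f"{c} {self._backend.type_map[t]}" for c, t in columns)
        self._backend.execute(f"CREATE TABLE {name} ({cols})")

    def insert(self, name, row):
        placeholders = ", ".join([self._backend.placeholder] * len(row))
        self._backend.execute(f"INSERT INTO {name} VALUES ({placeholders})", row)

    def fetch_all(self, name):
        return self._backend.execute(f"SELECT * FROM {name}").fetchall()

class SQLiteBackend:
    """One concrete adapter; an Oracle or MySQL adapter would differ only here."""
    type_map = {"int": "INTEGER", "string": "TEXT"}
    placeholder = "?"  # e.g. ":1" for Oracle, "%s" for MySQL

    def __init__(self):
        self._conn = sqlite3.connect(":memory:")

    def execute(self, sql, params=()):
        return self._conn.execute(sql, params)

session = RelationalSession(SQLiteBackend())
session.create_table("runs", [("run_number", "int"), ("detector", "string")])
session.insert("runs", (1234, "tracker"))
rows = session.fetch_all("runs")
```

Swapping RDBMS then means swapping the adapter object, while user code against `RelationalSession` stays unchanged, which is the essence of the SQL-free interface idea.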
Type: poster Session: Poster
Track: Distributed Data Analysis
CMS is one of the four experiments expected to take data at the LHC. Of the order of some petabytes of data per year will be stored at several computing sites all over the world. The collaboration has to provide tools for accessing and processing the data in a distributed environment, using the Grid infrastructure. CRAB (CMS Remote Analysis Builder) is a user-friendly tool developed by INFN within the C ... More
Presented by Dr. Daniele SPIGA on 15 Feb 2006 at 09:00
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
CRAB (CMS Remote Analysis Builder) is a tool, developed by INFN within the CMS collaboration, which gives physicists the possibility to analyze large amounts of data by exploiting the huge computing power of Grid distributed systems. It is currently used to analyze the simulated data needed to prepare the Physics Technical Design Report. Data produced by CMS are distributed among several Computin ... More
Presented by Mr. Marco CORVO on 14 Feb 2006 at 17:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
An efficient and robust system for accessing computational resources and managing job operations is a key component of any Grid framework designed to support a large distributed computing environment. CREAM (Computing Resource Execution And Management) is a simple, minimal system designed to provide efficient processing of a large number of requests for computation on managed resources. Requests ar ... More
Presented by Moreno MARZOLLA on 13 Feb 2006 at 11:00
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
After successfully deploying dCache over the last few years, the dCache team re-evaluated the potential of using dCache for extremely large and heavily used installations. We identified the filesystem namespace module as one of the components that would very likely need a redesign to cope with expected requirements in the medium-term future. Having presented the initial design of Chimera dur ... More
Presented by Mr. Tigran Mkrtchyan MKRTCHYAN on 13 Feb 2006 at 16:00
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
Over the last few years, we have experienced a growing demand for hosting Java web applications. At the same time, it has been difficult to find an off-the-shelf solution that would enable load balancing, easy administration and a high level of isolation between applications hosted within a J2EE server. The architecture developed and used in production at CERN is based on a Linux cluster. A ... More
Presented by Michal KWIATEK on 13 Feb 2006 at 14:40
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
The HEP department of the University of Manchester has purchased a 1000-node cluster. The cluster will be accessible to various VOs through EGEE/LCG Grid middleware. One of the interesting aspects of the equipment bought is that each node has 2x250 GB disks, leading to a total of approximately 4TB of usable disk space. The space is intended to be managed using dCache and its resilience featu ... More
Presented by Dr. Alessandra FORTI on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Event processing applications
CMD-3 is the general-purpose cryogenic magnetic detector for the VEPP-2000 electron-positron collider, which is being commissioned at the Budker Institute of Nuclear Physics (BINP, Novosibirsk, Russia). The main aspects of the physics program of the experiment are the study of known and the search for new vector mesons, the study of the ppbar and nnbar production cross sections in the vicinity of the threshold and ... More
Presented by Mr. Sergey PIROGOV on 13 Feb 2006 at 11:00
Type: oral presentation Session: Online Computing
Track: Online Computing
The CMS silicon strip tracker (SST), comprising a sensitive area of over 200 m2 and 10M readout channels, is unprecedented in its size and complexity. The readout system is based on a 128-channel analogue front-end ASIC, optical readout and an off-detector VME board, using FPGA technology, that performs digitization, zero suppression and data formatting before forwarding the detector data to th ... More
Presented by Dr. Robert BAINBRIDGE, Robert John BAINBRIDGE on 15 Feb 2006 at 17:00
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
The Applications Area of the LCG Project is concerned with developing, deploying and maintaining that part of the physics applications software and associated supporting infrastructure software that is common among the LHC experiments. This area is managed as a number of specific projects with well-defined policies for coordination between them and with the direct participation of the primary ... More
Presented by Dr. Pere MATO on 13 Feb 2006 at 14:00
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
Solving the 'simulation=experiment' equation, which is the ultimate task of every HEP experiment, becomes impossible without computer simulation techniques. HEP Monte Carlo simulations, traditionally written as FORTRAN codes, became complex computational projects: their rich physical content needs to be matched with the software organization of the experimental collaborations to make them a pa ... More
Presented by Mr. Piotr GOLONKA on 16 Feb 2006 at 14:40
Type: poster Session: Poster
Track: Event processing applications
The three dimensional electrostatic field configuration in a multiwire proportional chamber (MWPC) has been simulated using an efficient boundary element method (BEM) solver set up to solve an integral equation of the first kind. To compute the charge densities over the bounding surfaces representing the system for known potentials, the nearly exact formulation of BEM has been implemented such ... More
Presented by Dr. Nayana MAJUMDAR on 13 Feb 2006 at 11:00
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Mathai JOSEPH on 16 Feb 2006 at 11:30
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Rajiv GAVAI on 16 Feb 2006 at 12:00
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
Physics analyses at modern collider experiments enter a new dimension of event complexity. At the LHC, for instance, physics events will consist of the final-state products of the order of 20 simultaneous collisions. In addition, a number of today's physics questions are studied in channels with complex event topologies and configuration ambiguities occurring during event analysis. The ... More
Presented by Dr. Steffen G. KAPPLER on 14 Feb 2006 at 14:40
Type: poster Session: Poster
Track: Event processing applications
The size and complexity of LHC experiments raise unprecedented challenges not only in terms of detector design, construction and operation, but also in terms of software models and data persistency. One of the more challenging tasks is the calibration of the 375000 Monitored Drift Tubes, that will be used as precision tracking detectors in the Muon Spectrometer of the ATLAS experiment. An accu ... More
Presented by Dr. Monica VERDUCCI on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Computing Facilities and Networking
GridKa, the German Tier-1 center in the Worldwide LHC Computing Grid (WLCG), supports all four LHC experiments (ALICE, ATLAS, CMS and LHCb) as well as, currently, some non-LHC high-energy physics experiments. Several German and European Tier-2 sites will be connected to GridKa as their Tier-1. We present technical and organizational aspects pertaining to the connection and support of the Tier-2 sit ... More
Presented by Dr. Andreas HEISS on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
The University of Wisconsin campus research computing grid is an offshoot of the Condor project, which provides middleware for many worldwide computing grids. The Grid Laboratory of Wisconsin (GLOW) and other UW-based computing facilities exploit Condor technologies to provide research computing for a variety of fields, including high-energy physics projects on the UW campus. The Condor/GLOW ... More
Presented by Prof. Sridhara DASU on 15 Feb 2006 at 09:00
Type: oral presentation Session: Online Computing
Track: Online Computing
LHCb has an integrated Experiment Control System (ECS), based on the commercial SCADA system PVSS. The novelty of this control system is that, in addition to the usual control and monitoring of all experimental equipment, it also provides control and monitoring for software processes, namely the on-line trigger algorithms. The trigger decisions are computed by algorithms on an event filte ... More
Presented by Dr. Eric VAN HERWIJNEN on 15 Feb 2006 at 14:00
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
DIAL is a generic framework for distributed analysis. The heart of the system is a scheduler (also called analysis service) that receives high-level processing requests expressed in terms of an input dataset and a transformation to act on that dataset. The scheduler splits the dataset, applies the transformation to each subdataset to produce a new subdataset, and then merges these to produce t ... More
Presented by David ADAMS on 13 Feb 2006 at 14:00
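The split-transform-merge cycle the DIAL abstract describes can be illustrated with a minimal sketch. Names and structure here are purely illustrative (not DIAL's actual interface), and a real scheduler would farm the middle step out to remote analysis services rather than run it in-process:

```python
def split(dataset, n):
    """Split a dataset (here just a list of events) into n subdatasets."""
    return [dataset[i::n] for i in range(n)]

def merge(subresults):
    """Merge the per-subdataset results back into one result dataset."""
    merged = []
    for r in subresults:
        merged.extend(r)
    return merged

def run(dataset, transformation, n_workers=4):
    """Apply the transformation to each subdataset, then merge.
    In a distributed scheduler each part would go to a different node."""
    parts = split(dataset, n_workers)
    return merge(transformation(p) for p in parts)

# Toy transformation: select "events" above a threshold.
events = list(range(100))
selected = run(events, lambda part: [e for e in part if e > 90])
```

The point of the split/merge pair is that the transformation itself stays oblivious to the distribution: it only ever sees a subdataset.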
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
Results from and progress on the development of a Data Intensive and Network Aware (DIANA) scheduling engine, primarily for data-intensive sciences such as physics analysis, are described. Scientific analysis tasks can involve thousands of computing, data handling, and network resources, and the size of the input and output files and the amount of overall storage space allotted to a user necess ... More
Presented by Mr. Ashiq ANJUM on 15 Feb 2006 at 14:20
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
DIRAC is the LHCb Workload and Data Management System and is based on a service-oriented architecture. It enables generic distributed computing with lightweight Agents and Clients for job execution and data transfers. The DIRAC code base is 99% Python, with all remote requests handled using the XML-RPC protocol. DIRAC is used for the submission of production and analysis jobs by the LHCb collaborat ... More
Presented by Mr. Adrian CASAJUS RAMO on 15 Feb 2006 at 16:40
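The Python-plus-XML-RPC combination the abstract mentions is easy to demonstrate with the standard library alone. This is a self-contained toy, not DIRAC code: the `submit_job` service and its return values are invented for illustration.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def submit_job(executable, args):
    """Toy 'job submission' handler: returns a fake job ID and the command."""
    return {"jobID": 42, "command": f"{executable} {' '.join(args)}"}

# Bind to an ephemeral port on localhost and expose the handler.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False, allow_none=True)
server.register_function(submit_job, "submit_job")
port = server.server_address[1]

# Serve requests in a background thread, as a lightweight service would.
t = threading.Thread(target=server.serve_forever, daemon=True)
t.start()

# A lightweight client makes the remote request transparently.
client = ServerProxy(f"http://127.0.0.1:{port}/")
result = client.submit_job("echo", ["hello"])
server.shutdown()
```

Because XML-RPC marshals plain Python lists and dicts, client and service stay decoupled with almost no boilerplate, which is one plausible reason a mostly-Python system would choose it.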
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
DIRAC is the LHCb Workload and Data Management system for Monte Carlo simulation, data processing and distributed user analysis. Using DIRAC, a variety of resources may be integrated, including individual PC's, local batch systems and the LCG grid. We report here on the progress made in extending DIRAC for distributed user analysis on LCG. In this paper we describe the advances in the workload ... More
Presented by Mr. Stuart PATERSON on 14 Feb 2006 at 15:00
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
DIRAC is the LHCb Workload and Data Management system used for Monte Carlo production, data processing and distributed user analysis. Such a wide variety of applications requires a general approach to the tasks of job definition, configuration and management. In this paper, we present a suite of tools called a Production Console, which is a general framework for job formulation, configura ... More
Presented by Dr. Gennady KUZNETSOV on 13 Feb 2006 at 14:40
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
DIRAC is the LHCb Workload and Data Management system used for Monte Carlo production, data processing and distributed user analysis. It is designed to be lightweight and easy to deploy, which allows different kinds of computing resources, including stand-alone PCs, computing clusters and Grid systems, to be integrated in a single system. DIRAC uses the paradigm of an overlay network of “Pilot Agents”, ... More
Presented by Dr. Andrei TSAREGORODTSEV on 15 Feb 2006 at 14:00
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
Availability approaching 100% and response times converging to 0 are two things that users expect of any system they interact with. Even if the real importance of these factors is a function of the size and nature of the project, today's users are rarely tolerant of performance issues with systems of any size. Commercial solutions for load balancing and failover are plentiful. Citrix NetScaler ... More
Presented by Mr. Vladimir BAHYL on 13 Feb 2006 at 14:00
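As a toy illustration of the load-balancing-with-failover idea (not CERN's actual implementation, and the server names below are invented), a round-robin selector that skips nodes whose health check fails might look like:

```python
import itertools

class RoundRobinBalancer:
    """Cycle through servers, skipping any whose health check fails."""
    def __init__(self, servers, is_healthy):
        self._cycle = itertools.cycle(servers)
        self._n = len(servers)
        self._is_healthy = is_healthy

    def pick(self):
        # Try each server at most once per pick; fail only if all are down.
        for _ in range(self._n):
            server = next(self._cycle)
            if self._is_healthy(server):
                return server
        raise RuntimeError("no healthy servers available")

# Hypothetical pool with one node marked down by its health check.
down = {"node02"}
lb = RoundRobinBalancer(["node01", "node02", "node03"],
                        is_healthy=lambda s: s not in down)
picks = [lb.pick() for _ in range(4)]
```

A production balancer would of course probe health asynchronously and weight servers by load, but the core behaviour of rotating over members while excluding failed ones is the same.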
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
Hadron Collider experiments in progress at Fermilab’s Tevatron and under construction at the Large Hadron Collider (LHC) at CERN will record many petabytes of data in pursuing the goals of understanding nature and searching for the origin of mass. Computing resources required to analyze these data far exceed the capabilities of any one institution. The computing grid has long been recognize ... More
Presented by Prof. Patrick SKUBIC on 13 Feb 2006 at 11:00
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
The CMS silicon tracker, consisting of about 17,000 detector modules divided into micro-strip and pixel sensors, will be the largest silicon tracker ever realized for high energy physics experiments. The detector performance will be monitored using applications based on the CMS Data Quality Monitoring (DQM) framework and running on the High-Level Trigger Farm as well as local DAQ systems. The ... More
Presented by Dr. Suchandra DUTTA, Dr. Vincenzo CHIOCHIA on 16 Feb 2006 at 15:12
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
This paper describes the integration of Storage Resource Management (SRM) technology into the grid-based analysis computing framework of the STAR experiment at RHIC. Users in STAR submit jobs on the grid using the STAR Unified Meta-Scheduler (SUMS), which in turn makes best use of Condor-G to send jobs to remote sites. However, the result of each job may be sufficiently large that existing sol ... More
Presented by Dr. Eric HJORT on 14 Feb 2006 at 16:00
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
In the ATLAS Computing Model widely distributed applications require access to terabytes of data stored in relational databases. In preparation for data taking, the ATLAS experiment at the LHC has run a series of large-scale computational exercises to test and validate multi-tier distributed data grid solutions under development. We present operational experience in ATLAS database servi ... More
Presented by A. VANIACHINE on 13 Feb 2006 at 15:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
DESY is one of the world-wide leading centers for research with particle accelerators and a center for research with synchrotron light. The hadron-electron collider HERA houses four experiments which are taking data and will be operated until mid 2007. DESY has been operating a LCG-based Grid infrastructure since 2004 which was set up in the context of the EU e-science Project EGEE. The H ... More
Presented by Dr. Andreas GELLRICH on 15 Feb 2006 at 09:00
Type: oral presentation Session: Online Computing
Track: Online Computing
The CDF Experiment's control and configuration system consists of several database applications and supporting application interfaces in both Java and C++. The CDF Oracle database server runs on a SunOS platform and provides configuration data, real-time monitoring information and historical run-conditions archiving. The Java applications running on the Scientific Linux operating system imp ... More
Presented by Dr. William BADGETT on 13 Feb 2006 at 16:40
Type: poster Session: Poster
Track: Distributed Data Analysis
The D-Grid initiative, following similar programs in the USA and the UK, is intended to help set up a nationwide German Grid infrastructure. Within work package 3 of the HEP Community Grid, distributed analysis tools that make use of Grid resources are to be developed. A starting point is the analysis framework ROOT. A set of abstract ROOT classes (TGrid ...) provides the user interface to enable ... More
Presented by Dr. Kilian SCHWARZ on 15 Feb 2006 at 09:00
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
The Monte Carlo Processing Service (MCPS) package is a Python-based workflow modelling and job creation package used to realise CMS software workflows and create executable jobs for different environments, ranging from local-node operation to wide-ranging distributed computing platforms. A component-based approach to modelling workflows is taken to allow both executable tasks and data h ... More
Presented by Dr. Peter ELMER on 14 Feb 2006 at 17:20
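The component-based workflow idea, chaining executable tasks with data-handling steps, might be sketched as follows. The class and field names are illustrative only, not the MCPS API:

```python
class Component:
    """Base class: each workflow step consumes and returns a payload dict."""
    def execute(self, payload):
        raise NotImplementedError

class GenerateEvents(Component):
    """An 'executable task' step: produces some toy event data."""
    def execute(self, payload):
        payload["events"] = list(range(payload["n_events"]))
        return payload

class StageOut(Component):
    """A data-handling step: records where the output was 'stored'."""
    def __init__(self, destination):
        self.destination = destination
    def execute(self, payload):
        payload["stored_at"] = f"{self.destination}/job_{payload['job_id']}"
        return payload

class Workflow:
    """Runs components in order, threading the payload through each."""
    def __init__(self, components):
        self.components = components
    def run(self, payload):
        for c in self.components:
            payload = c.execute(payload)
        return payload

result = Workflow([GenerateEvents(), StageOut("/store/mc")]).run(
    {"job_id": 7, "n_events": 3})
```

Modelling both task types behind the same `execute` interface is what lets one workflow description be rendered into jobs for very different execution environments.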
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
CMS is preparing seven remote Tier-1 computing facilities to archive and serve experiment data. These centers represent the bulk of CMS's data serving capacity, a significant resource for reprocessing data, all of the simulation archiving capacity, and operational support for Tier-2 centers and analysis facilities. In this paper we present the progress on deploying the largest remote Tier-1 fa ... More
Presented by Dr. Ian FISK on 15 Feb 2006 at 15:00
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
The library of Monte Carlo generator tools maintained by LCG (GENSER) guarantees centralized software and physics support for the simulation of fundamental interactions, and is currently widely adopted by the LHC collaborations. While the activity in LCG Phase I concentrated mostly on the standardization, integration and maintenance of the existing Monte Carlo packages, more em ... More
Presented by Dr. Mikhail KIRSANOV on 16 Feb 2006 at 14:00
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
The traditional dissemination channels of research results, via article publishing in scientific journals, are facing a profound metamorphosis driven by the advent of the internet and broader access to electronic resources. This change is naturally leading away from the traditional publishing paradigm towards an archive-based approach in which institutional libraries organize, manage and disse ... More
Presented by Alberto PEPE on 15 Feb 2006 at 17:00
Type: poster Session: Poster
Track: Distributed Data Analysis
The German LHC computing resources are built on the Tier-1 center at GridKa in Karlsruhe and several planned Tier-2 centers. These facilities provide us with a testbed on which we can evaluate current distributed analysis tools. Various aspects of the analysis of simulated data using LCG middleware and local batch systems have been tested and evaluated. Here we present our experiences with the ... More
Presented by Dr. Johannes ELMSHEUSER on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Distributed Data Analysis
The ATLAS production system provides access to resources across several Grid flavors. Based on experience from the last data challenge, the system has evolved. While key aspects of the old system are kept (supervisor and executors), new implementations of the components aim for more stable and scalable operation. An important aspect is also the integration with the new data management sys ... More
Presented by Santiago GONZALEZ DE LA HOZ on 15 Feb 2006 at 09:00
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
The CMS computing model provides reconstruction of and access to recorded data of the CMS detector as well as to Monte Carlo (MC) generated data. Due to the increased complexity, these functionalities will be provided by a tier structure of globally located computing centers using Grid technologies. In the CMS baseline, user access to data is provided by the CMS Remote Analysis Builder (CRAB) an ... More
Presented by Oliver GUTSCHE on 15 Feb 2006 at 14:00
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Peter ELMER on 15 Feb 2006 at 11:15
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
Within 5 years CMS expects to be managing many tens of petabytes of data at tens of sites around the world. This represents more than an order-of-magnitude increase in data volume over existing HEP experiments. This presentation will describe the underlying concepts and architecture of the CMS model for distributed data management, including connections to the new CMS Event Data Model. The technic ... More
on 15 Feb 2006 at 17:20
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
The new version 3 of the ROOT based GSI standard analysis framework GO4 (GSI Object Oriented Online Offline) has been released. GO4 provides multithreaded remote communication between analysis process and GUI process, a dynamically configurable analysis framework, and a Qt based GUI with embedded ROOT graphics. In the new version 3 a new internal object manager was developed. Its functionali ... More
Presented by Dr. Jörn ADAMCZEWSKI on 13 Feb 2006 at 14:20
Type: poster Session: Poster
Track: Software Tools and Information Systems
Packaging and distribution of experiment-specific software becomes a complicated task when the number of versions and external dependencies increases. In order to run a single application, it is often enough to create an appropriate runtime environment that ensures the availability of the required shared objects and data files. The idea of distributing software applications based on runtime environment ... More
Presented by Natalia RATNIKOVA on 15 Feb 2006 at 09:00
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
High Energy and Nuclear Physics (HENP) experiments generate unprecedented volumes of data which need to be transferred, analyzed and stored. This in turn requires the ability to sustain, over long periods, the transfer of large amounts of data between collaborating sites, with relatively high throughput. Groups such as the Particle Physics Data Grid (PPDG) and Globus are developing and dep ... More
Presented by Dr. Les COTTRELL on 14 Feb 2006 at 17:20
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
Periodically an experiment will reprocess previously taken data to take advantage of advances in its reconstruction code and an improved understanding of the detector. Within a period of ~6 months the DØ experiment has reprocessed, on the grid, a large fraction (0.5 fb-1) of the Run II data. This corresponds to some 1 billion events or 250 TB of data and used raw data as input, requiring remote d ... More
Presented by Dr. Joel SNOW on 16 Feb 2006 at 14:00
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
Eclipse is a popular, open source, development platform and application framework. It provides extensible tools and frameworks that span the complete software development lifecycle. Plugins exist for all the major parts that today make up the physicist software toolkit in ATLAS: programming environments/editors for C++ and python, browsers for CVS and SVN, networking with ssh and sftp, etc. It ... More
Presented by Wim LAVRIJSEN on 16 Feb 2006 at 15:00
Type: poster Session: Poster
Track: Computing Facilities and Networking
An ACL (access control list) is one of the tools that network administrators often use to limit access to various network objects, e.g. to restrict access to certain network areas for specific traffic patterns. ACLs are also used to control traffic forwarding, e.g. for implementing so-called policy-based routing. Nowadays there is demand to update ACLs dynamically by programma ... More
Presented by Mr. Andrey BOBYSHEV on 15 Feb 2006 at 09:00
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
DESY is one of the world's leading centers for research with particle accelerators and synchrotron light. The computer center manages a data volume of the order of 1 PB and houses around 1000 CPUs. During DESY's engagement as a Tier-2 center for LHC experiments these numbers will at least double. In view of these increasing activities an improved fabric management infrastructure is being establis ... More
Presented by Dr. Mathias DE RIESE on 16 Feb 2006 at 14:20
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
The dCache collaboration actively works on the implementation and improvement of the features and grid support of dCache storage. It has delivered a Storage Resource Manager (SRM) interface, a GridFtp server, a Resilient Manager and interactive web monitoring tools. SRMs are middleware components whose function is to provide dynamic space allocation and file management of shared storage components ... More
Presented by Timur PERELMUTOV on 14 Feb 2006 at 16:40
Type: poster Session: Poster
Track: Software Components and Libraries
The most commonly deployed library for handling Secure Sockets Layer (SSL) and Transport Layer Security (TLS) is OpenSSL. The library is used by the client to negotiate connections to the server. It also offers features for caching parts of the required information, thus speeding up the process and reducing the cost of renegotiation. These features are generally not used fully. This paper ... More
Presented by Dr. Jens JENSEN on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
The heterogeneity of resources in computational grids, such as the Canadian GridX1, makes application deployment a difficult task. Virtual machine environments promise to simplify this task by homogenizing the execution environment across the grid. One such environment, Xen, has been demonstrated to be a high-performance virtual machine monitor. In this work, we evaluate the applicabilit ... More
Presented by Dr. Ashok AGARWAL on 13 Feb 2006 at 11:00
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
Many Goodness-of-Fit tests have been collected in a new open-source Statistical Toolkit: Chi-squared, Kolmogorov-Smirnov, Goodman, Kuiper, Cramer-von Mises, Anderson-Darling, Tiku, Watson, as well as novel weighted formulations of some tests. No single Goodness-of-Fit test in the toolkit is optimal for every analysis case. Statistics does not provide a universal recipe to identify t ... More
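As an illustration of what such tests compute, the two-sample Kolmogorov-Smirnov statistic (the maximum distance between two empirical CDFs) can be sketched in a few lines of standalone Python. This is our own sketch, not the toolkit's C++/AIDA interface, and the function name is invented:

```python
import random

def ks_statistic(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the empirical CDFs of the two samples."""
    xs, ys = sorted(xs), sorted(ys)
    nx, ny = len(xs), len(ys)
    i = j = 0
    d = 0.0
    while i < nx and j < ny:
        # Walk through the merged samples, tracking the CDF gap.
        if xs[i] <= ys[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / nx - j / ny))
    return d

random.seed(42)
a = [random.gauss(0.0, 1.0) for _ in range(1000)]
b = [random.gauss(0.0, 1.0) for _ in range(1000)]  # same parent distribution
c = [random.gauss(0.5, 1.0) for _ in range(1000)]  # shifted distribution
print(ks_statistic(a, b))  # small distance
print(ks_statistic(a, c))  # visibly larger distance
```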
Presented by Dr. Maria Grazia PIA, Dr. Barbara MASCIALINO, Dr. Andreas PFEIFFER, Dr. Alberto RIBON, Dr. Paolo VIARENGO on 14 Feb 2006 at 16:40
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Elizabeth SEXTON-KENNEDY on 14 Feb 2006 at 09:30
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
We describe the design of Atlantis, an event visualisation program for the ATLAS experiment at CERN, and the other supporting applications within the visualisation project, mainly focusing on the technologies employed. The ATLAS visualisation consists of several parts with Atlantis being the central application. The main purpose of Atlantis is to help visually investigate and intuitively under ... More
Presented by Zdenek MAXA on 16 Feb 2006 at 14:36
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
BOSS (Batch Object Submission System) has been developed to provide logging, bookkeeping and real-time monitoring of jobs submitted to a local farm or a grid system. The information is stored persistently in a relational database for further processing. By means of user-supplied filters, BOSS extracts the specific job information to be logged from the standard streams of the job itself and ... More
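The filter idea can be sketched as follows; this is a hypothetical Python illustration of extracting loggable information from a job's standard output. The key names are of our own invention, and BOSS's real filters are user-supplied scripts, not this code:

```python
import re

# Match "key = value" progress lines; the key names are invented
# for illustration and not taken from BOSS itself.
PATTERN = re.compile(r"^(run|events_read|events_written)\s*=\s*(\S+)")

def filter_job_output(lines):
    """Collect matched key/value pairs from a job's standard stream."""
    info = {}
    for line in lines:
        m = PATTERN.match(line)
        if m:
            info[m.group(1)] = m.group(2)
    return info

stdout = [
    "starting job",
    "run = 1234",
    "events_read = 5000",
    "events_written = 4871",
]
print(filter_job_output(stdout))
```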
Presented by Mr. Stuart WAKEFIELD on 14 Feb 2006 at 14:20
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
The project “EvtGen in ATLAS” has the aim of accommodating EvtGen into the LHC-ATLAS context. As such it comprises both physics and software aspects of the development. ATLAS has developed interfaces to enable the use of EvtGen within the experiment's object-oriented simulation and data-handling framework ATHENA, and furthermore has enabled the running of the software on the LCG. Modifica ... More
Presented by Roger JONES on 13 Feb 2006 at 17:48
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
The LHC Computing Grid Project (LCG) provides and operates the computing support and infrastructure for the LHC experiments. In the present phase, the experiments' systems are being commissioned and the LCG Experiment Integration Support team provides support for the integration of the underlying grid middleware with the experiment-specific components. The support activity during the experiment ... More
Presented by Dr. Simone CAMPANA on 14 Feb 2006 at 16:40
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
Physics analysis of large amounts of data by many users requires the use of Grid resources. It is however important that users see a single environment for developing and testing algorithms locally and for running on large data samples on the Grid. The Ganga job wizard, developed by LHCb and ATLAS, provides physicists with such an integrated environment for job preparation, bookkeeping and ar ... More
Presented by Dr. Ulrik EGEDE on 15 Feb 2006 at 14:40
Type: poster Session: Poster
Track: Distributed Event production and processing
The German Grid computing centre "GridKa" offers large computing and storage facilities to the Tevatron and LHC experiments, as well as BaBar and Compass. It was the first large-scale CDF cluster to adopt and use the FermiGrid software "SAM" to enable users to perform data-intensive analyses. The system has been operated at production level for about 2 years. We review the challenges and ... More
Presented by Mr. Ulrich KERZEL, Dr. Thomas KUHR on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Computing Facilities and Networking
The ESLEA (Exploitation of Switched Lightpaths for E-science Applications) project has been working to put switched optical lightpath technology at the service of key large scientific projects. Central to the activity is the provision of services to the ATLAS experiment. The project is facing the practical problems of finding the best way to interface the power (but also the restrictions) of ... More
Presented by Dr. Roger JONES, Mr. Brian DAVIES on 15 Feb 2006 at 09:00
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
The Semantic Web shows great potential in the HEP community as an aggregation mechanism for weakly structured data and a knowledge management tool for acquiring, accessing, and maintaining knowledge within experimental collaborations. FOAF (Friend-Of-A-Friend) (http://www.foaf-project.org/) is an RDFS/OWL ontology (some of the fundamental Semantic Web technologies) for expressing informati ... More
Presented by Bebo WHITE on 13 Feb 2006 at 14:20
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
The ALICE Offline Project has developed a virtual interface to the detector transport code called Virtual Monte Carlo. It isolates the user code from changes of the detector simulation package and hence allows a seamless transition from GEANT3 to GEANT4 and FLUKA. Moreover, a new geometrical modeler has been developed in collaboration with the ROOT team, and successfully interfaced to the t ... More
Presented by Andreas MORSCH on 14 Feb 2006 at 17:12
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
The LCG [1] has adopted a hierarchical Grid computing model with a Tier 0 centre at CERN, national Tier 1 centres and regional Tier 2 centres. The roles of the different Tier centres are described in the LCG Technical Design Report [2] and the levels of service required from each level of Tier centre are described in the LCG Memorandum of Understanding [3]. Many of the Tier 2 centres are ... More
Presented by Dr. David COLLING, Dr. Olivier VAN DER AA on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Software Tools and Information Systems
Information Technology (IT) evolves quickly, and it is not easy to adopt software from IT into data acquisition (DAQ) because such software sometimes depends on particular OSs, languages and communication protocols. This dependency makes it inconvenient to construct data acquisition software, so an experimental group typically builds its own DAQ software according to its own requirem ... More
Presented by Dr. Yoshiji YASU on 13 Feb 2006 at 11:00
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
FermiGrid is a cooperative project across the Fermilab Computing Division and its stakeholders which includes four key components: Centrally Managed & Supported Common Grid Services, Stakeholder Bilateral Interoperability, Development of OSG Interfaces for Fermilab and Exposure of the Permanent Storage System. The initial goals, current status and future plans for FermiGrid will be ... More
Presented by Dr. Keith CHADWICK on 13 Feb 2006 at 15:00
Type: poster Session: Poster
Track: Software Components and Libraries
This short communication presents our first experience with C# and Mono in an OpenScientist context, mainly an attempt to integrate Inventor within a C# context and then within the native GUI API that comes with C#. We also want to point out the perspectives, for example within AIDA.
Presented by Mr. Laurent GARNIER on 13 Feb 2006 at 11:00
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
Contemporary Grids are characterized by a middleware that provides the necessary virtualization of computation and data resources for the shared working environment of the Grid. In a large-scale view, different middleware technologies and implementations have to coexist. The SOA approach provides the needed architectural backbone for interoperable environments, where different providers ca ... More
Presented by Giuseppe AVELLINO on 15 Feb 2006 at 14:20
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
Monitoring activity plays an essential role in Grid Computing: it deals with the dynamics, variety and geographical distribution of Grid resources in order to measure important parameters and provide relevant information of a Grid system related to aspects such as usage, behaviour and performance. One of the basic requirements for a monitoring service is the capability of detection and notific ... More
Presented by Ms. Natascia DE BORTOLI on 15 Feb 2006 at 09:00
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
One of the main design challenges is the task of selecting appropriate Graphical User Interface (GUI) elements and organizing them to successfully meet the application requirements. How should the basic user interface elements (so-called widgets, from `window gadgets') be chosen and assigned to the individual interaction panels? How should these panels be organized into appropriate levels of the applicat ... More
Presented by Mr. Fons RADEMAKERS on 13 Feb 2006 at 16:40
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
During this session we will describe and demonstrate the MonALISA (MONitoring Agents using A Large Integrated Services Architecture) and the new enhanced VRVS (Virtual Room Videoconferencing System) systems, and their integration to provide a next generation of collaboration system called EVO. The melding of these two systems creates a distributed intelligent system that provides an efficient ... More
Presented by Mr. Philippe GALVEZ on 16 Feb 2006 at 14:40
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
With its increasing data samples, the RHIC/STAR experiment has faced a challenging data management dilemma: solutions using cheap disks attached to processing nodes have rapidly become more economical than standard centralized storage. At the cost of extra data management, the STAR experiment moved to a multiple-component, locally distributed data model rendered viable by the introduction of ... More
Presented by Mr. Pavel JAKL on 15 Feb 2006 at 16:00
Type: oral presentation Session: Public Lecture
Track: Plenary
Presented by Wolfgang VON RUEDEN on 16 Feb 2006 at 17:00
Type: oral presentation Session: Online Computing
Track: Online Computing
At the upcoming new Facility for Antiproton and Ion Research FAIR at GSI the Compressed Baryonic Matter experiment CBM requires a new architecture of front-end electronics, data acquisition, and event processing. The detector systems of CBM are a Silicon Tracker System, RICH detectors, a TRD, RPCs, and an electromagnetic calorimeter. The envisioned interaction rate of 10 MHz produces a data ra ... More
Presented by Dr. Hans G. ESSEL on 13 Feb 2006 at 16:20
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
Ganga is a lightweight, end-user tool for job submission and monitoring and provides an open framework for multiple applications and submission backends. It is developed in a joint effort in LHCb and ATLAS. The main goal of Ganga is to effectively enable large-scale distributed data analysis for physicists working in the LHC experiments. Ganga offers simple, pleasant and consistent user exper ... More
Presented by Karl HARRISON on 15 Feb 2006 at 14:40
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
GEANT4e is a package of the GEANT4 Toolkit that allows a track to be propagated together with its error parameters. It uses the standard GEANT4 code to propagate the track, making a helix approximation (with the step controlled by the user) using the same equations as GEANT3/GEANE. We present here a first working prototype of the GEANT4e package and compare its results and p ... More
Presented by Mr. Pedro ARCE on 13 Feb 2006 at 14:36
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
An object-oriented package for parameterizing electromagnetic showers in the framework of the Geant4 toolkit has been developed. This parameterization is based on the algorithms of the GFLASH package (implemented in Geant3 / FORTRAN), but has been adapted to the new simulation context of Geant4. This package can substitute for the full tracking of high-energy electrons/positrons (normally from abov ... More
Presented by Joanna WENG on 14 Feb 2006 at 14:36
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
We will describe the architecture and implementation of the new accounting service for the Open Science Grid. Gratia's main goal is to provide the OSG stakeholders with a reliable and accurate set of views of the usage of resources across the OSG. Gratia implements a service-oriented, secure framework for the necessary collectors and sensors. Gratia also provides repositories and access tool ... More
Presented by Mr. Philippe CANAL on 15 Feb 2006 at 16:20
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
While remote control of, and data collection from, instrumentation was part of the initial Grid concept, most recent Grid developments have concentrated on the sharing of distributed computational and storage resources. The GRIDCC project is working to bring instrumentation back to the Grid alongside compute and storage resources. To this end we have defined an Instrument Element (IE) as ... More
Presented by Dr. David COLLING on 15 Feb 2006 at 09:00
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
Several HENP laboratories in the Paris region have joined together to provide an LCG/EGEE Tier2 center. This resource, called GRIF, will focus on LCG experiments but will also be open to EGEE users from other disciplines and to local users. It will provide resources for both analysis and simulation and offer a large storage space (350 TB planned by end of 2007). This Tier2 will have resources ... More
Presented by Mr. Michel JOUVIN on 13 Feb 2006 at 16:00
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
The complexity of the Geant4 code requires careful testing of all of its components, especially before major releases. In this talk, we will concentrate on the recent development of an automatic suite for testing hadronic physics in high energy calorimetry applications. The idea is to use a simplified set of hadronic calorimeters, with different beam particle types, and various beam energies, ... More
Presented by Dr. Alberto RIBON on 13 Feb 2006 at 16:18
Type: poster Session: Poster
Track: Event processing applications
The Muon Digitization is the simulation of the Raw Data Objects (RDO), or the electronic output, of the Muon Spectrometer. It has been recently completely re-written to run within the Athena framework and to interface with the Geant4 Muon Spectrometer detector simulation. The digitization process consists of two steps: in the first step, the output of the detector simulation, henceforth ref ... More
Presented by Daniela REBUZZI on 13 Feb 2006 at 11:00
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
Geant4 has become an established tool, in production for the majority of LHC experiments during the past two years, and in use in many other HEP experiments and for applications in medical, space and other fields. Improvements and extensions to its capabilities continue, while its physics modeling is refined and results accumulate for its validation across a variety of uses. An overview of ... More
Presented by Dr. John APOSTOLAKIS on 13 Feb 2006 at 14:00
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
The quantitative results of a study concerning Geant4 simulation in a distributed computing environment (local farm and LCG GRID) are presented. The architecture of the system, based on DIANE, is presented; it allows a Geant4 application to be configured transparently for sequential execution (on a single PC) and for parallel execution on a local PC farm or on the GRID. Quantitative results conce ... More
Presented by Dr. Maria Grazia PIA, Dr. Susanna GUATELLI, Dr. Patricia MENDEZ LORENZO, Mr. Jakub MOSCICKI on 15 Feb 2006 at 14:20
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
The Geometry Description Markup Language (GDML) is a specialised XML-based language designed as an application-independent persistent format for describing detector geometries. It serves to implement 'geometry trees' which correspond to the hierarchy of volumes a detector geometry can be composed of, to identify the position of individual solids, and to describe the mat ... More
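The 'geometry tree' idea can be sketched with a hand-written fragment in the spirit of GDML, parsed with Python's standard library. The element and attribute names here are simplified for illustration and not validated against the actual GDML schema:

```python
import xml.etree.ElementTree as ET

# A schematic fragment in the spirit of GDML (illustrative only):
# solids are defined once, then referenced from a hierarchy of
# logical volumes that may place daughter volumes inside themselves.
doc = """
<gdml>
  <solids>
    <box name="worldBox" x="1000" y="1000" z="1000"/>
    <tube name="beamPipe" rmax="30" z="800"/>
  </solids>
  <structure>
    <volume name="Pipe"><solidref ref="beamPipe"/></volume>
    <volume name="World">
      <solidref ref="worldBox"/>
      <physvol><volumeref ref="Pipe"/></physvol>
    </volume>
  </structure>
</gdml>
"""

root = ET.fromstring(doc)
solids = {s.get("name") for s in root.find("solids")}
for vol in root.find("structure"):
    ref = vol.find("solidref").get("ref")
    assert ref in solids  # every volume must reference a known solid
    print(vol.get("name"), "->", ref)
```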
Presented by Dr. Witold POKORSKI on 15 Feb 2006 at 14:20
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
Gled is an OO research framework for fast prototyping of applications in distributed and multi-threaded environments with support for direct data interaction and dynamic visualization. It is an extension of the ROOT framework and thus inherits its core features, including object serialization, versatile I/O infrastructure (files with inner directory structures, trees, rootd), CINT -- the C/C++ ... More
Presented by Matevz TADEL on 15 Feb 2006 at 17:40
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
The higher instantaneous luminosity of the Tevatron Collider forces large increases in computing requirements for the CDF experiment, which has to be able to cover future needs of data analysis and MC production. CDF can no longer afford to rely on dedicated resources to cover all of its needs and is therefore moving toward shared Grid resources. CDF has been relying on a set of CDF Analysis Farms (C ... More
Presented by Subir SARKAR on 14 Feb 2006 at 14:00
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
The organization and management of the user support in a global e-science computing infrastructure such as the Worldwide LHC Computing Grid (WLCG) is one of the challenges of the grid. Given the widely distributed nature of the organization, and the spread of expertise for installing, configuring, managing and troubleshooting the grid middleware services, a standard centralized model could not ... More
Presented by Dr. Flavia DONNO, Dr. Marco VERLATO on 13 Feb 2006 at 17:00
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Gang CHEN on 16 Feb 2006 at 09:30
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Ken MIURA on 16 Feb 2006 at 09:00
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Simon LIN on 16 Feb 2006 at 10:00
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
We present a report on Grid activities in Pakistan over the last three years and conclude that there is significant technical and economic activity due to the participation in Grid research and development. We started collaboration with participation in the CMS software development group at CERN and Caltech in 2001. This has led to the current setup for CMS production and the LCG Grid deployme ... More
Presented by Prof. Arshad ALI on 13 Feb 2006 at 16:20
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
Data management has proved to be one of the hardest jobs to do in a grid environment. In particular, file replication has suffered problems of transport failures, client disconnections, duplication of current transfers and resultant server saturation. To address these problems the Globus and gLite grid middleware offer new services which improve the resiliency and robustness of file re ... More
Presented by Dr. Graeme A STEWART on 14 Feb 2006 at 15:00
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
Simulations have been performed with the grid simulator OptorSim using the expected analysis patterns from the LHC experiments and a realistic model of the LCG at LHC startup, with thousands of user analysis jobs running at over a hundred grid sites. It is shown, first, that dynamic data replication plays a significant role in the overall analysis throughput in terms of optimising job throughp ... More
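The effect of dynamic replication can be illustrated with a toy model. This is our own sketch, unrelated to OptorSim's actual replication and replacement algorithms: a bounded least-recently-used replica store at a site turns repeated remote reads of popular files into local hits.

```python
from collections import OrderedDict

class SiteReplicaCache:
    """Toy site that keeps at most `capacity` file replicas,
    evicting the least recently used replica when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.replicas = OrderedDict()
        self.local_hits = 0
        self.remote_reads = 0

    def access(self, filename):
        if filename in self.replicas:
            self.replicas.move_to_end(filename)  # mark as recently used
            self.local_hits += 1
        else:
            self.remote_reads += 1               # fetch and replicate locally
            self.replicas[filename] = True
            if len(self.replicas) > self.capacity:
                self.replicas.popitem(last=False)  # evict LRU replica

site = SiteReplicaCache(capacity=2)
for f in ["a", "b", "a", "a", "c", "b"]:
    site.access(f)
print(site.local_hits, site.remote_reads)  # 2 local hits, 4 remote reads
```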
Presented by Caitriana NICHOLSON on 13 Feb 2006 at 16:00
Type: poster Session: Poster
Track: Distributed Event production and processing
Since CHEP2005, the LHC Computing Grid (LCG) has grown from 30 sites to over 160 sites and this has increased the load on the information system. This paper describes the recent changes to the information system that were necessary to keep pace with the expanding grid. The performance of a key component, the Berkeley Database Information Index (BDII), is given special attention. During deploym ... More
Presented by Mr. Laurence FIELD on 13 Feb 2006 at 11:00
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
As a result of the interoperations activity between LHC Computing Grid (LCG) and Open Science Grid (OSG), it was found that the information and monitoring space within these grids is a crowded area with many closed end-to-end solutions that do not interoperate. This paper gives the current overview of the information and monitoring space within these grids and tries to find overlapping areas t ... More
Presented by Mr. Laurence FIELD on 14 Feb 2006 at 14:40
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
This paper describes the introduction of the Relational Grid Monitoring Architecture (R-GMA) into the LHC Computing Grid (LCG) as a production-quality monitoring system and how, after an initial period of production hardening, it performed during the LCG Service Challenges. The results from the initial evaluation and performance tests are presented, as well as the process of integrating R-GMA into t ... More
Presented by Mr. Laurence FIELD on 13 Feb 2006 at 11:00
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
Open Science Grid (OSG) and LHC Computing Grid (LCG) are two grid infrastructures that were built independently on top of a Virtual Data Toolkit (VDT) core. Due to the demands of the LHC Virtual Organizations (VOs), it has become necessary to ensure that these grids interoperate so that the experiments can seamlessly use them as one resource. This paper describes the work that was necessary to ... More
Presented by Mr. Laurence FIELD on 14 Feb 2006 at 14:40
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
The paper reports on the evolution of the operational model which was set up in the "Enabling Grids for E-sciencE" (EGEE) project, and on the implications of Grid Operations for the LHC Computing Grid (LCG). The primary tasks of Grid Operations cover monitoring of resources and services, notification of failures to the relevant contacts and problem tracking through a ticketing system. Moreover, an ... More
Presented by Mr. Piotr NYCZYK, Ms. Helene CORDIER, Mr. Gilles MATHIEU on 13 Feb 2006 at 16:40
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Piergiorgio CERELLO on 16 Feb 2006 at 11:00
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
The Grid paradigm enables the coordination and sharing of a large number of geographically-dispersed heterogeneous resources that are contributed by different institutions. These resources are organized into virtual pools and assigned to groups of users. The monitoring of such a distributed and dynamic system raises a number of issues, like the need to deal with administrative boundaries, th ... More
Presented by Mr. Sergio ANDREOZZI on 13 Feb 2006 at 17:40
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
GridX1, a Canadian computational Grid, combines the resources of various Canadian research institutes and universities through the Globus Toolkit and the CondorG resource broker (RB). It has been successfully used to run ATLAS and BaBar simulation applications. GridX1 is interfaced to LCG through a RB at the TRIUMF Laboratory (Vancouver), which is an LCG computing element, and ATLAS jobs are r ... More
Presented by Dr. Ashok AGARWAL on 13 Feb 2006 at 17:40
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Ruth PORDES on 15 Feb 2006 at 10:00
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
The LHC Computing Grid (LCG) connects together hundreds of sites consisting of thousands of components such as computing resources, storage resources, network infrastructure and so on. Various Grid Operation Centres (GOCs) and Regional Operations Centres (ROCs) are setup to monitor the status and operations of the grid. This paper describes Gridview, a Grid Monitoring and Visualization Too ... More
Presented by Mr. Rajesh KALMADY on 13 Feb 2006 at 17:20
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
A common problem in particle physics is the requirement to reproduce comparisons between data and theory when the theory is a (general-purpose) Monte Carlo simulation and the data are measurements of final-state observables in high energy collisions. The complexity of the experiments, the observables and the models all contribute to making this a highly non-trivial task. We describe an exist ... More
Presented by Dr. Ben WAUGH on 16 Feb 2006 at 14:20
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
Accurate modelling of hadron interactions is essential for the precision analysis of data from the LHC. It is therefore imperative that the predictions of Monte Carlos used to model this physics are tested against relevant existing and future measurements. These measurements cover a wide variety of reactions, experimental observables and kinematic regions. To make this process more reliable an ... More
Presented by Dr. Andy BUCKLEY, Andy BUCKLEY on 13 Feb 2006 at 16:00
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
Setting up the infrastructure to manage a software project can easily become more work than writing the software itself. A variety of useful open-source tools, such as Web-based viewers for version control systems, "wikis" for collaborative discussions and bug-tracking systems are available but their use in high-energy physics, outside large collaborations, is small. We introduce the CEDAR ... More
Presented by Dr. Andy BUCKLEY, Andy BUCKLEY on 16 Feb 2006 at 14:20
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
Today we can have huge datasets resulting from computer simulations (CFD, physics, chemistry, etc.) and sensor measurements (medical, seismic and satellite). There is exponential growth in the computational requirements of scientific research. Modern parallel computers and Grids provide the required computational power for the simulation runs. Rich visualization is essential in interpreting ... More
Presented by Mr. Dinesh SARODE on 14 Feb 2006 at 14:00
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
Evolutionary Algorithms, with Genetic Algorithms (GA) and Genetic Programming (GP) as the best-known variants, have a gradually increasing presence in High Energy Physics. They have proven successful in solving problems such as regression, parameter optimisation and event selection. Gene Expression Programming (GEP) is a new evolutionary algorithm that combines the advantages of both GA and GP ... More
Presented by Dr. Liliana TEODORESCU on 15 Feb 2006 at 16:18
Type: poster Session: Poster
Track: Event processing applications
The CMS detector is a general purpose experiment for the LHC. At the designed maximum luminosity more than 10**9 events/second will be produced, while the data acquisition system will be able to manage 100 Hz bandwidth. The trigger strategy for CMS is organised in two steps: a first level hardware trigger is implemented taking advantage of the fast-response detectors, such as the muon chambers and the ca ... More
Presented by Dr. Livio FANO' on 15 Feb 2006 at 09:00
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Alan GARA on 14 Feb 2006 at 12:00
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
In the increasingly distributed collaborations of today's experiments, there is a need to bring people together and manage all discussions. The main ways of doing this on-line are e-mail and web forums. HyperNews is a discussion management system that bridges these two, using e-mail for input but also archiving the discussions in easy-to-access web pages. The dis ... More
Presented by Dr. Douglas SMITH on 15 Feb 2006 at 14:00
Type: poster Session: Poster
Track: Software Components and Libraries
IGUANA is a well-established generic interactive visualisation framework based on a C++ component model and open-source graphics products. We describe developments since the last CHEP, including: the event display toolkit, with examples from CMS and D0; the generic IGUANA visualisation system for GEANT4; integration of ROOT and Hippoplot with IGUANA; and a new lightweight and portable IGUANA ... More
Presented by Dr. Lucas TAYLOR on 15 Feb 2006 at 09:00
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
The ATLAS Inner Detector is composed of a pixel detector (PIX), a silicon strip detector (SCT) and a Transition radiation tracker (TRT). The goal of the algorithm is to align the silicon based detectors (PIX and SCT) using a global fit of the alignment constants. The total number of PIX and SCT silicon modules is about 35000, leading to many challenges. The current presentation will focus on ... More
Presented by Adlene HICHEUR on 15 Feb 2006 at 17:12
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
Securely authorizing incoming users with appropriate privileges on distributed grid computing resources is a difficult problem. In this paper we present the work of the Open Science Grid Privilege Project which is a collaboration of developers from universities and national labs to develop an authorization infrastructure to provide finer grained authorization consistently to all grid services ... More
Presented by Abhishek SINGH RANA on 15 Feb 2006 at 16:00
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
Computer clusters at universities are usually shared among many groups. As an example, the Linux cluster at the "Institut fuer Experimentelle Kernphysik" (IEKP), University of Karlsruhe, is shared between working groups of the high energy physics experiments AMS, CDF and CMS, and has successfully been integrated into the SAM grid of CDF and the LHC computing grid LCG for CMS while it still sup ... More
Presented by Anja VEST on 14 Feb 2006 at 14:20
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
The LHC Computing Grid (LCG) middleware interfaces at each site with local computing resources provided by a batch system. However, currently only the PBS/Torque, LSF and Condor resource management systems are supported out of the box in the middleware distribution. Therefore many computing centers serving scientific needs other than HEP, which in many cases use other batch systems like Sun' ... More
Presented by Dr. Ariel GARCIA on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Software Components and Libraries
We present a short communication on work done at LAL to integrate the graphviz library within the OnX environment. graphviz is a well-known library for visualizing a scene containing boxes connected by lines; its strength lies in the routing algorithms used to connect the boxes. For example, graphviz is used by Doxygen to produce class diagrams. We want to present ... More
Presented by Mr. Laurent GARNIER on 13 Feb 2006 at 11:00
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
We describe how a new programming paradigm dubbed AJAX (Asynchronous JavaScript and XML) has enabled us to develop high-performance web-based graphics applications. Specific examples are shown of our web clients for: CMS Event Display (real-time Cosmic Challenge), remote detector monitoring with ROOT displays, and performant 3D displays of GEANT4 descriptions of LHC detectors. The Web-client p ... More
Presented by Mr. Giulio EULISSE on 14 Feb 2006 at 14:40
Type: poster Session: Poster
Track: Distributed Event production and processing
CDF has recently changed its data handling system from the DFC (Data File Catalogue) system to the SAM (Sequential Access to Metadata) system. This change was made in preparation for distributed computing, since SAM supports it and provides mechanisms enabling it to work together with Grid systems. Experience shows that the usage of a new data handling system inc ... More
Presented by Valeria BARTSCH on 13 Feb 2006 at 11:00
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
Taking the implementation of ZOPE/ZMS at DESY as an example, we will show and discuss various approaches and procedures for introducing a Content Management System in a HEP institute. We will show how requirements were gathered to make decisions regarding software and hardware, how existing systems and management procedures needed to be taken into consideration, and how the project was originall ... More
Presented by Mr. Carsten GERMER on 15 Feb 2006 at 16:20
Type: poster Session: Poster
Track: Computing Facilities and Networking
To satisfy the requirements of US-CMS, D0, CDF, SDSS and other experiments, Fermilab has established an optical path to the StarLight exchange point in Chicago. It gives access to multiple experimental networks, such as UltraScience Net, UltraLight, UKLight, and others, with very high bandwidth capacity but generally sub-production-level service. The ongoing LambdaStation project is deve ... More
Presented by Mr. Andrey BOBYSHEV on 15 Feb 2006 at 09:00
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
The CMS experiment is progressing towards real LHC data handling by building and testing its Computing Model through daily experience of production-quality operations as well as challenges of increasing complexity. The capability to simultaneously address both of these complex tasks on a regional basis - e.g. within INFN - relies on the quality of the developed tools and related kn ... More
Presented by Dr. Daniele - on behalf of CMS Italy Tier-1 and Tier-2's BONACORSI on 14 Feb 2006 at 17:40
Type: oral presentation Session: Online Computing
Track: Online Computing
The control systems of the LHC experiments are built using the common commercial product: PVSS II (from the ETM company). The JCOP Framework Project delivers a set of common tools built on top of, or extending the functionality of, PVSS (such as the control for widely used hardware, a Finite State Machine (FSM) toolkit, access control management, cooling and ventilation application) which ... More
Presented by Piotr GOLONKA on 16 Feb 2006 at 14:40
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
In preparation for the Grid for LHC start-up, and as part of the early production service (under the UK GridPP project), we calculate efficiencies for jobs submitted to the RAL Tier-1 Batch Farm. Early usage of the Farm was characterised by high occupancy but low efficiency of Grid jobs; improvement has been observed over the last six months. This behaviour has been examined by calculati ... More
Presented by Dr. Matthew HODGES on 15 Feb 2006 at 09:00
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
We present the architecture and implementation of a bi-directional system for monitoring long-running jobs on large computational clusters. JobMon comprises an asynchronous intra-cluster communication server and a Clarens web service on a head node, coupled with a job wrapper for each monitored job to provide monitoring information both periodically and upon request. The Clarens web service pro ... More
Presented by Dr. Conrad STEENBERG on 14 Feb 2006 at 16:40
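The push/pull wrapper idea described in the JobMon abstract above can be illustrated with a minimal sketch. The class and message fields below are purely hypothetical, not JobMon's real API; the abstract only states that the wrapper reports both periodically and upon request.

```python
# Illustrative sketch of a bi-directional job-monitoring wrapper:
# periodic "push" heartbeats plus on-demand "pull" status queries.
# All names here are hypothetical, not the actual JobMon interface.
class JobWrapper:
    def __init__(self, job_id, report):
        self.job_id = job_id
        self.report = report       # callback towards the head-node service
        self.state = "PENDING"

    def heartbeat(self):
        # Invoked periodically (e.g. by a timer thread in a real system).
        self.report({"job": self.job_id, "state": self.state, "kind": "push"})

    def on_request(self):
        # Invoked when the head node asks for status over the
        # bi-directional intra-cluster channel.
        return {"job": self.job_id, "state": self.state, "kind": "pull"}

messages = []
w = JobWrapper("job-1", messages.append)
w.state = "RUNNING"
w.heartbeat()                      # push path
status = w.on_request()            # pull path
```

The two code paths correspond to the "periodically and upon request" behaviour the abstract describes; a real implementation would run the heartbeat in its own thread and serve requests via the Clarens web service.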
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
Storing and accessing large volumes of data across geographically separated locations, cutting across labs and universities, in a transparent, reliable fashion is a difficult problem. The problem is urgent, with the commissioning of the LHC around the corner (2007). The primary difficulties that need to be overcome in order to address this problem are policy-driven secure access, m ... More
Presented by Dr. Surya PATHAK on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
Introducing changes to a working high-performance computing environment is typically both necessary and risky. Testing these changes can be highly manpower-intensive. L-TEST supplies a framework that allows the testing of complex distributed systems with reduced configuration. It reduces setting up a test to implementing the specific tasks for that test. L-TEST handles three jobs that must be ... More
Presented by Mr. Laurence DAWSON on 13 Feb 2006 at 11:00
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
The LCG Distributed Deployment of Databases (LCG 3D) project is a joint activity between LHC experiments and LCG tier sites to co-ordinate the set-up of database services and facilities for relational data transfers as part of the LCG infrastructure. The project goal is to provide a consistent way of accessing database services at CERN tier 0 and collaborating LCG tier sites to achieve a more ... More
Presented by Dr. Dirk Duellmann DUELLMANN on 15 Feb 2006 at 16:20
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
I report on the findings and recommendations of the LCG Project's Requirements and Technical Assessment Group (RTAG 12) on Collaborative Tools for the LHC. A group comprising representatives of the LHC collaborations, CERN IT and HR, and leading experts in the field of collaborative tools evaluated the requirements of the LHC, current practices, and expected future usage, in comparison with th ... More
Presented by Dr. Steven GOLDFARB on 16 Feb 2006 at 14:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
LCG and ARC are two of the major production-ready Grid middleware solutions, used by hundreds of HEP researchers every day. Even though the middlewares are based on the same technology, there are substantial architectural and implementation divergences. An ordinary user faces difficulties trying to cross the boundaries of the two systems: ARC clients so far have not been capable of accessing ... More
Presented by Dr. Michael GRONAGER on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Distributed Event production and processing
The LCG-RUS project implemented the Global Grid Forum's Resource Usage Service standard and made grid resources for the LHC accountable in a common schema (GGF-URWG). This project is part of the UK e-Science programme, with the purpose of staging grid computing from e-Research to a computational market. The LCG-RUS is complementary work to the predecessor MCS (Market for Computational Service) RUS proj ... More
Presented by Akram KHAN on 15 Feb 2006 at 09:00
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Martin PURSCHKE on 14 Feb 2006 at 10:00
Type: poster Session: Poster
Track: Computing Facilities and Networking
Besides a brief overview of the GridKa private and public LAN network, the integration into the LHC-OPN network as well as the links to the T2 sites will be presented, in view of the physical network layout as well as their higher-protocol-layer implementations. Results of the feasibility discussion of dynamic routes for all connections of FZK, including all the different types of LHC ... More
Presented by Bruno HOEFT on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
LHCb's participation in LCG's Service Challenge 3 involves testing the bulk data transfer infrastructure developed to allow high bandwidth distribution of data across the grid in accordance with the computing model. To enable reliable bulk replication of data, LHCb's DIRAC system has been integrated with gLite's File Transfer Service middleware component to make use of dedicated network links ... More
Presented by Mr. Andrew Cameron SMITH on 13 Feb 2006 at 11:00
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
High Energy Physics collaborations consist of hundreds to thousands of physicists and are world-wide in scope. Experiments and applications now running, or starting soon, need the data movement capabilities now available only on advanced and/or experimental networks. The Lambda Station project steers selectable traffic through site infrastructure and onto these "high-impact" wide-area netw ... More
Presented by Mr. Andrey BOBYSHEV on 15 Feb 2006 at 14:20
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
The HEP department of the University of Manchester has purchased a 1000-node cluster. The cluster will be accessible to various VOs through EGEE/LCG grid middleware. In this talk we will describe the management, security and monitoring setup we have chosen for administering the cluster with minimum effort, mostly remotely: from remote power-up to centralised installation and upd ... More
Presented by Mr. Colin MOREY on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
During the last few years ATLAS has run a series of Data Challenges producing simulated data used to understand the detector performance. Altogether more than 100 terabytes of useful data are now spread over a few dozen storage elements on the Grid. With the emergence of Tier-1 centers and constant restructuring of storage elements there is a need to consolidate the data placement in a more optimal ... More
Presented by Dr. Pavel NEVSKI on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
The Brookhaven RHIC/ATLAS Computing Facility serves as both the tier-0 computing center for RHIC and the tier-1 computing center for ATLAS in the United States. The increasing challenge of providing local and grid-based access to very large datasets in a reliable, cost-efficient and high-performance manner is being addressed by a large-scale deployment of dCache, the distributed disk caching ... More
Presented by Ms. Zhenping LIU, Dr. Ofer RIND on 13 Feb 2006 at 11:00
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
The latencies induced by network communication often play a big role in reducing the performance of systems that access large amounts of data in a distributed environment. The problem is present in Local Area Networks, but is much more evident in Wide Area Networks. It is generally perceived as a critical problem which makes it very difficult to access remote data. However, a more detailed ... More
Presented by Mr. Fabrizio FURANO on 15 Feb 2006 at 16:20
Type: poster Session: Poster
Track: Computing Facilities and Networking
As part of the DOE SciDAC "National Infrastructure for Lattice Gauge Computing" and DOE LQCD Projects, Fermilab builds and operates production clusters for lattice QCD simulations for the US community. We currently operate two clusters: a 128-node Pentium 4E Myrinet cluster, and a 520-node Pentium 640 Infiniband cluster. We discuss the operation of these systems and examine their performanc ... More
Presented by Dr. Donald HOLMGREN on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
It is broadly admitted that grid technologies have to deal with heterogeneity in both computational and storage resources. In the context of grid operations, heterogeneity is also a major concern, especially for worldwide grid projects such as LCG and EGEE. Indeed, the use of various technologies, protocols and data formats induces complexity. As learned from our experience of participating in th ... More
Presented by Mr. Sylvain REYNAUD on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
The increasing instantaneous luminosity of the Tevatron collider will soon cause the computing requirements for data analysis and MC production to grow larger than the dedicated CPU resources that will be available. In order to meet future demands, CDF is investing in shared Grid resources. A significant fraction of opportunistic Grid resources will be available to CDF before the LHC era starts ... More
Presented by Dr. Armando FELLA on 13 Feb 2006 at 11:00
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
We describe experiences and lessons learned from over a year of nearly continuous running of managed production on Grid3 for the ATLAS data challenges. Two major phases of production were performed: first, large-scale GEANT-based Monte Carlo simulations ("DC2"), followed by extensive production for the ATLAS "Rome" physics workshop incorporating several new job types (digitization, rec ... More
Presented by Dr. James SHANK on 15 Feb 2006 at 17:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
The SAM data handling system has been deployed successfully by the Fermilab D0 and CDF experiments, managing Petabytes of data and millions of files in a Grid working environment. D0 and CDF have large computing support staffs, have always managed their data using file catalog systems, and have participated strongly in the development of the SAM product. But we think that SAM's long term viabi ... More
Presented by Arthur KREYMER on 13 Feb 2006 at 11:00
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
The detector and collider upgrades for the HERA-II running at DESY have considerably increased the demand on computing resources for Monte Carlo production for the ZEUS experiment. To close the gap, an automated production system capable of using Grid resources has been developed and commissioned. During its first year of operation, 400 000 Grid jobs were submitted by the production system ... More
Presented by Dr. Hartmut STADIE on 15 Feb 2006 at 16:00
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Ashok JHUNJHUNWALA on 13 Feb 2006 at 12:00
Type: oral presentation Session: Plenary
Track: Plenary
Grid Computing technologies are transforming scientific and enterprise computing in a big way. In verticals such as Life Sciences, Energy and Finance especially, there is tremendous pressure to reduce cost and enhance productivity. The Grid allows linking up the processors, storage and/or memory of many distributed computers to make more efficient use of all available computing resource ... More
Presented by Anirban CHAKRABARTI on 17 Feb 2006 at 10:45
Type: poster Session: Poster
Track: Distributed Event production and processing
The Shahkar Runtime Execution Environment Kit (ShREEK) is a threaded workflow execution tool designed to run and intelligently manage arbitrary task workflows within a batch job. The Kit consists of three main components: an executor that runs tasks, a control point system that allows reordering of the workflow during execution, and a thread-based pluggable monitoring framework that offers both ev ... More
Presented by Dr. David EVANS on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
gLite is the next generation middleware for grid computing. Born from the collaborative efforts of more than 80 people in 12 different academic and industrial research centers as part of the EGEE Project, gLite provides a bleeding-edge, best-of-breed framework for building grid applications tapping into the power of distributed computing and storage resources across the Internet. Currently, gL ... More
Presented by Mr. Marian ZUREK on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
Based on experiences from the last 18 months of UK Particle Physics Grid (GridPP) operation, this paper examines several key areas for the success of the LHC Computing Grid. Among these is the necessity of establishing useful metrics (from job level to overall operational), accurate monitoring at both the grid and local fabric levels, and mechanisms to rapidly address potentially or actually f ... More
Presented by Dr. Jeremy COLES on 15 Feb 2006 at 09:00
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
Efficient hierarchical storage management of small size files continues to be a challenge. Storing such files directly on tape-based tertiary storage leads to extremely low operational efficiencies. Commercial tape virtualization products are few, expensive and only proven in mainframe environments. Asking the users to deal with the problem by “bundling” their files leads to a plethora of ... More
Presented by Prof. Manuel DELFINO REZNICEK on 15 Feb 2006 at 14:40
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
In 2004, a full slice of the ATLAS detector was tested for 6 months in the H8 experimental area of the CERN SPS, in the so-called Combined Test Beam, with beams of muons, pions, electrons and photons in the range 1 to 350 GeV. Approximately 90 million events were collected, corresponding to a data volume of 4.5 terabytes. The importance of this exercise was two-fold: for the first time the who ... More
Presented by Dr. Frederik ORELLANA on 13 Feb 2006 at 14:40
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
It is important, both for users and for load-balancing programs like LSF, PBS and Condor, to know the Quality of Service offered by nodes in a cluster when submitting a job onto a given node. This helps achieve optimal utilization of the nodes in a cluster. Simple metrics like load average, memory utilization etc. do not adequately describe the load on the nodes or the Quality of Service (QoS) experienc ... More
Presented by Mr. Rohitashva SHARMA on 15 Feb 2006 at 14:00
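The abstract above argues that load average alone does not capture node QoS. One way to go beyond a single metric is to combine several into a composite score; the formula and weights below are purely illustrative, since the talk's actual metric is not given in the abstract.

```python
# Hypothetical composite QoS score for a cluster node, combining CPU,
# memory and I/O headroom. The weighting scheme is illustrative only.
def qos_score(load_avg, n_cpus, mem_used_frac, io_wait_frac):
    """Return a score in [0, 1]; higher means better service for new jobs."""
    cpu_headroom = max(0.0, 1.0 - load_avg / n_cpus)   # free CPU capacity
    mem_headroom = max(0.0, 1.0 - mem_used_frac)       # free memory
    io_headroom  = max(0.0, 1.0 - io_wait_frac)        # low I/O contention
    # Geometric mean: exhausting any one resource drags the score to zero,
    # which a plain load average would not reveal.
    return (cpu_headroom * mem_headroom * io_headroom) ** (1.0 / 3.0)

idle = qos_score(load_avg=0.1, n_cpus=4, mem_used_frac=0.2, io_wait_frac=0.0)
busy = qos_score(load_avg=8.0, n_cpus=4, mem_used_frac=0.95, io_wait_frac=0.6)
```

A scheduler such as LSF or Condor could rank candidate nodes by such a score rather than by load average alone.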
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
In the heterogeneous world of distributed computing, sites may have anything from the bare minimum Globus package to a plethora of advanced services. Moreover, sites may have restrictions and limitations which need to be understood by resource brokers and planners in order to take best advantage of resources and computing cycles. Facing this reality, and to take full advantage of any avail ... More
Presented by Mr. Levente HAJDU on 15 Feb 2006 at 17:40
Type: poster Session: Poster
Track: Distributed Event production and processing
High Energy Physics analysis is often performed on midrange computing clusters (10-50 machines) in relatively small physics groups (3-10 physicists). Such clusters are usually built from commodity equipment and run under one of several Linux flavors. In an environment of limited resources, it is important to choose the "right" cluster architecture to achieve maximum performance. We will ... More
Presented by Mr. Andrey SHEVEL on 15 Feb 2006 at 09:00
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
ENSTORE is a very successful petabyte-scale mass storage system developed at Fermilab. Since its inception in the late 1990s, ENSTORE has been serving the Fermilab community, as well as its collaborators, and now holds more than 3 petabytes of data on tape. New data is arriving at an ever increasing rate. One practical issue that we are confronted with is: storage technologies have been evolvi ... More
Presented by Dr. Chih-Hao HUANG on 14 Feb 2006 at 16:20
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
The MonaLISA (Monitoring Agents in A Large Integrated Services Architecture) system provides a distributed service for monitoring, control and global optimization of complex grid systems and networks for high energy physics, and many other fields of data-intensive science. It is based on an ensemble of autonomous multi-threaded, agent-based subsystems which are registered as dynamic services a ... More
Presented by Dr. Iosif LEGRAND on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Distributed Event production and processing
The presented monitoring framework builds on the experience gained during the ATLAS Data Challenge 2 and Rome physics workshop productions. During these previous productions several independent monitoring tools were created. Although these tools were created somewhat in isolation, they provided a good degree of complementary functionality and are taken as a basis for the current framework ... More
Presented by Dr. john KENNEDY on 13 Feb 2006 at 11:00
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
The improvements of the peak instantaneous luminosity of the Tevatron Collider will give CDF up to 2 fb-1 of new data every year, forcing the collaboration to increase proportionally the amount of Monte Carlo data it produces. This is in turn forcing the CDF collaboration to move beyond the dedicated resources it is using today and start exploiting Grid resources. Monte Carlo production was th ... More
Presented by Dr. Francesco DELLI PAOLI
Type: poster Session: Poster
Track: Online Computing
In building a software repository of simulation and reconstruction tools for a future International Linear Collider (ILC) detector, we started with applications based on code used in the LEP experiments, with Fortran and C as programming languages. All future software development for the ILC is done using modern OO languages, mainly C++ and Java. But for comparisons and providing a smooth transitio ... More
Presented by Harald VOGT on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
R-GMA is a relational implementation of the GGF's Grid Monitoring Architecture (GMA). In some respects it can be seen as a virtual database (VDB), supporting the publishing and retrieval of time-stamped tuples. The scope of an R-GMA installation is defined by its schema and registry. The schema holds the table definitions and, in future, the authorization rules. The registry holds a list of th ... More
Presented by Mr. A.J. WILSON on 13 Feb 2006 at 11:00
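The "virtual database of time-stamped tuples" idea behind R-GMA, as described above, can be sketched in a few lines. The class and method names below are illustrative stand-ins, not the real R-GMA API; they only show the publish/query pattern with a schema of named tables.

```python
# Minimal in-memory sketch of the GMA publish/retrieve model: producers
# insert time-stamped tuples into schema-defined tables, consumers run
# predicate queries. Names are hypothetical, not the R-GMA interface.
import time

class VirtualDB:
    def __init__(self):
        self.schema = {}   # table name -> column names (the "schema")
        self.tuples = {}   # table name -> list of time-stamped rows

    def create_table(self, name, columns):
        self.schema[name] = columns
        self.tuples[name] = []

    def publish(self, table, **values):
        # Every published tuple is stamped, as in the GMA model.
        self.tuples[table].append(dict(values, timestamp=time.time()))

    def query(self, table, predicate=lambda row: True):
        return [row for row in self.tuples[table] if predicate(row)]

vdb = VirtualDB()
vdb.create_table("JobStatus", ["job_id", "state"])
vdb.publish("JobStatus", job_id="42", state="RUNNING")
vdb.publish("JobStatus", job_id="43", state="DONE")
running = vdb.query("JobStatus", lambda r: r["state"] == "RUNNING")
```

In the real system the schema and registry are distributed services and queries are expressed in SQL, but the tuple-stream model is the same.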
Type: poster Session: Poster
Track: Event processing applications
The ATLAS detector, currently being installed at CERN, is designed to make precise measurements of 14 TeV proton-proton collisions at the LHC, starting in 2007. Arguably the clearest signatures for new physics, including the Higgs Boson and supersymmetry, will involve the production of isolated final-state muons. The identification and precise reconstruction of muons are performed using a com ... More
Presented by Stephane WILLOCQ
Type: poster Session: Poster
Track: Online Computing
In the ATLAS experiment, fast calibration of the detector is vital to feed prompt data reconstruction with fresh calibration constants. We present the use case of the muon detector, where a high rate of muon tracks (small data size) is needed to meet the calibration requirements. The ideal place to get data suitable for muon detector calibration is the second level trigger, where the pre-s ... More
Presented by Dr. Enrico PASQUALUCCI on 15 Feb 2006 at 09:00
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. David AXMARK on 14 Feb 2006 at 11:30
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
Fermilab is a high energy physics research lab that maintains a dynamic network which typically supports around 10,000 active nodes. Due to the open nature of the scientific research conducted at FNAL, the portion of the network used to support open scientific research requires high bandwidth connectivity to numerous collaborating institutions around the world, and must facilitate convenient ... More
Presented by Igor MANDRICHENKO on 14 Feb 2006 at 14:40
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
The ATLAS experiment will rely on Ethernet networks for several purposes. A control network will provide infrastructure services and will also handle the traffic associated with control and monitoring of trigger and data acquisition (TDAQ) applications. Two independent data networks (dedicated TDAQ networks) will be used exclusively for transferring the event data within the High Level Trigge ... More
Presented by Dr. Stefan STANCU on 15 Feb 2006 at 16:40
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
LHC experiments obtain needed mathematical and statistical computational methods via the coherent set of C++ libraries provided by the Math work package of the ROOT project. We present recent developments of this work package, formed from the merge of the ROOT and SEAL activities: (1) MathCore, a new core library, has been developed as a self-contained component encompassing basic mathemat ... More
Presented by Dr. Lorenzo MONETA on 14 Feb 2006 at 16:00
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Lalitesh KATHRAGADDA on 17 Feb 2006 at 10:15
Type: poster Session: Poster
Track: Event processing applications
The extension of Geant4 simulation capabilities down to the electronvolt scale is required for precision studies of radiation effects on electronics and detector components, and for micro-/nano-dosimetry studies in various experimental environments. A project is in progress to extend the coverage of Geant4 physics to this energy range. The complexity of the problem domain is discussed - s ... More
Presented by Dr. Maria Grazia PIA, Dr. Riccardo CAPRA, Ziad FRANCIS, Dr. Sebastien INCERTI, Dr. Barbara MASCIALINO, Prof. Gerard MONTAROU, Prof. Philippe MORETTO, Dr. Petteri NIEMINEN on 15 Feb 2006 at 09:00
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
HEP experiments generally have complex geometries that must be represented and modelled for several purposes. The most important are simulation and reconstruction, which generally rely on an "ideal" geometry representation modelled within the simulation framework. The problem is that the "real" experiment geometry contains perturbations to this "perfectly aligned" model tha ... More
Presented by Rene BRUN on 15 Feb 2006 at 14:00
Type: oral presentation Track: Plenary
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
The Parallel ROOT Facility, PROOF, allows one to analyze and understand very large data sets on an interactive time scale. It makes use of the inherent parallelism in event data and implements an architecture that optimizes I/O and CPU utilization in heterogeneous clusters with distributed storage. We will present our experiences in using a very large PROOF cluster in production for the PHOBOS ... More
Presented by Dr. Maarten BALLINTIJN
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Beat JOST on 14 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
GridSite provides a Web Service hosting framework for services written as native executables (e.g. in C/C++) or scripting languages (such as Perl and Python). These languages are of particular relevance to HEP applications, which typically have large investments of code and expertise in C++ and scripting languages. We describe the Grid-based authentication and authorization environment that G ... More
Presented by Dr. Andrew MCNAB on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Software Components and Libraries
The BETACOOL program, developed by the JINR electron cooling group, is a kit of algorithms based on a common format of input and output files. The program is oriented to the simulation of ion beam dynamics in a storage ring in the presence of cooling and heating effects. The version presented in this report includes three basic algorithms: simulation of r.m.s. parameters of the ion distribution function evol ... More
Presented by Dr. Grigory TRUBNIKOV on 13 Feb 2006 at 11:00
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
ATLAS is one of the four experiments under construction along the Large Hadron Collider ring at CERN. During the last few years much effort has gone into carrying out test beam sessions that allowed the performance of ATLAS sub-detectors to be assessed. During data taking we began developing a histogram display application designed to satisfy the needs of all ATLAS sub-detectors g ... More
Presented by Dr. Andrea DOTTI on 16 Feb 2006 at 14:00
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
The increasing instantaneous luminosity of the Tevatron collider will cause the computing requirements for data analysis and MC production to grow larger than the dedicated CPU resources that will be available. In order to meet future demands, CDF is investing in shared, Grid, resources. A significant fraction of opportunistic Grid resources will be available to CDF before the LHC era starts a ... More
Presented by Matthew NORMAN on 13 Feb 2006 at 17:20
Type: poster Session: Poster
Track: Distributed Event production and processing
We describe a set of Web Services, created to support scientists in performing distributed production tasks (e.g. Monte Carlo). The Web Services described in this paper provide a portal for scientists to execute different production workflows which can consist of many consecutive steps. The main design goal of the Web Services discussed is to provide controlled access for (multiple) ... More
Presented by Dr. Frank VAN LINGEN on 15 Feb 2006 at 09:00
Type: oral presentation Session: Online Computing
Track: Online Computing
The PHENIX experiment took 2*10^9 CuCu events and more than 7*10^9 pp events during Run5 of RHIC. The total stored raw data volume was close to 500 TB. Since our DAQ bandwidth allowed us to store all events selected by the low level triggers, we did not filter events with an online processor farm which we refer to as level 2 trigger. Instead we ran the level 2 triggers offline in the PHENIX ... More
Presented by Dr. Christopher PINKENBURG on 15 Feb 2006 at 16:40
Type: oral presentation Session: Online Computing
Track: Online Computing
PANDA is a universal detector system being designed in the scope of the FAIR project at Darmstadt, Germany, and is dedicated to high precision measurements of hadronic systems in the charm quark mass region. At the HESR storage ring a beam of antiprotons will interact with internal targets to achieve the desired luminosity of 2x10^32 cm^-2 s^-1. The experiment is designed for event rate ... More
Presented by Mr. Sebastian NEUBERT on 14 Feb 2006 at 14:25
Session: Plenary
Track: Plenary
on 13 Feb 2006 at 09:30
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
Forschungszentrum Karlsruhe is one of the largest science and engineering research institutions in Europe. The resource centre GridKa as part of this science centre is building up a Tier 1 centre for the LHC project. Embedded in the European grid initiative EGEE, GridKa also manages the ROC (regional operation centre) for the German Swiss region. The management structure of the ROC and its int ... More
Presented by Dr. Sven HERMANN on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
The operation and management of a heterogeneous large-scale, multi-purpose computer cluster is a complex task given the competing nature of requests for resources by a large, world-wide user base. Besides providing the bulk of the computational resources to experiments at the Relativistic Heavy-Ion Collider (RHIC), this large cluster is part of the U.S. Tier 1 Computing Center for the ATLAS ex ... More
Presented by Dr. Tony CHAN on 13 Feb 2006 at 11:00
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
In the last few decades operations research has made dramatic progress in providing efficient algorithms and fast software implementations to solve practical problems related to a wide range of disciplines, from logistics to finance, from political sciences to digital image analysis. After a brief introduction to the most used techniques, such as linear and mixed-integer programming, I wi ... More
Presented by Dr. Alberto DE MIN on 14 Feb 2006 at 17:20
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
Moving from a National Grid Testbed to a Production quality Grid service for the HEP applications requires an effective operations structure and organization, proper user and operations support, flexible and efficient management and monitoring tools. Moreover the middleware releases should be easily deployable using flexible configuration tools, suitable for various and different local comput ... More
Presented by Dr. Maria Cristina VISTOLI on 14 Feb 2006 at 14:00
Type: poster Session: Poster
Track: Software Components and Libraries
Efficient and friendly access to large amounts of data distributed over the wide area network is a challenge for the upcoming LCG experiments. The problem can be solved using current standard open technologies and tools. A JDBC standard solution has been chosen as the base for a comprehensive system for relational data access and management. Widely available open tools have been reuse ... More
Presented by Dr. Julius HRIVNAC on 15 Feb 2006 at 09:00
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
ATLAS is one of the largest collaborations ever attempted in the physical sciences. This paper explains how the software infrastructure is organized to manage collaborative code development by around 200 developers with varying degrees of expertise, situated in 30 different countries. We will describe how succeeding releases of the software are built, validated and subsequently deployed to rem ... More
Presented by Dr. Frederick LUEHRING on 14 Feb 2006 at 16:00
Type: poster Session: Poster
Track: Online Computing
For any large experiment with multiple sub-systems and their respective experts spread throughout the world, real-time and near-real-time information accessible to a wide audience is critical to efficiency and success. Large and varied amounts of information about the current and past state of facilities and detector systems are necessary, both for current running, and for eventual data analys ... More
Presented by Mr. Wayne BETTS on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Computing Facilities and Networking
Many computing farms use PBSPro or its free version OpenPBS, or the Torque and Maui products, for local batch system management. These packages are delivered with graphical tools for a status overview, but summary and detailed reports from the accounting log files are not available. This poster describes the set of tools we are using for an overview of resource consumption in the last few hours a ... More
Presented by Dr. Jiri CHUDOBA on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Software Components and Libraries
The LCG POOL project has recently moved its focus to the development of storage back-ends based on relational databases. Following the requirements of the LHC experiments, POOL has developed a framework for object persistency into relational schemas. This presentation will describe the main functionality of the package, explaining how the mechanism provided by POOL allows one to efficiently s ... More
Presented by Dr. Giacomo GOVI on 15 Feb 2006 at 09:00
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
The Parallel ROOT Facility, PROOF, enables the interactive analysis of distributed data sets in a transparent way. It exploits the inherent parallelism in data of uncorrelated events via a multi-tier architecture that optimizes I/O and CPU utilization in heterogeneous clusters with distributed storage. Being part of the ROOT framework, PROOF inherits the benefits of a performant object-oriented ... More
Presented by Gerardo GANIS on 13 Feb 2006 at 15:00
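The split-and-merge scheme described above, partitioning uncorrelated events across workers and merging the partial results, can be sketched in plain Python. This is an illustration of the pattern only, not PROOF itself; the chunking strategy and the toy histogram are invented for the example:

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

def analyze_chunk(events):
    # Each worker fills a local "histogram" (here a Counter of binned values).
    h = Counter()
    for e in events:
        h[e % 10] += 1   # toy observable binned into 10 bins
    return h

def parallel_analysis(events, workers=4):
    # Events are uncorrelated, so the list can be split into independent
    # chunks, processed in parallel, and the partial histograms merged.
    chunks = [events[i::workers] for i in range(workers)]
    merged = Counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(analyze_chunk, chunks):
            merged.update(partial)
    return merged

hist = parallel_analysis(list(range(1000)))
```

In the real facility the workers are processes on remote cluster nodes and the merge step also handles I/O locality, but the map-then-merge structure is the same.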
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
A new offline processing system for production and analysis, Panda, has been developed for the ATLAS experiment and deployed in OSG. ATLAS will accrue tens of petabytes of data per year, and the Panda design is accordingly optimized for data intensive processing. Its development followed three years of production experience, the lessons from which drove a markedly different design for the new ... More
Presented by Prof. Kaushik DE on 15 Feb 2006 at 15:00
Type: oral presentation Session: Panel Discussion on Digital Divide
Track: Plenary
Presented by Dr. S. RAMAKRISHNAN, Prof. Harvey B NEWMAN, Viatcheslav ILIN, Alberto SANTORO, Dr. D. P. S. SETH, Prof. A. S. KOLASKAR on 17 Feb 2006 at 17:00
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
The silicon system of the ATLAS Inner Detector consists of about 6000 modules in its Semiconductor Tracker and Pixel Detector. Therefore, the offline global fit alignment algorithm has to deal with solving a problem of up to 36000 degrees of freedom. 32-bit single-CPU platforms were foreseen to be unable to handle such large-size operations needed by the algorithm. The proposed solution is to u ... More
Presented by Dr. Muge KARAGOZ UNEL on 15 Feb 2006 at 14:20
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
The computing models for HEP experiments are becoming ever more globally distributed and grid-based, both for technical reasons (e.g., to place computational and data resources near each other and the demand) and for strategic reasons (e.g., to leverage technology investments). To support such computing models, the network and end systems (computing and storage) face unprecedented challenges. ... More
Presented by Dr. Wenji WU on 14 Feb 2006 at 14:20
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
When the BaBar experiment transitioned to using the Root Framework, a new data server architecture, xrootd, was developed to address event analysis needs. This architecture was deployed at SLAC two years ago and has since also been deployed at other BaBar Tier 1 sites: IN2P3, INFN, FZK, and RAL; as well as at non-BaBar sites: CERN (Alice), BNL (Star), and Cornell (CLEO). As part of the ... More
Presented by Andrew HANUSHEVSKY on 15 Feb 2006 at 17:00
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
Distributed data management at LHC scales is a staggering task, accompanied by equally challenging practical management issues with storage systems and wide-area networks. The CMS data transfer management system, PhEDEx, is designed to handle this task with minimum operator effort, automating the workflows from large-scale distribution of HEP experiment datasets down to reliable and scalable trans ... More
Presented by Jens REHN on 16 Feb 2006 at 14:00
Type: oral presentation Session: Online Computing
Track: Online Computing
Physics studies form the basis of the hardware design of the BES3 trigger system. This includes detector simulations, generation and optimization of the sub-detectors' trigger conditions, main trigger simulations (combining the trigger conditions from different detectors to find the trigger efficiencies for physics events and the rejection factors for background events) and hardw ... More
Presented by Dr. Da-Peng JIN on 14 Feb 2006 at 16:00
Type: oral presentation Session: Online Computing
Track: Online Computing
The Physics and Data Quality Monitoring framework (DQM) aims at providing a homogeneous monitoring environment across various applications related to data taking at the CMS experiment. Initially developed as a monitoring application for the 1000 dual-CPU box (High-Level) Trigger Farm, it quickly expanded its scope to accommodate different groups across the experiment. The DQM organizes the inf ... More
Presented by Dr. Christos LEONIDOPOULOS on 15 Feb 2006 at 15:00
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
The offline and high-level trigger software for the ATLAS experiment has now fully migrated to a scheme which allows large tasks to be broken down into many functionally independent components. These components can focus, for example, on conditions or physics data access, on purely mathematical or combinatorial algorithms or on providing detector-specific geometry and calibration information. ... More
Presented by Wim LAVRIJSEN on 14 Feb 2006 at 16:20
Type: oral presentation Session: Online Computing
Track: Online Computing
The Trigger and Data Acquisition System of the ATLAS experiment is currently being installed at CERN. A significant amount of computing resources will be deployed in the Online computing system, in the close proximity of the ATLAS detector. More than 3000 high-performance computers will be supported by networks composed of about 200 Ethernet switches. The architecture of the networks was optim ... More
Presented by Dr. Catalin MEIROSU on 16 Feb 2006 at 14:00
Type: poster Session: Poster
Track: Event processing applications
The Muon Spectrometer for the Atlas experiment at the LHC is designed to identify muons with transverse momentum greater than 3 GeV/c and measure muon momenta with high precision up to the highest momenta expected at the LHC. The 50-micron sagitta resolution translates into a transverse momentum resolution of 10% for muon transverse momenta of 1 TeV/c. Precise tracking is provided by Monitored ... More
Presented by Stephane WILLOCQ
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Randall SOBIE on 17 Feb 2006 at 10:00
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
High energy and nuclear physics applications on computational grids require efficient access to terabytes of data managed in relational databases. Databases also play a critical role in grid middleware: file catalogues, monitoring, etc. Crosscutting the computational grid infrastructure, a hyperinfrastructure of the databases emerges. The Database Access for Secure Hyperinfrastructure (DASH) ... More
Presented by Dr. Alexandre VANIACHINE on 14 Feb 2006 at 17:20
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
A typical HEP analysis in the LHC experiments involves the processing of data corresponding to several million events, terabytes of information, to be analysed in the final phases. Currently, processing one million events on a single modern workstation takes several hours, thus slowing the analysis cycle. The desirable computing model for a physicist would be closer to a High Performance Comput ... More
Presented by Dr. Isidro GONZALEZ CABALLERO on 14 Feb 2006 at 16:20
Type: poster Session: Poster
Track: Distributed Event production and processing
The Swiss ATLAS Computing prototype consists of clusters of PCs located at the universities of Bern and Geneva (Tier 3) and at the Swiss National Supercomputing Centre (CSCS) in Manno (Tier 2). In terms of software, the prototype includes ATLAS off-line releases as well as middleware for running the ATLAS off-line in a distributed way. Both batch and interactive use cases are supported. The ba ... More
Presented by Dr. Szymon GADOMSKI on 13 Feb 2006 at 11:00
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
Protégé is a free, open source ontology editor and knowledge-base framework developed at Stanford University (http://protege.stanford.edu/). The application is based on Java, is extensible, and provides a foundation for customized knowledge-based and Semantic Web applications. Protégé supports Frames, XML Schema, RDF(S), and OWL. It provides a "plug and play environment" that makes it a fl ... More
Presented by Bebo WHITE on 13 Feb 2006 at 14:00
Type: poster Session: Poster
Track: Distributed Event production and processing
The production and analysis frameworks for LHC experiments are demanding advanced features in the middleware functionality and a complete integration with the experiment specific software environment. They also require an effective and distributed test platform where the integrated middleware functionality is verified and certified. The deployment in a production infrastructure of such soluti ... More
Presented by Maria Cristina VISTOLI on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Event processing applications
Projects like SETI@home use computing resources donated by the general public for scientific purposes. Many of these projects are based on the BOINC (Berkeley Open Interface for Network Computing) software framework, which makes it easier to set up new public resource computing projects. BOINC is used at CERN for the LHC@home project, where more than 10000 home users donate time on their CPUs to ... More
Presented by Dr. Jukka KLEM on 13 Feb 2006 at 11:00
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
Public resource computing uses the computing power of personal computers that belong to the general public. LHC@home is a public-resource computing project based on the BOINC (Berkeley Open Interface for Network Computing) platform. BOINC is an open source software system, developed by the team behind SETI@home, that provides the infrastructure to operate a public-resource computing project an ... More
Presented by Dr. Jukka KLEM on 13 Feb 2006 at 14:20
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
The future of computing for HENP applications depends increasingly on how well the global community is connected. With South Asia and Africa accounting for about 36% of the world’s population, the issues of internet/network facilities are a major concern for these regions if they are to successfully partake in scientific endeavors. However, not only is the International bandwidth for these r ... More
Presented by Dr. Roger COTTRELL on 13 Feb 2006 at 16:20
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
This talk presents a new approach to writing analysis frameworks. We will show a way of generating analysis frameworks from a short experiment description. The generation process is completely experiment-independent and can thus be applied to any event-based analysis. The presentation will focus on a software package called ROME. This software generates analysis frameworks which are ... More
Presented by Mr. Matthias SCHNEEBELI on 13 Feb 2006 at 17:00
Type: poster Session: Poster
Track: Software Components and Libraries
ROOT 2D graphics offers a wide set of data representation and visualisation techniques. Over the years, responding to user comments and requests, these have been improved and enriched. The current system is very flexible and can easily be tuned to match users' imaginations. We present a patchwork demonstrating the wide variety of output which can be produced.
Presented by Rene BRUN on 13 Feb 2006 at 11:00
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
We present an overview of the common viewer architecture (TVirtualViewer3D interface and TBuffer3D shape hierarchy) used by all 3D viewers. This ensures clients of the viewers are decoupled from the viewers and free of viewer-specific drawing code. We detail progress on the new OpenGL viewer, the primary development focus, including architecture (publish 'on demand' model, caching, native shapes, geo ... More
Presented by Rene BRUN on 15 Feb 2006 at 16:00
Type: poster Session: Poster
Track: Software Components and Libraries
Overview and examples of: the common viewer architecture (TVirtualViewer3D interface and TBuffer3D shape hierarchy) used by all 3D viewers, and significant features in the OpenGL viewer: in-pad embedding, render styles, composite (CSG/Boolean) shapes and clipping.
Presented by Rene BRUN on 13 Feb 2006 at 11:00
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
ROOT as a scientific data analysis framework provides a large selection of data presentation objects and utilities. The graphical capabilities of ROOT range from 2D primitives to various plots, histograms, and 3D graphical objects. Its object-oriented design and development offer considerable benefits for developing object-oriented user interfaces. The ROOT GUI classes support an extensive a ... More
Presented by Fons RADEMAKERS on 13 Feb 2006 at 16:20
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
ROOT already has powerful and flexible I/O, which can potentially be used for storing object data in SQL databases. Using ROOT I/O together with an SQL database provides advanced functionality such as guaranteed data integrity, logging of data changes, the possibility to roll back changes, and many other features provided by modern databases. At the same time data representation in SQ ... More
Presented by Dr. Sergey LINEV on 13 Feb 2006 at 17:40
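The transactional benefits listed here (data integrity, rollback of changes) can be illustrated with a minimal sketch using Python's built-in sqlite3 in place of ROOT I/O; the table and values are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hits (id INTEGER PRIMARY KEY, charge REAL)")
conn.execute("INSERT INTO hits (charge) VALUES (?)", (1.5,))
conn.commit()                       # first write is made permanent

try:
    # A second write begins a new implicit transaction...
    conn.execute("INSERT INTO hits (charge) VALUES (?)", (2.5,))
    raise RuntimeError("simulated failure before commit")
except RuntimeError:
    conn.rollback()                 # ...and is discarded on failure

rows = conn.execute("SELECT COUNT(*) FROM hits").fetchone()[0]
```

The committed row survives while the uncommitted one is rolled back, which is exactly the integrity guarantee the abstract attributes to database-backed storage.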
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Rene BRUN on 15 Feb 2006 at 11:45
Type: poster Session: Poster
Track: Software Components and Libraries
Reflex is a package which enhances C++ with reflection capabilities. It was developed in the LCG Applications Area at CERN, and it was recently decided to integrate it tightly with the ROOT analysis framework, especially with the CINT interpreter. This strategy will unify the dictionary systems of ROOT/CINT and Reflex into a common one. The advantages of this move for ROOT/CINT w ... More
Presented by Dr. Stefan ROISER on 13 Feb 2006 at 11:00
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
RecPack is a general reconstruction toolkit which can be used as the base of any reconstruction program for a HEP detector. Its main functionalities are track finding, fitting, propagation and matching. Track fitting can be done either via conventional least squares methods or Kalman filter techniques. The latter, in conjunction with the matching package, allows simultaneous track finding and f ... More
Presented by Dr. Anselmo CERVERA VILLANUEVA on 15 Feb 2006 at 14:36
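The Kalman filter technique mentioned above can be illustrated in its simplest scalar form. This is generic textbook material, not RecPack's actual API; the function name, parameters and measurements are invented for the example:

```python
def kalman_1d(measurements, meas_var, process_var, x0=0.0, p0=1.0):
    # Minimal scalar Kalman filter: predict (random-walk model), then update.
    x, p = x0, p0
    for z in measurements:
        p = p + process_var          # predict: state unchanged, uncertainty grows
        k = p / (p + meas_var)       # Kalman gain weighs prediction vs measurement
        x = x + k * (z - x)          # update state with the measurement residual
        p = (1.0 - k) * p            # uncertainty shrinks after the update
    return x, p

# Noisy measurements of a quantity whose true value is 1.0:
x, p = kalman_1d([1.1, 0.9, 1.05, 0.95], meas_var=0.1, process_var=1e-4)
```

In a track fit the scalar state becomes a vector (position, slopes, curvature) and the gain a matrix, but the predict/update cycle is the same, which is what makes simultaneous finding and fitting possible.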
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
Since version 4.01/03, we have continued to strengthen and improve the ROOT I/O system. In particular we extended and optimized support for all STL collections, including adding support for member-wise streaming. The handling of TTree objects was also improved by adding support for indexing of chains, for using a bitmap algorithm to speed up searches, and for accessing an SQL table through the T ... More
Presented by Mr. Philippe CANAL on 13 Feb 2006 at 16:00
Type: poster Session: Poster
Track: Software Tools and Information Systems
Providing all components and designing good user interfaces requires developers to know and apply some basic principles. The different parts of the ROOT GUIs should fit and complement each other. They must form a window through which users see the capabilities of the software system and understand how to use them. If well designed, the user interface adds quality and inspires confidence and t ... More
Presented by Mr. Fons RADEMAKERS on 13 Feb 2006 at 11:00
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
The geometry modeler is a key component of the Geant4 toolkit. It has been designed to best exploit the features provided by the Geant4 simulation toolkit, allowing a natural description of the geometrical structure of complex detectors, from a few volumes up to the hundreds of thousands of volumes of the LHC experiments, as well as human phantoms for medical applications or devices a ... More
Presented by Dr. Gabriele COSMO on 13 Feb 2006 at 14:54
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
The current status and recent developments of the Geant4 "Standard" electromagnetic package are presented. The design iteration of the package carried out over the last two years is complete. It provides a model-versus-process structure for the code. An internal database of elements and materials based on the NIST databases has also been introduced into the Geant4 toolkit. The focus of recent ... More
Presented by Dr. Michel MAIRE on 13 Feb 2006 at 15:12
Type: poster Session: Poster
Track: Event processing applications
The LHCb experiment will make high precision studies of CP violation and other rare phenomena in B meson decays. Particle identification, in the momentum range from ~2-100 GeV/c, is essential for this physics programme and will be provided by two Ring Imaging Cherenkov (RICH) detectors. The experiment will use several levels of trigger to reduce the 10 MHz rate of visible interactions to the 2 ... More
Presented by Cristina LAZZERONI, Dr. Raluca-Anca MURESAN on 13 Feb 2006 at 11:00
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
Reflection is the ability of a programming language to introspect and interact with its own data structures at runtime without prior knowledge about them. Many recent languages (e.g. Java, Python) provide this ability inherently, but it is lacking in C++. This paper will describe a software package, Reflex, which provides reflection capabilities for C++. Reflex was developed in the context of ... More
Presented by Dr. Stefan ROISER on 14 Feb 2006 at 14:00
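As the abstract notes, languages like Python provide reflection inherently. A minimal illustration of the three capabilities such a package brings to C++, discovering members, reading them, and invoking methods by name at runtime (the Track class and its members are invented for the example, not part of Reflex):

```python
class Track:
    """A toy event-data class inspected at runtime."""
    def __init__(self, pt, eta):
        self.pt = pt
        self.eta = eta

    def scale(self, factor):
        self.pt *= factor
        return self.pt

t = Track(25.0, 1.2)
members = sorted(vars(t))            # discover data members by name
value = getattr(t, "pt")             # read a member without compile-time knowledge
result = getattr(t, "scale")(2.0)    # look up and invoke a method by name
```

In C++ none of this is possible without generated dictionaries describing the types, which is the gap Reflex fills.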
Type: oral presentation Session: Online Computing
Track: Online Computing
The STAR experiment at Brookhaven National Laboratory's Relativistic Heavy-Ion Collider (RHIC) has accumulated hundreds of millions of events over its five-year running program to date. With a growing physics demand for statistics, STAR has more than doubled the number of events taken each year and is planning to increase its capability by an order of magnitude, reaching billion-event capabilities by 20 ... More
Presented by Mr. Michael DEPHILLIPS on 16 Feb 2006 at 14:20
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
dCache is a distributed storage system currently used to store and deliver data on a petabyte scale in several large HEP experiments. Initially dCache was designed as a disk front-end for robotic tape storage file systems. Lately, dCache systems have been increased in scale by several orders of magnitude and considered for deployment in US-CMS T2 centers lacking expensive tape robots. This nec ... More
Presented by Mr. Timur PERELMUTOV on 13 Feb 2006 at 11:00
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
The ATLAS experiment uses a tiered data Grid architecture that enables possibly overlapping subsets, or replicas, of original datasets to be located across the ATLAS collaboration. Many individual elements of these datasets can also be recreated locally from scratch based on a limited number of inputs. We envision a time when a user will want to determine which is more expedient, downloading ... More
Presented by John HUTH on 13 Feb 2006 at 17:00
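The download-versus-recreate decision described above reduces, at its simplest, to comparing estimated wall-clock costs of the two options. A hypothetical toy model (the function, parameters, and numbers are all invented for illustration; a real system would also weigh storage, queue policies, and failure rates):

```python
def cheaper_to_download(size_gb, bandwidth_gb_per_s,
                        cpu_seconds_to_recreate, queue_wait_s=0.0):
    # Compare the estimated wall-clock time of transferring a replica
    # against recreating the dataset locally from its inputs.
    transfer_time = size_gb / bandwidth_gb_per_s
    recreate_time = queue_wait_s + cpu_seconds_to_recreate
    return transfer_time <= recreate_time

# 10 GB over a 0.1 GB/s link (100 s) vs. one hour of local reprocessing:
choice = cheaper_to_download(10, 0.1, 3600)
```

With a slow link and a cheap derivation the balance flips, e.g. a terabyte over 0.01 GB/s against ten minutes of CPU favours local recreation.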
Type: poster Session: Poster
Track: Distributed Event production and processing
Commissioning of the ATLAS detector at the CERN Large Hadron Collider (LHC) includes, as partially overlapping phases, subsystem standalone work, integration of systems into the full detector, cosmics data taking, single beam running and finally first collisions. These tasks require services like DAQ with data recording to Tier0 and distributed data management, databases, histogramming a ... More
Presented by Hans VON DER SCHMITT, Rob MCPHERSON on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
SAMGrid presently relies on the centralized database for providing several services vital for the system operation. These services are all encapsulated in the SAMGrid Database Server, and include access to file metadata and replica catalogs, dataset and processing bookkeeping, as well as the runtime support for the SAMGrid station services. Access to the centralized database and DB Servers rep ... More
Presented by Dr. Sinisa VESELI on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
SAMGrid is a distributed (CORBA-based) HEP data handling system presently used by three running experiments at Fermilab: D0, CDF and MINOS. User access to the SAMGrid services is provided via Python and C++ client APIs, which handle the low-level CORBA calls. Although the use of the SAMGrid APIs is fairly straightforward and very well documented, in practice SAMGrid users face numerous inst ... More
Presented by Dr. Sinisa VESELI on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
Grid computing is becoming a popular way of providing high performance computing for many data intensive, scientific applications. The execution of user applications must simultaneously satisfy both job execution constraints and system usage policies. The SPHINX middleware addresses both these issues. In this paper, we present performance results of SPHINX on Open Science Grid. The simulati ... More
Presented by Sanjay RANKA on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Software Tools and Information Systems
(For the SAMGrid Team) SQLBuilder's purpose is to translate selection criteria in a high-level form to SQL query statements. The internal design is intended to permit easy changes to the selection criteria available and to permit retargeting the specific dialect of SQL generated. The initial target language will be Oracle 9i SQL. The input language will be defined in a formal grammar and i ... More
Presented by Mr. Randolph J. HERBER on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Event processing applications
One of the world's largest time projection chambers (TPC) has been used at STAR for reconstruction of collisions at luminosities yielding thousands of piled-up background tracks, resulting from a few hundred pp minimum-bias background events or several heavy-ion background events, respectively. The combination of TPC tracks and trigger detector data used for tagging of tracks is sufficient to dis ... More
Presented by Dr. Jan BALEWSKI on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Software Components and Libraries
ATLAS has deployed an inter-object association infrastructure that allows the experiment to track at the object level what data have been written and where, and to assign both object-level and process-level labels to identify data objects for later retrieval. This infrastructure provides the foundation for opportunistic run-time navigation to upstream data, and in principle supports both dynam ... More
Presented by Dr. David MALON on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Software Components and Libraries
The ATLAS event data model will almost certainly change over time. ATLAS must retain the ability to read both old and new data after such a change, regulate the introduction of such changes, minimize the need to run massive data conversion jobs when such changes are introduced, and maintain the machinery to support such data conversions when they are unavoidable. In database literature, such c ... More
Presented by Dr. Marcin NOWAK on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Software Tools and Information Systems
The idea of an application database server is not new. It is a key element in multi-tiered architectures and business application frameworks. We present here a paradigm for developing such an application server in a completely schema-independent way. We introduce a Generic Query Object Layer (QOL) and a set of Database/Query Objects (D/QO) as the key components of the multi-layer application server ... More
Presented by Anzar AFAQ on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
ScotGrid is a distributed Tier-2 computing centre formed as a collaboration between the Universities of Durham, Edinburgh and Glasgow, as part of the UK's national particle physics grid, GridPP. This paper describes ScotGrid's current resources by institute and how these were configured to enable participation in the LCG service challenges. In addition, we outline future development plans for ... More
Presented by Dr. Philip CLARK on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
Managing the temporary disk space used by jobs in a farm can be an operational issue. Efforts have focused on controlling this space through the batch scheduler, to make sure a job uses at most the requested amount of space and that this space is cleaned up after the job ends. ScratchFS is a virtual file system that addresses this problem for grid as well as conventional jobs at the fil ... More
Presented by Mr. Leandro FRANCO on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Event processing applications
B tagging is an important tool for separating LHC Higgs events with associated b jets from the Drell-Yan background. We extend the standard neural network (NN) approach using a multilayer perceptron in b tagging [1] to include self-organizing feature maps. We demonstrate the use of self-organizing maps (the SOM_PAK program package) and learning vector quantization (LVQ_PAK). A background di ... More
Presented by Mr. Aatos HEIKKINEN on 15 Feb 2006 at 09:00
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
Monitoring radiation background is a crucial task for the operation of LHC experiments. A project is in progress at CERN for the optimisation of the radiation monitors for LHC experiments. A general, flexibly configurable simulation system based on Geant4, designed to assist the engineering optimisation of LHC radiation monitor detectors, is presented. Various detector packaging configurations ... More
Presented by Dr. Michael MOLL, Dr. Federico RAVOTTI, Dr. Maria Grazia PIA, Dr. Riccardo CAPRA, Dr. Barbara MASCIALINO, Dr. Maurice GLASER on 13 Feb 2006 at 16:54
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
Geant4 is a toolkit for simulating the passage of particles through matter based on the Monte Carlo method. Geant4 incorporates much of the available experimental data and many theoretical models over a wide energy range, extending its application scope not only to high energy physics but also to medical physics, astrophysics, etc. We have developed a simulation framework for a heavy ion therapy system based on G ... More
Presented by Dr. Satoru KAMEOKA on 13 Feb 2006 at 17:12
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
This talk addresses two issues related to the implementation of a variable software description of the ATLAS detector. The first topic is how we implement an evolving description of an evolving ATLAS detector, including special configurations at varying levels of realism, in a way which plugs into the simulation and reconstruction software. The second topic is how time-dependent alignment i ... More
Presented by Vakhtang TSULAIA on 15 Feb 2006 at 15:12
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
At the end of 2004 CMS decided to redesign the software framework used for simulation and reconstruction. The new design includes a completely revised event data model. This new software will be used in the first months of 2006 for the so-called Magnet Test Cosmic Challenge (MTCC). The MTCC is a slice test in which a small fraction of all the CMS detection equipment is expected to be operate ... More
Presented by Dr. Giacomo BRUNO on 16 Feb 2006 at 14:18
Type: poster Session: Poster
Track: Event processing applications
The muon spectrometer of the ATLAS experiment aims at reconstructing very high energy muon tracks (up to 1 TeV) with a transverse momentum resolution better than 10%. For this purpose a resolution of 50 micrometers on the sagitta of tracks has to be achieved. Each muon track is measured with three wire chamber stations placed inside an air-core toroid magnet (the chambers sit around the ... More
Presented by Dr. Valerie GAUTARD on 13 Feb 2006 at 11:00
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
Modern analysis of high energy physics (HEP) data needs advanced statistical tools to separate signal from background. A C++ package has been implemented to provide such tools for the HEP community. The package includes linear and quadratic discriminant analysis, decision trees, bump hunting (PRIM), boosting (AdaBoost), bagging and random forest algorithms, and interfaces to the feedforward ba ... More
Presented by Dr. Ilya NARSKY, Dr. Julian BUNN on 14 Feb 2006 at 17:00
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Jamie SHIERS on 13 Feb 2006 at 11:00
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Paris SPHICAS on 13 Feb 2006 at 11:30
Type: oral presentation Session: Online Computing
Track: Online Computing
DØ, one of two collider experiments at Fermilab's Tevatron, upgraded its DAQ system for the start of Run II. The run started in March 2001, and the DAQ system was fully operational shortly afterwards. The DAQ system is a fully networked system based on Single Board Computers (SBCs) located in VME readout crates which forward their data to a 250 node farm of commodity processors for trigger se ... More
Presented by Gordon WATTS on 14 Feb 2006 at 15:05
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Jos ENGELEN on 13 Feb 2006 at 10:00
Type: oral presentation Session: Online Computing
Track: Online Computing
This paper describes an analysis and conceptual design for the steering of the ATLAS High Level Trigger (HLT). The steering is the framework that organises the event selection software. It implements the key event selection strategies of the ATLAS trigger, which are designed to minimise processing time and data transfers: reconstruction within regions of interest, menu-driven selection and fas ... More
Presented by Mr. Gianluca COMUNE on 14 Feb 2006 at 16:40
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
LHC analysis farms - present at sites collaborating with LHC experiments - have been used in the past for analyzing data coming from an experiment’s production center. Over time such facilities have been provided with high performance storage solutions in order to meet the demand for large capacity and fast processing capabilities. Today, Storage Area Network solutions are commonly deployed a ... More
Presented by Luca MAGNONI, Riccardo ZAPPI on 15 Feb 2006 at 09:00
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
Following on from the LHC experiments’ computing Technical Design Reports, HEPiX, with the agreement of the LCG, formed a Storage Task Force. This group was to: examine the current LHC experiment computing models; attempt to determine the data volumes, access patterns and required data security for the various classes of data, as a function of Tier and of time; consider the current storage ... More
Presented by Dr. Roger JONES on 13 Feb 2006 at 17:00
Type: oral presentation Session: Online Computing
Track: Online Computing
ATLAS is one of the four experiments under construction along the Large Hadron Collider (LHC) ring at CERN. The LHC will produce interactions at a center of mass energy equal to $\sqrt s~=~14~TeV$ at a $40~MHz$ rate. The detector consists of more than 140 million electronic channels. The challenging experimental environment and the extreme detector complexity impose the necessity of a common ... More
Presented by Dr. Wainer VANDELLI on 15 Feb 2006 at 14:20
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
The ATLAS experiment at LHC will start taking data in 2007. As preparative work, a full vertical slice of the final higher level trigger and data acquisition (TDAQ) chain, "the pre-series", has been installed in the ATLAS experimental zone. In the pre-series setup, detector data are received by the readout system and next partially analyzed by the second level trigger (LVL2). On acceptance ... More
Presented by Dr. gokhan UNEL on 13 Feb 2006 at 16:40
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Simon LIN on 17 Feb 2006 at 14:50
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Beat JOST on 17 Feb 2006 at 11:15
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Fons RADEMAKERS on 17 Feb 2006 at 15:50
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Gavin MCCANCE on 17 Feb 2006 at 15:30
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Mr. Markus SCHULZ on 17 Feb 2006 at 15:10
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Lorenzo MONETA on 17 Feb 2006 at 11:55
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Andreas PFEIFFER on 17 Feb 2006 at 14:30
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Gabriele COSMO on 17 Feb 2006 at 11:35
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
The enormous volume of data obtained in scientific experiments often necessitates a suitable graphical representation for analysis. A surface contour is one such graphical representation, rendering a pictorial view that aids easy data interpretation. It is essentially a two-dimensional visualization of a three-dimensional surface plot. Very recently, it has been shown that Super Heavy Elements ... More
Presented by Ms. Niranjani S on 15 Feb 2006 at 15:00
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
A project is in progress for a systematic, rigorous, quantitative validation of all Geant4 physics models against experimental data, to be collected in a Geant4 Physics Book. Due to the complexity of Geant4 hadronic physics, the validation of Geant4 hadronic models proceeds according to a bottom-up approach (i.e. from the lower energy range up to higher energies): this approach allows esta ... More
Presented by Dr. Giacomo CUTTONE, Dr. Francesco DI ROSA, Dr. Susanna GUATELLI, Dr. Aatos HEIKKINEN, Dr. Barbara MASCIALINO, Dr. Giorgio RUSSO, Dr. Maria Grazia PIA, Dr. Giuseppe Antonio Pablo CIRRONE on 13 Feb 2006 at 16:36
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
A working prototype portal for the LHC Computing Grid (LCG) is being customised for use by the T2K 280m Near Detector software group. This portal is capable of submitting jobs to the LCG and retrieving the output on behalf of the user. The T2K-specific development of the portal will create customised submission systems for the suites of production and analysis software being written by the ... More
Presented by Dr. Gidon MOONT on 14 Feb 2006 at 16:40
Type: poster Session: Poster
Track: Distributed Event production and processing
Distributed data management at LHC scales is a staggering task, accompanied by equally challenging practical management issues with storage systems and wide-area networks. The CMS data transfer management system, PhEDEx, is designed to handle this task with minimum operator effort, automating the workflows from large scale distribution of HEP experiment datasets down to reliable and scalable trans ... More
Presented by Timothy Adam BARRASS on 15 Feb 2006 at 09:00
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
A DOE MICS/SciDac funded project, TeraPaths, deployed and prototyped the use of differentiated networking services based on a range of new transfer protocols to support the global movement of data in the high energy physics distributed computing environment. While this MPLS/LAN QoS work specifically targets networking issues at BNL, the experience acquired and expertise developed is expected ... More
Presented by Dr. Dimitrios KATRAMATOS, Dr. Dantong YU on 14 Feb 2006 at 15:00
Type: poster Session: Poster
Track: Computing Facilities and Networking
The purpose of the Teraport project is to provide computing and network infrastructure for a university-based, multi-disciplinary, Grid-enabled analysis platform with superior network connectivity to both domestic and international networks. The facility is configured and managed as part of larger Grid infrastructures, with specific focus on integration and interoperability with the TeraGrid ... More
Presented by Robert GARDNER on 15 Feb 2006 at 09:00
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
Ongoing research has shown that testing grid software is complex. Automated testing mechanisms seem to be widely used, but are critically discussed on account of their efficiency and correctness in finding errors. Especially when programming distributed collaborative systems, structures get complex and systems get more error-prone. Past projects done by the authors have shown that the most i ... More
Presented by Mr. Florian URMETZER on 14 Feb 2006 at 14:40
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
A Directed Acyclic Graph (DAG) can be used to represent a set of programs where the input, output or execution of one or more programs is dependent on one or more other programs. We developed a basic test suite for DAG jobs. It consists of two main parts: a) functionality tests using the CLI (in Perl). The generation of the DAG with arbitrary structure and different JDL attributes for the ... More
Presented by Elena SLABOSPITSKAYA on 15 Feb 2006 at 09:00
Type: oral presentation Session: Online Computing
Track: Online Computing
The Atlas Data Acquisition (DAQ) and High Level Trigger (HLT) software system will initially comprise 2000 PC nodes which take part in the control, event readout, second level trigger and event filter operations. This large number of PCs will only be purchased before data taking in 2007. The large CERN IT lxbatch facility provided the opportunity to run, in July 2005, online functionality ... More
Presented by Mrs. Doris BURCKHART on 15 Feb 2006 at 14:40
Type: oral presentation Session: Online Computing
Track: Online Computing
The data-acquisition software framework DATE for the ALICE experiment at the LHC has evolved over a period of several years. The latest version, DATE V5, is geared for deployment during the test and commissioning phase. The DATE software is designed to run on several hundred machines installed with Scientific Linux CERN (SLC) to handle the data streams of approximately 400 optical Detector ... More
Presented by Klaus SCHOSSMAIER on 13 Feb 2006 at 14:00
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
The ALICE Offline framework is now in its 8th year of development and is close to being used for data taking. This talk will provide a short description of the history of AliRoot and then describe the latest developments. The newly added alignment framework, based on the ROOT geometrical modeller, will be described. The experience with the FLUKA Monte Carlo used for full detector simulatio ... More
Presented by Federico CARMINATI on 14 Feb 2006 at 16:00
Type: poster Session: Poster
Track: Distributed Event production and processing
Since 2001 the ALICE Computing Team has developed a distributed computing environment implementing a Grid paradigm under the name AliEn. With the evolution of the middleware provided by various large grid projects in Europe and in the US (EGEE, OSG, ARC), a number of services originally provided by AliEn are now supplied and maintained by the corresponding Grid infrastructures. AliEn has therefore evo ... More
Presented by Predrag BUNCIC on 15 Feb 2006 at 09:00
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
We present the AMGA (ARDA Metadata Grid Application) metadata catalog, which is a part of the gLite middleware. AMGA provides a very lightweight metadata service as well as basic database access functionality on the Grid. Following a brief overview of the AMGA design, functionality, implementation and security features, we will show performance comparisons of AMGA with direct database access a ... More
Presented by Dr. Birger KOBLITZ on 14 Feb 2006 at 17:40
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
The ATLAS Computing Model is under continuous development. Previous exercises focussed on the Tier-0/Tier-1 interactions, with an emphasis on the resource implications and only a high-level view of the data and workflow. The work presented here attempts to describe in some detail the data and control flow from the High Level Trigger farms all the way through to the physics user. The current fo ... More
Presented by Dr. Roger JONES on 13 Feb 2006 at 16:00
Type: oral presentation Session: Online Computing
Track: Online Computing
The Trigger and Data Acquisition system (TDAQ) of the ATLAS experiment at the CERN Large Hadron Collider is based on a multi-level selection process and a hierarchical acquisition tree. The system, consisting of a combination of custom electronics and commercial products from the computing and telecommunication industry, is required to provide an online selection power of 10^5 and a total throu ... More
Presented by Dr. Benedetto GORINI on 14 Feb 2006 at 14:00
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
The simulation program for the ATLAS experiment at CERN is currently fully operational and integrated into ATLAS’s common analysis framework, ATHENA. The OO approach, based on GEANT4 and in use during the DC2 data challenge, has been interfaced to ATHENA and to GEANT4 using the LCG dictionaries and Python scripting. The robustness of the application was proved during the DC2 ... More
Presented by Prof. Adele RIMOLDI on 13 Feb 2006 at 17:30
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
The event data model (EDM) of the ATLAS experiment is presented. For a large collaboration like the ATLAS experiment, common interfaces and data objects are a necessity to ensure easy maintenance and coherence of the experiment's software platform over a long period of time. The ATLAS EDM improves commonality across the detector subsystems and subgroups such as trigger, test beam reconstruction, ... More
Presented by Dr. Edward MOYSE on 14 Feb 2006 at 14:54
Type: oral presentation Session: Distributed Data Analysis
Track: Distributed Data Analysis
The ATLAS strategy follows a service oriented approach to provide Distributed Analysis capabilities to its users. Based on initial experiences with an Analysis service, the ATLAS production system has been evolved to support analysis jobs. As the ATLAS production system is based on several grid flavours (LCG, OSG and Nordugrid), analysis jobs will be supported by specific executors on the diff ... More
Presented by Dr. Dietrich LIKO on 14 Feb 2006 at 16:00
Type: poster Session: Poster
Track: Online Computing
The ATLAS experiment at the LHC proton-proton collider at CERN will be faced with several technological challenges. A three-level trigger and data acquisition system has been designed to reduce the 40 MHz bunch-crossing frequency, corresponding to an interaction rate of 1 GHz at the design instantaneous luminosity, to the ~100 Hz allowed by the permanent storage system. The capability to sel ... More
Presented by Dr. Antonio SIDOTI on 15 Feb 2006 at 09:00
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
The BESIII is a general-purpose experiment for studying electron-positron collisions at BEPCII, which is currently under construction at IHEP, Beijing. The BESIII offline software system is built on the Gaudi architecture. This contribution describes the BESIII-specific framework implementation for offline data processing and physics analysis. We will also present the development status of ... More
Presented by Dr. Weidong LI on 14 Feb 2006 at 16:18
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
(For the CMS Collaboration) Since CHEP04 in Interlaken, the CMS experiment has developed a baseline Computing Model and a Technical Design for the computing system it expects to need in the first years of LHC running. Significant attention was focused on the development of a data model with heavy streaming at the level of the RAW data based on trigger physics selections. We expect that this ... More
Presented by Dr. Jose HERNANDEZ on 15 Feb 2006 at 17:40
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
Monte Carlo simulations are a critical component of physics analysis in a large HEP experiment such as CMS. The validation of the simulation software is therefore essential to guarantee the quality and accuracy of the Monte Carlo samples. CMS is developing a Simulation Validation Suite (SVS) consisting of a set of packages associated with the different sub-detector systems: tracker, electromagn ... More
Presented by Dr. Victor Daniel ELVIRA on 14 Feb 2006 at 15:00
Type: poster Session: Poster
Track: Event processing applications
The design goal of the CMS electromagnetic calorimeter is to reach an excellent energy resolution; several aspects concur in the fulfillment of this ambitious goal. An enormous quantity of hardware monitoring data will be available, together with a laser monitoring system that will be able to follow quasi-online the change of transparency of the crystals due to radiation damage. This result i ... More
Presented by Dr. Paolo MERIDIANI on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Event processing applications
The event data model for the ATLAS calorimeters in the reconstruction software is described, starting from the raw data to the analysis domain calorimeter data. The data model includes important features like compression strategies with insignificant loss of signal precision, flexible and configurable data content for high level reconstruction objects, and backward navigation from the analysis ... More
Presented by Walter LAMPL on 13 Feb 2006 at 11:00
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
We describe the Capone workflow manager, which was designed to work for Grid3 and the Open Science Grid. It has been used extensively to run ATLAS managed and user production jobs during the past year but has undergone major redesigns to improve reliability and scalability as a result of lessons learned (cite Prod paper). This paper introduces the main features of the new system, covering job ... More
Presented by Marco MAMBELLI on 14 Feb 2006 at 14:20
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
DESY operates some thousand computers, based on different operating systems. On servers and workstations not only the operating systems but also many centrally supported software systems are in use. Most of these operating and software systems come with their own user and account management tools. Typically they do not know of each other, which makes life harder for users, who have to r ... More
Presented by Mr. Dirk JAHNKE-ZUMBUSCH on 16 Feb 2006 at 14:40
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
Releasing software for projects with large code bases is a challenging task. When developers are geographically dispersed, often in different time zones, coordination can be difficult. A successful release strategy is therefore paramount and clear guidelines for all the stages of software development are required. The CMS experiment recently started a major refactoring of its simulation, r ... More
Presented by Stefano ARGIRO on 14 Feb 2006 at 16:40
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
The past decade has been an era of sometimes tumultuous change in the area of Computing for High Energy Physics. This talk addresses the evolution of databases in HEP, starting from the LEP era and the visions presented during the CHEP 92 panel "Databases for High Energy Physics" (D. Baden, B. Linder, R. Mount, J. Shiers). It then reviews the rise and fall of Object Databases as a "one size fi ... More
Presented by Dr. Jamie SHIERS on 13 Feb 2006 at 15:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
Monitoring a large-scale computing facility is evolving from a passive to a more active role in the LHC era, from monitoring the health, availability and performance of the facility to taking a more active and automated role in restoring availability, updating software and becoming a meta-scheduler for batch systems. This talk will discuss the experiences of the RHIC and ATLAS U.S. Tier 1 C ... More
Presented by Dr. Tony CHAN on 13 Feb 2006 at 11:00
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
The German Ministry for Education and Research announced a 100 million euro German e-science initiative focused on Grid computing, e-learning and knowledge management. In a first phase, started in September 2005, the Ministry has made available 17 million euro for D-Grid, which currently comprises six research consortia: five community grids - HEP-Grid (high-energy physics), Astro-Grid (astronomy ... More
Presented by Dr. Peter MALZACHER on 13 Feb 2006 at 14:40
Type: poster Session: Poster
Track: Software Components and Libraries
Statistical methods play a significant role throughout the life-cycle of high energy physics experiments. Only a few basic tools for statistical analysis were available in the public domain FORTRAN libraries for high energy physics. Nowadays the situation has hardly changed, even among the libraries of the new generation. The project presented here, currently in progress, develops object-oriented software to ... More
Presented by Dr. Maria Grazia PIA, Dr. Barbara MASCIALINO, Dr. Andreas PFEIFFER, Dr. Alberto RIBON, Dr. Paolo VIARENGO on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
Continuing the UK's strong involvement with Grid computing, the GridPP2 project (2004--2007) has established a group to investigate the use of metadata within HEP Grid computing. Three posts (based at Glasgow) are dedicated to metadata, but the group includes others working for CERN, various LHC experiments, EGEE and further afield. An important aspect of the group's work is to provide a foru ... More
Presented by Dr. Paul MILLAR on 15 Feb 2006 at 09:00
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
We describe the purpose, architectural definition, deployment and operational processes for the Integration Testbed (ITB) of the Open Science Grid (OSG). The ITB has been successfully used to integrate a set of functional interfaces and services required for the OSG Deployment Activity, leading to two major deployments of the OSG grid infrastructure. We discuss the methods and logical archite ... More
Presented by Robert GARDNER on 13 Feb 2006 at 14:20
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
In the context of the LCG Applications Area, the SPI (Software Process and Infrastructure) project provides several services to the users in the LCG projects and the experiments (mainly at the LHC). These services comprise the CERN Savannah bug-tracking service, the external software service, and services concerning configuration management and application builds, as well as software testing an ... More
Presented by Dr. Andreas PFEIFFER on 13 Feb 2006 at 17:20
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
The LCG Service Challenges are aimed at achieving the goal of a production quality world-wide Grid that meets the requirements of the LHC experiments in terms of functionality and scale. This talk highlights the main goals of the Service Challenge programme, significant milestones as well as the key services that have been validated in production by the 4 LHC experiments. The LCG Service Ch ... More
Presented by Dr. Jamie SHIERS on 13 Feb 2006 at 14:00
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
The H1 Experiment at HERA records electron-proton collisions provided by beam crossings at a frequency of 10 MHz. The detector has about half a million readout channels and the data acquisition allows logging of about 25 events per second with a typical size of 100 kB. The increased event rates after the upgrade of the HERA accelerator at DESY led to a more demanding usage of computing and stora ... More
Presented by Mr. Christoph WISSING on 14 Feb 2006 at 16:20
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Les ROBERTSON on 15 Feb 2006 at 09:30
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
The LHCb alignment framework allows clients of the LHCb detector description software suite (DetDesc) to modify the position of components of the detector at run-time and see the changes propagated to all users of the detector geometry. DetDesc is used in the simulation, digitization and reconstruction phases of data processing and the alignment framework is available in all these stages. ... More
Presented by Dr. JUAN PALACIOS on 15 Feb 2006 at 16:54
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
The LHCb Conditions Database (CondDB) project aims to provide the necessary tools to handle non-event time-varying data. The LCG project COOL provides a generic API to handle this type of data and an interface to it has been integrated into the LHCb framework Gaudi. The interface is based on the Persistency Service infrastructure of Gaudi, allowing the user to load it at run-time only if nee ... More
Presented by Marco CLEMENCIC on 13 Feb 2006 at 17:20
Type: oral presentation Session: Online Computing
Track: Online Computing
LHCb is one of the four experiments currently under construction at CERN's LHC accelerator. It is a single arm spectrometer designed to study CP violation in the B-meson system with high precision. This paper will describe the LHCb online system, which consists of three sub-systems: - The Timing and Fast Control (TFC) system, responsible for distributing the clock and trigger decisions together ... More
Presented by Dr. Beat JOST on 13 Feb 2006 at 15:05
Type: oral presentation Session: Online Computing
Track: Online Computing
This paper introduces the Log Service, developed at CERN within the ATLAS TDAQ/DCS framework. This package remedies the long standing problem of attempting to direct messages to the standard output and/or error in diskless nodes with no terminal. The Log Service provides a centralized mechanism for archiving and retrieving qualified information (Log Messages) created by TDAQ applications (Log ... More
Presented by Dr. Benedetto GORINI on 15 Feb 2006 at 16:20
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
A new project for advanced simulation technology in radiotherapy was launched in October 2003 with funding from JST (Japan Science and Technology Agency) in Japan. The project aim is to develop an ample set of simulation packages for radiotherapy based on Geant4, in collaboration between Geant4 developers and medical users. They need much more computing power and strong security for accurate and high ... More
Presented by Go IWAI on 16 Feb 2006 at 15:00
Type: poster Session: Poster
Track: Computing Facilities and Networking
The Midwest U.S. ATLAS Tier2 facility being deployed jointly by the University of Chicago and Indiana University is described in terms of a set of functional capabilities and operational provisions in support of ATLAS managed Monte Carlo production and distributed analysis of datasets by individual physicist-users. We describe a two-site shared systems administration model as well as the archit ... More
Presented by Robert GARDNER on 15 Feb 2006 at 09:00
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
The new CMS Event Data Model and Framework that will be used for the high level trigger, reconstruction, simulation and analysis is presented. The new framework is centered around the concept of an Event. A data processing job is composed of a series of algorithms (e.g., a track finder or track fitter) that run in a particular order. The algorithms only communicate via data stored in the ... More
Presented by Dr. Christopher JONES on 14 Feb 2006 at 15:12
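The Event-centric design described in this abstract, where algorithms communicate only through labeled products stored in the Event, can be illustrated with a minimal sketch. This is a hypothetical toy, not the CMS API: the `Event` class, the product labels, and the two toy algorithms are all invented here for illustration.

```python
class Event:
    """Toy event store (hypothetical, not the CMS framework API).

    Algorithms never call each other directly; they read and write
    labeled products in the Event, so any module ordering that
    respects the data dependencies is valid.
    """
    def __init__(self):
        self._products = {}

    def put(self, label, product):
        # products are write-once, enforcing reproducible provenance
        if label in self._products:
            raise KeyError(f"product '{label}' already present")
        self._products[label] = product

    def get(self, label):
        return self._products[label]


def track_finder(event):
    # toy "finder": pair up raw hits into track candidates
    hits = event.get("hits")
    event.put("candidates", [hits[i:i + 2] for i in range(0, len(hits), 2)])


def track_fitter(event):
    # toy "fitter": summarize each candidate by its mean position
    cands = event.get("candidates")
    event.put("tracks", [sum(c) / len(c) for c in cands])


# a processing job is just an ordered sequence of algorithms
event = Event()
event.put("hits", [1.0, 2.0, 3.0, 4.0])
for algorithm in (track_finder, track_fitter):
    algorithm(event)
print(event.get("tracks"))  # → [1.5, 3.5]
```

The point of the pattern is that `track_fitter` depends on `track_finder` only through the "candidates" product, so modules can be reordered, replaced, or run in separate jobs as long as the products they consume exist.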
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
We report on the status and plans for the Open Science Grid Consortium, an open, shared national distributed facility in the US which supports a multi-disciplinary suite of science applications. More than fifty University and Laboratory groups, including 2 in Brazil and 3 in Asia, now have their resources and services accessible to OSG. 16 Virtual Organizations have registered their users ... More
Presented by Frank WUERTHWEIN, Ruth PORDES on 13 Feb 2006 at 14:00
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
We have initiated a repository of tools, software, and documentation for statistical techniques used in HEP and related physics disciplines. Fermilab is to assume custodial responsibility for the operation of this Phystat repository, which will be in the nature of an open archival repository. Submissions of appropriate packages, papers, modules and code fragmen ... More
Presented by Philippe CANAL on 14 Feb 2006 at 16:20
Type: poster Session: Poster
Track: Online Computing
The BESIII “readout” serves as an interface between the DAQ framework and the FEEs. As a part of the DAQ system, the readout plays a very important role in the process of data acquisition. The principal functionality of the Readout Crate is to receive, repack, buffer and forward the data coming from the FEEs to the Readout PC. The implementation is based on commercial components: VMEbus PowerPC based single board ... More
Presented by Mr. GUANGKUN LEI on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
The SAM-Grid system is an integrated data, job, and information management infrastructure. The SAM-Grid addresses the distributed computing needs of the experiments of RunII at Fermilab. The system typically relies on SAM-Grid services deployed at the remote facilities in order to manage the computing resources. Such deployment requires special agreements with each resource provider and it is ... More
Presented by Garzoglio GABRIELE on 13 Feb 2006 at 11:00
Type: poster Session: Poster
Track: Online Computing
We describe a new, high-speed trigger network for the STAR detector at RHIC to be used during the upcoming 2006 run and thereafter. The STAR Trigger Data Pusher (STP) replaces the off-the-shelf Myrinet network used in the STAR trigger system during the first five RHIC runs. The STP will lower latencies and increase bandwidth through the trigger system. Custom electronics provide flexibility in ... More
Presented by Mr. Chris PERKINS on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Event processing applications
We will report on a set of studies we have conducted to assess the feasibility of measuring the polarization of lambda_b hyperons in the CERN ATLAS experiment by making the first successful adaptation of the generation package EvtGen for polarized spin-1/2 particles. The simulations were based on the ATLAS version of EvtGen, a product of the ATLAS EvtGen project, reported in other ATLAS abs ... More
Presented by Prof. Homer Alfred NEAL on 15 Feb 2006 at 09:00
Type: oral presentation Session: Distributed Event production and Processing
Track: Distributed Event production and processing
The roles of centralized and distributed storage at the RHIC/USATLAS Computing Facility have been undergoing a redefinition as the size and demands of computing resources continues to expand. Traditional NFS solutions, while simple to deploy and maintain, are marred by performance and scalability issues, whereas distributed software solutions such as PROOF and rootd are application specific, n ... More
Presented by Robert PETKUS on 13 Feb 2006 at 17:00
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
We describe an event visualization package in use in ATLAS. The package is based upon Open Inventor and its HEPVIs extensions. It is integrated into ATLAS's analysis framework, is modular and open to user extensions, co-displays the real detector description/simulation (GeoModel/GEANT) geometry together with event data, and renders in real time on regular laptop computers, using their availa ... More
Presented by Vakhtang TSULAIA on 15 Feb 2006 at 16:40
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
Currently, grid development projects require end users to be authenticated under the auspices of a "recognized" organization, called a Virtual Organization (VO). A VO establishes resource-usage agreements with grid resource providers. The VO is responsible for authorizing its members and optionally assigning them to groups and roles within the VO. This enables fine-grained authorization at ... More
Presented by Mrs. Tanya LEVSHINA on 15 Feb 2006 at 17:00
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
The size and geographical diversity of the LHC collaborations present new challenges for communication and training. The Web Lecture Archive Project (WLAP), a joint project between the University of Michigan and CERN Academic and Technical Training, has been involved in recording, archiving and disseminating physics lectures and software tutorials for CERN and the ATLAS Collaboration since 199 ... More
Presented by Mr. Jeremy HERR, Dr. Steven GOLDFARB on 15 Feb 2006 at 15:00
Type: poster Session: Poster
Track: Distributed Event production and processing
The HERA luminosity upgrade and enhancements of the detector have led to considerably increased demands on computing resources for the ZEUS experiment. In order to meet these higher requirements, the ZEUS computing model has been extended to support computations in the Grid environment. We show how to use the Grid services in the production system of a real experiment and point out the main ... More
Presented by Mr. Krzysztof WRONA on 15 Feb 2006 at 09:00
Type: oral presentation Session: Online Computing
Track: Online Computing
The needs of the ATLAS experiment at the upcoming LHC accelerator at CERN, in terms of data transmission rates and processing power, require a large cluster of computers (of the order of thousands) administered and exploited in a coherent and optimal manner. Requirements like stability, robustness and fast recovery in case of failure impose a server-client system architecture with servers distribu ... More
Presented by Dr. marc DOBSON on 13 Feb 2006 at 14:45
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
In this paper we report on the lessons learned from the Middleware point of view while running the gLite File Transfer Service (FTS) on the LCG Service Challenge 3 setup. The FTS has been designed based on the experience gathered from the Radiant service used in Service Challenge 2, as well as the CMS Phedex transfer service. The first implementation of the FTS was put to use in the beginning ... More
Presented by Mr. Paolo BADINO on 14 Feb 2006 at 16:00
Type: oral presentation Track: Software Components and Libraries
As an active participant in the international C++ standardization effort, Fermilab has contributed significant expertise toward the analysis and design of a random-number facility suitable for incorporation into the forthcoming update to the C++ standard. A first version of this design has been promulgated as part of a recently-approved Technical Report issued by the C++ Working Group of the ... More
Presented by W. E. BROWN
Type: poster Session: Poster
Track: Online Computing
The ATLAS Level-1 Barrel system is devoted to identify muons crossing the two outer Resistive Plate Chambers stations of the Barrel spectrometer, passing a set of programmable pT thresholds, to find their position with a granularity of Delta Eta x Delta Phi = 0.1 x 0.1, and to associate them to a specific bunch crossing number. The system sends this trigger information to the Central Trigger Proces ... More
Presented by Stefano VENEZIANO on 15 Feb 2006 at 09:00
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
Various systematic physics and detector performance studies with the ATLAS detector require very large event samples. To generate those samples, a fast simulation technique is used instead of the full detector simulation, which often takes too much effort in terms of computing time and storage space. The widely used ATLAS fast simulation program ATLFAST, however, is based on initial four moment ... More
Presented by Mr. Andreas SALZBURGER on 14 Feb 2006 at 14:18
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
The ongoing evolution from packet-based networks to hybrid networks in research & education (R&E) networks, or: what are the fundamental reasons behind the growing gap between commercial and R&E networks? As exemplified by the Internet2 HOPI initiative (http://networks.internet2.edu/hopi/), the new GEANT2 backbone (http://www.dante.net/server/show/nav.00100f00d) and projects such as Drago ... More
Presented by Mr. Olivier MARTIN
Type: poster Session: Poster
Track: Software Components and Libraries
Aiming to provide and support a coherent set of libraries, the mathematical functionality of the ROOT project has been reorganized following a merge of the ROOT and SEAL activities. Two new libraries, coded in C++, have been released in ROOT version 5: MathCore (basic functionality) and MathMore (functionality for advanced users). We present the structure and design of these new libraries, i ... More
Presented by Dr. Lorenzo MONETA on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Computing Facilities and Networking
Florida International University (FIU), in collaboration with partners at Florida State University (FSU), the University of Florida (UF), and the California Institute of Technology (Caltech), in cooperation with the National Science Foundation, are creating and operating an interregional Grid-enabled Center for High-Energy Physics Research and Educational Outreach (CHEPREO) at FIU, encom ... More
Presented by Heidi ALVAREZ, Dr. Paul AVERY on 15 Feb 2006 at 09:00
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
In this presentation we will discuss the design and functioning of a new tool that runs the ATLAS High Level Trigger Software on Event Summary Data (ESD) files, the format for data analysis in the experiment. An example of how to implement a sequence of algorithms based on the electrons selection will be shown.
Presented by Dr. Cibran SANTAMARINA RIOS on 15 Feb 2006 at 14:40
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
The objective of the paper is to advance the research in component-based software development by including agent-oriented software engineering techniques. Agent-oriented component-based software development is the next step after object-oriented programming, promising to overcome problems such as reusability and complexity that have not yet been solved adequately with object-oriented ... More
Presented by Mr. Deepak NARASIMHA on 13 Feb 2006 at 14:40
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
Modern tracking detectors are composed of a large number of modules assembled in a hierarchy of support structures. The sensor modules are assembled in ladders or petals. Ladders and petals in turn are assembled in cylindrical or disk-like layers and layers are assembled to make a complete tracking device. Sophisticated geometrical calibration is essential in this kind of detector system in ... More
Presented by Mr. Tapio LAMPEN, Mr. Tapio LAMPEN on 15 Feb 2006 at 14:54
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
An overview of the online reconstruction algorithms for the ALICE Time Projection Chamber and Inner Tracking System is given. Both the tracking efficiency and the time performance of the algorithms are presented in detail. The application of the tracking algorithms in possible high transverse momentum jet and open charm triggers is discussed.
Presented by Marian IVANOV on 14 Feb 2006 at 17:48
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
Track finding and fitting algorithms based on Kalman filtering are presented for the ALICE barrel detectors: the Time Projection Chamber (TPC), the Inner Tracking System (ITS) and the Transition Radiation Detector (TRD). The filtering algorithm is able to cope with non-Gaussian noise and ambiguous measurements in high-density environments. The approach has been implemented within the ALICE simulation/reconst ... More
Presented by Marian IVANOV on 15 Feb 2006 at 16:00
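The Kalman-filter track fitting mentioned in this abstract can be sketched in one dimension. This is a generic illustrative example, not the ALICE implementation: real track fitters propagate a 5-parameter helix state through a magnetic field with material corrections, whereas here the state is just (position, slope) measured layer by layer, and all parameter values are invented for the sketch.

```python
import numpy as np

def kalman_track_1d(measurements, sigma_meas, process_noise=1e-4):
    """Minimal 1-D Kalman track fit sketch: state x = (position, slope).

    Each detector layer contributes one position measurement; the
    state is transported one unit step between layers.
    """
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # transport: pos += slope
    H = np.array([[1.0, 0.0]])                # we measure position only
    Q = process_noise * np.eye(2)             # process noise (scattering proxy)
    R = np.array([[sigma_meas ** 2]])         # measurement noise

    x = np.array([measurements[0], 0.0])      # seed state from first hit
    P = np.eye(2) * 1e3                       # large initial covariance
    for m in measurements[1:]:
        # predict: transport state and covariance to the next layer
        x = F @ x
        P = F @ P @ F.T + Q
        # update: weight the new hit against the prediction
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x = x + (K @ (np.array([m]) - H @ x))
        P = (np.eye(2) - K @ H) @ P
    return x, P
```

Fed exact straight-line hits [0, 1, 2, 3, 4], the fitted state converges to position 4 and slope 1; the same update loop also tolerates noisy hits, which is the property the abstract highlights for high-density environments.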
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
This talk presents new methods to address the problem of muon track identification in the monitored drift tube chambers (MDT) of the ATLAS Muon Spectrometer. Pattern recognition techniques employed by the current reconstruction software suffer when exposed to the high background rates expected at the LHC. We propose new techniques, exploiting existing knowledge of the detector performance ... More
Presented by Mr. David PRIMOR on 15 Feb 2006 at 14:00
Type: poster Session: Poster
Track: Distributed Event production and processing
The CDF software model was developed with dedicated resources in mind. One of the main assumptions is to have a large set of executables, shared libraries and configuration files on a shared file system. As CDF is moving toward a Grid model, this assumption is limiting the general physics analysis to only a small set of CDF friendly sites with the appropriate file system installed. ... More
Presented by Dr. Igor SFILIGOI on 13 Feb 2006 at 11:00
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
UltraLight is a collaboration of experimental physicists and network engineers whose purpose is to provide the network advances required to enable petabyte-scale analysis of globally distributed data. Current Grid-based infrastructures provide massive computing and storage resources, but are limited by their treatment of the network as an external, passive, and largely unmanaged res ... More
Presented by Richard CAVANAUGH on 13 Feb 2006 at 16:40
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
We will describe the networking details of NSF-funded UltraLight project and report on its status. The project’s goal is to meet the data-intensive computing challenges of the next generation of particle physics experiments with a comprehensive, network-focused agenda. The UltraLight network is a hybrid packet- and circuit-switched network infrastructure employing both “ultrascale” proto ... More
Presented by Shawn MC KEE on 14 Feb 2006 at 17:00
Type: oral presentation Session: Online Computing
Track: Online Computing
The Belle experiment, which is a B-factory experiment at KEK in Japan, is currently taking data with a DAQ system based on FASTBUS readout, switchless event building and a higher level trigger (HLT) farm. To cope with a higher trigger rate from the expected sizeable increase in the accelerator luminosity in coming years, the upgrade of the DAQ system is in progress. FASTBUS modules are being re ... More
Presented by Prof. Ryosuke ITOH on 13 Feb 2006 at 16:00
Type: oral presentation Session: Event Processing Applications
Track: Event processing applications
The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. We review the progress achieved since the last CHEP Conference on the physics models, and some recent applications. From the point of view of hadronic physics, most of the effort is still in the fi ... More
Presented by Lawrence S. PINSKY on 13 Feb 2006 at 14:18
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
Numerical simulations of QCD formulated on the lattice (LQCD) require a huge amount of computational resources. Grid technologies can help to improve exploitation of these precious resources, e.g. by sharing the produced data on a global level. The International Lattice DataGrid (ILDG) has been founded to define the required standards needed for a grid infrastructure to be used for research on ... More
Presented by Dr. Dirk PLEITER on 16 Feb 2006 at 14:40
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
Huge requirements on computing resources have made it difficult to run Frameworks of some new HEP experiments on the users' personal workstations. Fortunately, new software technology allows us to give users back at least a bit of the user-friendliness they were used to in the past. A Java Analysis Studio (JAS) plugin has been developed, which accesses the Python API of the Atlas Offline Frame ... More
Presented by Dr. Julius HRIVNAC on 15 Feb 2006 at 17:00
Type: poster Session: Poster
Track: Software Components and Libraries
DØ is a traditional High Energy Physics collider experiment located at the Tevatron at Fermilab. As in recent past and most future experiments, almost all computing work is done on Linux using standard open source tools like the gcc compiler, the make utility, and ROOT. I have been using the Microsoft platform for quite some time to develop physics tools and algorithms. Once developed cod ... More
Presented by Gordon WATTS on 15 Feb 2006 at 09:00
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
At GridKa an initial capacity of 1.5 PB online and 2 PB background storage is needed for the LHC start in 2007. Afterwards the capacity is expected to grow almost exponentially. No computing site will be able to keep this amount of data in online storage, hence a highly accessible tape connection is needed. This paper describes a high-performance connection of the online storage to an IBM Tivo ... More
Presented by Dr. Doris RESSMANN on 13 Feb 2006 at 14:20
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
The STAR Collaboration is currently migrating its simulation software, based on Geant3, to the ROOT-based Virtual Monte Carlo framework. One critical component of the framework is the mechanism of the Geometry Description, which comprises both the geometry model as used in the application, and the external language that allows the users to define and maintain the detector configuration on the o ... More
Presented by Dr. Maxim POTEKHIN on 15 Feb 2006 at 14:40
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
The data production and analysis system of the BaBar Experiment has evolved through a series of changes since the day the first data were taken in May 1999. The changes, in particular, have also involved persistent technologies used to store the event data as well as a number of related databases. This talk is about CDB - the distributed Conditions Database of the BaBar Experiment. The curre ... More
Presented by Dr. Douglas SMITH on 13 Feb 2006 at 17:00
Type: oral presentation Session: Software Components and Libraries
Track: Software Components and Libraries
This talk presents an overview of the main components of a unique set of tools, in use in the STAR experiment, born from the fusion of two advanced technologies: the ROOT framework and libraries and the Qt GUI and event handling package. Together, they allow the creation of software packages and help resolve complex data-analysis or visualization problems, enhance computer simulation or help dev ... More
Presented by Dr. Valeri FINE on 15 Feb 2006 at 16:20
Type: poster Session: Poster
Track: Event processing applications
We present an investigation to validate Geant4 [1] Bertini cascade nuclide production by proton- and neutron-induced reactions on various target elements [2]. The production of residual nuclides is calculated in the framework of an intra-nuclear cascade, pre-equilibrium, fission, and evaporation model [3]. A 132 CPU Opteron Linux cluster running the NPACI Rocks Cluster Distribution [4, 5] ba ... More
Presented by Aatos HEIKKINEN on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Software Components and Libraries
We present a short communication on work done at LAL to visualize, within the OnX interactive environment, HEP geometries accessed through the VGM abstract interfaces. VGM and OnX were presented at CHEP'04 in Interlaken.
Presented by Mr. Laurent GARNIER on 13 Feb 2006 at 11:00
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
To satisfy the demands of data intensive grid applications it is necessary to move to far more synergetic relationships between applications and networks. The main objective of the VINCI project is to enable data intensive applications to efficiently use and coordinate shared, hybrid network resources, to improve the performance and throughput of global-scale grid systems, such as those used i ... More
Presented by Iosif LEGRAND on 16 Feb 2006 at 14:00
Type: poster Session: Poster
Track: Online Computing
We describe a VLSI implementation based on FPGA of a new greedy algorithm for approximating minimum set covering in ad hoc wireless network applications. The implementation makes the algorithm suitable for embedded and real-time architectures.
Presented by Dr. paolo BRANCHINI on 13 Feb 2006 at 11:00
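The greedy set-cover approximation referenced in this abstract is a standard algorithm: repeatedly pick the subset that covers the most still-uncovered elements, which yields a ln(n)-factor approximation of the minimum cover. A software sketch of the algorithm (not the FPGA implementation described in the talk, and with invented example data) might look like:

```python
def greedy_set_cover(universe, subsets):
    """Greedy approximation for minimum set cover.

    Repeatedly selects the subset covering the most uncovered
    elements until the whole universe is covered.
    """
    uncovered = set(universe)
    cover = []
    while uncovered:
        # pick the subset with the largest gain in coverage
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        gained = uncovered & set(best)
        if not gained:
            raise ValueError("universe not coverable by given subsets")
        cover.append(best)
        uncovered -= gained
    return cover

# example: 5 nodes, 4 candidate coverage sets -> greedy cover of size 2
cover = greedy_set_cover({1, 2, 3, 4, 5},
                         [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}])
print(cover)  # → [{1, 2, 3}, {4, 5}]
```

The inner `max` step is what a hardware implementation would parallelize: each candidate subset's overlap with the uncovered set can be computed concurrently, which is presumably what makes the algorithm attractive for an FPGA in real-time ad hoc network applications.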
Type: poster Session: Poster
Track: Grid middleware and e-Infrastructure operation
The development of the grid and the acquisition of large clusters to support major HEP experiments on the grid have triggered different requests. One is from local physicists from the major VOs to have privileged access to their resources, and the second is to support smaller groups that will never have access to this amount of resources. Unfortunately both these categories of users ... More
Presented by Dr. Alessandra FORTI on 15 Feb 2006 at 09:00
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
One problem in distributed computing is bringing together application developers and resource providers to ensure that applications work well on the resources provided. A layer of abstraction between resources and applications provides new possibilities in designing Grid solutions. This paper compares different virtualisation environments, among which are Xen (developed at the Uni ... More
Presented by Mr. Marcus HARDT on 15 Feb 2006 at 15:00
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
The CMS tracker has more than 50 million channels organized in 16540 modules, each one being a complete detector. Its monitoring requires the creation, analysis and storage of at least 4 histograms per module to be done every few minutes. The analysis of these plots will be done by computer programs that will check the data against some reference plots and send alarms to the operator in case o ... More
Presented by Mr. Giulio EULISSE on 13 Feb 2006 at 16:00
Type: oral presentation Session: Grid Middleware and e-Infrastructure Operation
Track: Grid middleware and e-Infrastructure operation
GridSite has extended the industry-standard Apache webserver for use within Grid projects, both by adding support for Grid security credentials such as GSI and VOMS, and with the GridHTTP protocol for bulk file transfer via HTTP. We describe how GridHTTP combines the security model of X.509/HTTPS with the performance of Apache, in local and wide area bulk transfer applications. GridSite also s ... More
Presented by Dr. Andrew MCNAB on 15 Feb 2006 at 17:20
Type: oral presentation Session: Plenary
Track: Plenary
Welcome by Director, TIFR; Address by Governor, Maharashtra; National Anthem
on 17 Feb 2006 at 12:40
Type: oral presentation Session: Software Tools and Information Systems
Track: Software Tools and Information Systems
ATLAS Trigger & DAQ software, with six Gbytes per release, will be installed in about two thousand machines in the final system. Already during the development phase, it is tested and debugged in various Linux clusters of different sizes and network topologies. For the distribution of the software across the network there are, at least, two possible approaches: fixed routing points, and adaptiv ... More
Presented by Hegoi GARITAONANDIA ELEJABARRIETA on 14 Feb 2006 at 14:00
Type: poster Session: Poster
Track: Computing Facilities and Networking
Virtualization is a methodology of dividing the resources of a computer into multiple execution environments, by applying one or more concepts or technologies such as hardware and software partitioning, time-sharing, partial or complete machine simulation, emulation, quality of service, and many others. These techniques can be used to consolidate the workloads of several under-utilized server ... More
Presented by Mr. Francesco Maria TAURINO on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Distributed Data Analysis
XrdSec is the security framework developed in the context of the XROOTD project. It provides a high-level abstract security interface for client-server applications. Concrete implementations of the interface can be written for any security protocol as plugin libraries, where all technical details about the protocol are confined. Clients and server administrators can configure the system behavi ... More
Presented by Gerardo GANIS on 15 Feb 2006 at 09:00
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
apeNEXT is the latest generation of massively parallel machines optimized for simulating QCD formulated on a lattice (LQCD). In autumn 2005 the commissioning of several large-scale installations of apeNEXT started, which will provide a total of 15 TFlops of compute power. This fully custom-designed computer has been developed by a European collaboration composed of groups from INFN (Italy), ... More
Presented by Dr. Dirk PLEITER on 14 Feb 2006 at 16:00
Type: poster Session: Poster
Track: Event processing applications
DØ, one of the collider detectors at Fermilab's Tevatron, depends on efficient and pure b-quark identification for much of its high-pT physics program. DØ currently has two algorithms, one based on impact parameter and the other on explicit reconstruction of the B hadron's decay vertex. A third, combined algorithm is under development. DØ certifies all of its b-quark tagging algorithms befor ... More
Presented by Gordon WATTS on 15 Feb 2006 at 09:00
Type: poster Session: Poster
Track: Online Computing
cMsg is a highly extensible open-source framework within which one can deploy multiple underlying interprocess communication systems. It is powerful enough to support asynchronous publish/subscribe communications as well as synchronous peer-to-peer communications. It further includes a proxy system whereby client requests are transported to a remote server that actually connects to the underl ... More
Presented by Mr. Elliott WOLIN on 13 Feb 2006 at 11:00
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
For the last two years, the dCache/SRM Storage Element has been successfully integrated into the LCG framework and is in heavy production at several dozen sites, spanning a range from single-host installations up to those with several hundred terabytes of disk space, delivering more than 50 TBytes per day to clients. Based on the permanent feedback from our users and the detailed reports ... More
Presented by Dr. Patrick FUHRMANN on 13 Feb 2006 at 15:00
Type: oral presentation Session: Plenary
Track: Plenary
Presented by Dr. Tony HEY on 14 Feb 2006 at 11:00
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
We introduce the gPLAZMA (grid-aware PLuggable AuthoriZation MAnagement) Architecture. Our work is motivated by a need for fine-grained security (Role Based Access Control, or RBAC) in storage systems, and utilizes the VOMS-extended X.509 certificate specification for defining extra attributes (FQANs), based on RFC 3281. Our implementation, the gPLAZMA module for dCache, introduces Storage Authorization ... More
Presented by Abhishek Singh RANA on 15 Feb 2006 at 17:00
Type: oral presentation Session: Computing Facilities and Networking
Track: Computing Facilities and Networking
The openlab, created three years ago at CERN, was a novel concept: to involve leading IT companies in the evaluation and the integration of cutting-edge technologies or services, focusing on potential solutions for the LCG. The novelty lay in the duration of the commitment (three years during which companies provided a mix of in-kind and in-cash contributions), the level of the contributions a ... More
Presented by Mr. Francois FLUCKIGER on 13 Feb 2006 at 17:20
Type: poster Session: Poster
Track: Distributed Event production and processing
Server clustering is an effective method in increasing the pool of resources available to applications. Many clustering mechanisms exist; each with its own strengths as well as weaknesses. This paper describes the mechanism used by xrootd to provide a uniform data access space consisting of an unbounded number of independent distributed servers. We show how the mechanism is especially effectiv ... More
Presented by Andrew HANUSHEVSKY on 15 Feb 2006 at 09:00