-
Wolfgang von Rueden (CERN) - 27/09/2004, 09:00
-
David Williams - 27/09/2004, 09:30 - "Where are your Wares" - Computing in the broadest sense has a long history: Babbage (1791-1871), Hollerith (1860-1929), Zuse (1910-1995), many other early pioneers, and the wartime code breakers all made important breakthroughs. CERN was founded just as the first valve-based digital computers were coming onto the market. I will consider 50 years of computing at CERN from the...
-
A. Boehnlein (Fermi National Accelerator Laboratory) - 27/09/2004, 10:00 - In support of the Tevatron physics program, the Run II experiments have developed computing models and hardware facilities to support data sets at the petabyte scale, currently corresponding to 500 pb-1 of data and over 2 years of production operations. The systems are complete from online data collection to user analysis, and make extensive use of central services and common solutions...
-
N. Katayama (KEK) - 27/09/2004, 11:00 - The Belle experiment operates at the KEKB accelerator, a high-luminosity asymmetric-energy e+ e- machine. KEKB has achieved the world's highest luminosity of 1.39 x 10^34 cm^-2 s^-1. Belle accumulates more than 1 million B Bbar pairs in one good day. This corresponds to about 1.2 TB of raw data per day. The raw and processed data accumulated so far exceed 1.4 PB....
-
P. Elmer (Princeton University) - 27/09/2004, 11:30 - The BaBar experiment at SLAC studies B physics at the Upsilon(4S) resonance using the high-luminosity e+e- collider PEP-II at the Stanford Linear Accelerator Center (SLAC). Taking, processing and analyzing the very large data samples is a significant computing challenge. This presentation will describe the entire BaBar computing chain and illustrate the solutions chosen as well as...
-
M. Purschke (Brookhaven National Laboratory) - 27/09/2004, 12:00 - The concepts and technologies applied in data acquisition systems have changed dramatically over the past 15 years. Generic DAQ components and standards such as CAMAC and VME have largely been replaced by dedicated FPGA and ASIC boards, and dedicated real-time operating systems like OS-9 or VxWorks have given way to Linux-based trigger processor and event building farms. We have also...
-
J. Nogiec (Fermi National Accelerator Laboratory) - 27/09/2004, 14:00 - The paper describes a component-based framework for data stream processing that allows for configuration, tailoring, and run-time system reconfiguration. The system's architecture is based on a pipes-and-filters pattern, where data is passed through routes between components. Components process data and add, substitute, and/or remove named data items from a data stream. They can also...
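The pipes-and-filters pattern described in the abstract above can be illustrated with a minimal sketch; the function and field names below are hypothetical, not taken from the framework itself:

```python
# Minimal pipes-and-filters sketch: components transform a stream of
# named data items (dicts here) and may add, substitute, or remove items.
def scale_filter(item):
    # substitute: replace the raw 'adc' item with a calibrated 'energy' item
    out = dict(item)
    out["energy"] = out.pop("adc") * 0.5
    return out

def tag_filter(item):
    # add: attach a new named data item derived from the stream contents
    out = dict(item)
    out["tagged"] = out["energy"] > 10.0
    return out

def pipeline(stream, filters):
    # route each item through the configured sequence of components
    for item in stream:
        for f in filters:
            item = f(item)
        yield item

events = [{"adc": 30}, {"adc": 8}]
processed = list(pipeline(events, [scale_filter, tag_filter]))
```

Run-time reconfiguration in this toy model amounts to changing the filter list between (or even during) runs.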
-
G. Cancio (CERN) - 27/09/2004, 14:00 - This paper describes the evolution of fabric management at CERN's T0/T1 Computing Center, from the selection and adoption of prototypes produced by the European DataGrid (EDG) project [1] to enhancements made to them. In the last year of the EDG project, developers and service managers have been working to understand and solve operational and scalability issues. CERN has adopted and...
-
M. Branco (CERN) - 27/09/2004, 14:00 - As part of the ATLAS Data Challenge 2 (DC2), an automatic production system was introduced, and with it a new data management component. The data management tools used for previous Data Challenges were built as components separate from the existing Grid middleware. These tools relied on their own database, which acted as a replica catalog. With the extensive use of Grid technology...
-
Dr P. Bartalini (CERN) - 27/09/2004, 14:00 - In the framework of the LCG Simulation Project, we present the Generator Services sub-project, launched in 2003 under the oversight of the LHC Monte Carlo steering group (MC4LHC). The goal of the Generator Services sub-project is to guarantee physics generator support for the LHC experiments. Work is divided into four work packages: generator library; storage, event interfaces and...
-
T.M. Steinbeck (Kirchhoff Institute of Physics, Ruprecht-Karls-University Heidelberg, for the ALICE Collaboration) - 27/09/2004, 14:00 - The ALICE High Level Trigger (HLT) is foreseen to consist of a cluster of 400 to 500 dual-SMP PCs at the start-up of the experiment. Its input data rate can be up to 25 GB/s. This has to be reduced to at most 1.2 GB/s before the data is sent to the DAQ, through event selection, filtering, and data compression. For these processing purposes, the data is passed through the cluster in...
-
A. Ceseracciu (SLAC / INFN Padova) - 27/09/2004, 14:00 - Track 5 - Distributed Computing Systems and Experiences (oral presentation) - The Event Reconstruction Control System of the BaBar experiment was redesigned in 2002 to satisfy the following major requirements: flexibility and scalability. Because of its very nature, this system is continuously maintained to implement the changing policies typical of a complex, distributed production environment. In 2003, a major revolution in the BaBar computing model, the...
-
Tomasz Wlodek (BNL) - 27/09/2004, 14:20 - This presentation describes the experiences and the lessons learned by the RHIC/ATLAS Computing Facility (RACF) in building and managing its 2,700+ CPU (and growing) Linux farm over the past 6+ years. We describe how hardware cost, end-user needs, infrastructure, footprint, hardware configuration, vendor selection, software support and other considerations have played a role in...
-
Dr F. Beaudette (CERN) - 27/09/2004, 14:20 - An object-oriented FAst MOnte-Carlo Simulation (FAMOS) has recently been developed for CMS to allow rapid analyses of all final states envisioned at the LHC while keeping a high degree of accuracy for the detector material description and the related particle interactions. For example, the simulation of the material effects in the tracker layers includes charged-particle energy loss by...
-
M. Ernst (DESY) - 27/09/2004, 14:20 - The LHC needs to achieve reliable, high-performance access to vastly distributed storage resources across the network. USCMS has worked with Fermilab-CD and DESY-IT on a storage service that was deployed at several sites. It provides Grid access to heterogeneous mass storage systems and synchronization between them. It increases resiliency by insulating clients from storage and network...
-
J. Andreeva (UC Riverside) - 27/09/2004, 14:20 - Track 5 - Distributed Computing Systems and Experiences (oral presentation) - One of the goals of the CMS Data Challenge in March-April 2004 (DC04) was to run reconstruction for a sustained period at a 25 Hz input rate, with distribution of the produced data to CMS T1 centers for further analysis. The reconstruction was run at the T0 using CMS production software, of which the main components are RefDB (the CMS Monte Carlo 'Reference Database' with a Web interface) and McRunjob...
-
M. Sutton (University College London) - 27/09/2004, 14:20 - The architecture and performance of the ZEUS Global Track Trigger (GTT) are described. Data from the ZEUS silicon Micro Vertex Detector's HELIX readout chips, corresponding to 200k channels, are digitized by 3 crates of ADCs, and PowerPC VME board computers push cluster data for second-level trigger processing and strip data for event building via Fast and Gigabit Ethernet network...
-
R. Chytracek (CERN) - 27/09/2004, 14:20 - This paper describes the component model that has been developed in the context of the LCG/SEAL project. This component model is an attempt to handle the increasing complexity of the current data processing applications of the LHC experiments. In addition, it should facilitate software re-use through the integration of LCG and non-LCG software components into the experiment's...
-
A. Di Mattia (INFN) - 27/09/2004, 14:40 - The ATLAS Level-2 trigger provides a software-based event selection after the initial Level-1 hardware trigger. For muon events, the selection is decomposed into a number of broad steps: first, the Muon Spectrometer data are processed to give physics quantities associated with the muon track (standalone feature extraction); then, other detector data are used to refine the extracted...
-
G. Battistoni (INFN Milano, Italy) - 27/09/2004, 14:40 - The FLUKA Monte Carlo transport code is being used for different applications in High Energy, Cosmic Ray and Accelerator Physics. Here we review some of the ongoing projects which are based on this simulation tool. In particular, as far as accelerator physics is concerned, we wish to summarize the work in progress for the LHC and the CNGS project. From the point of view of experimental...
-
L. Goossens (CERN) - 27/09/2004, 14:40 - Track 5 - Distributed Computing Systems and Experiences (oral presentation) - In order to validate the Offline Computing Model and the complete software suite, ATLAS is running a series of Data Challenges (DC). The main goals of DC1 (July 2002 to April 2003) were the preparation and the deployment of the software required for the production of large event samples, and the production of those samples as a worldwide distributed activity. DC2 (May 2004 until...
-
L. Lueking (Fermilab) - 27/09/2004, 14:40 - A high-performance system has been assembled using standard web components to deliver database information to a large number (thousands) of broadly distributed clients. The CDF experiment at Fermilab is building processing centers around the world, imposing a high demand load on their database repository. For delivering read-only data, such as calibrations, trigger information and run...
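The idea of serving read-only database content (calibrations, trigger tables, run conditions) to many distributed clients through a caching tier can be sketched as a read-through cache; the names below are invented for illustration and are not the CDF system's actual design:

```python
# Read-through cache for read-only database data: the first request for a
# key goes to the central database, later requests are served locally.
class CalibCache:
    def __init__(self, fetch):
        self.fetch = fetch   # callable that queries the central database
        self.store = {}      # local cache, safe because the data is read-only
        self.misses = 0

    def get(self, key):
        if key not in self.store:
            self.misses += 1
            self.store[key] = self.fetch(key)
        return self.store[key]

def fetch_from_db(key):
    # stand-in for a real database query
    return {"calib": key.upper()}

cache = CalibCache(fetch_from_db)
a = cache.get("run123")   # miss: hits the database
b = cache.get("run123")   # hit: served from the cache
```

Because calibration data never changes once published, such caches need no invalidation protocol, which is what makes the web-proxy approach attractive.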
-
Don Petravick - 27/09/2004, 14:40 - As part of the DOE SciDAC "National Infrastructure for Lattice Gauge Computing" project, Fermilab builds and operates production clusters for lattice QCD simulations. We currently operate three clusters: a 128-node dual-Xeon Myrinet cluster, a 128-node Pentium 4E Myrinet cluster, and a 32-node dual-Xeon Infiniband cluster. We will discuss the operation of these systems and examine their...
-
S. Roiser (CERN) - 27/09/2004, 14:40 - The C++ programming language has very limited capabilities for reflection, i.e. for providing information about its objects at run time. In this paper a new reflection system is presented which allows complete introspection of C++ objects and has been developed in the context of the CERN/LCG/SEAL project in collaboration with the ROOT project. The reflection system consists of two different parts. The first...
-
S. Pardi (Dipartimento di Matematica ed Applicazioni "R. Caccioppoli") - 27/09/2004, 15:00 - Track 5 - Distributed Computing Systems and Experiences (oral presentation) - The standard procedures for the extraction of gravitational wave signals from coalescing binaries in the output signal of an interferometric antenna may require computing power generally not available in a single computing centre or laboratory. A way to overcome this problem is to use the computing power available in different places as a single, geographically...
-
M. Ye (Institute of High Energy Physics, Academia Sinica) - 27/09/2004, 15:00 - This article introduces an embedded Linux system based on a VME-series PowerPC, as well as the basic method for establishing the system. The goal of the system is to build a test system for VMEbus devices. It can also be used to set up data acquisition and control systems. Two types of compiler are provided by the development system according to the features of the system and the...
-
Dirk Duellmann - 27/09/2004, 15:00 - While there are differences among the LHC experiments in their views of the role of databases and their deployment, there is relatively widespread agreement on a number of principles: 1. Physics codes will need access to database-resident data. The need for database access is not confined to middleware and services: physics-related data will reside in databases. 2. ...
-
W. Lavrijsen (LBNL) - 27/09/2004, 15:00 - Python is a flexible, powerful, high-level language with excellent interactive and introspective capabilities and a very clean syntax. As such it can be a very effective tool for driving physics analysis. Python is designed to be extensible in low-level C-like languages, and its use as a scientific steering language has become quite widespread. To this end, existing and...
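As a toy illustration of the introspective steering style the abstract above alludes to, the sketch below discovers analysis steps by name and drives them over a set of events. All names here are invented; the actual bindings discussed in the talk expose C++ framework objects to Python:

```python
# Toy steering script: discover 'step*' methods via introspection and
# apply them in name order to each event; a step returning None drops
# the event from further processing.
import inspect

class Analysis:
    def step1_select(self, ev):
        # keep only events with positive energy
        return ev if ev["E"] > 0 else None

    def step2_scale(self, ev):
        # apply a (made-up) calibration factor
        ev["E"] *= 2.0
        return ev

def run(analysis, events):
    steps = sorted(
        (name, meth)
        for name, meth in inspect.getmembers(analysis, inspect.ismethod)
        if name.startswith("step")
    )
    kept = []
    for ev in events:
        for _, step in steps:
            ev = step(ev)
            if ev is None:
                break
        else:
            kept.append(ev)
    return kept

results = run(Analysis(), [{"E": 5.0}, {"E": -1.0}])
```

The steering logic never needs to know the concrete steps in advance; adding a `step3_*` method extends the chain without touching `run`, which is the flexibility that makes Python attractive as a driver.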
-
S. Thorn - 27/09/2004, 15:00 - ScotGrid is a prototype regional computing centre formed as a collaboration between the universities of Durham, Edinburgh and Glasgow as part of the UK's national particle physics grid, GridPP. We outline the resources available at the three core sites and our optimisation efforts for our user communities. We discuss the work which has been conducted in extending the centre to embrace new...
-
L. Pinsky (University of Houston) - 27/09/2004, 15:00 - The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. Here we review the progress achieved in the last year on the physics models. From the point of view of hadronic physics, most of the effort is still in the field of nucleus-nucleus...
-
J. Rodriguez (University of Florida) - 27/09/2004, 15:20 - The High Energy Physics Group at the University of Florida is involved in a variety of projects ranging from high energy experiments at hadron and electron-positron colliders to cutting-edge computer science experiments focused on grid computing. In support of these activities, members of the Florida group have developed and deployed a local computational facility which consists of...
-
Victor Serbo (AIDA) - 27/09/2004, 15:20 - AIDA, Abstract Interfaces for Data Analysis, is a set of abstract interfaces for data analysis components: histograms, ntuples, functions, fitters, plotters and other typical analysis categories. The interfaces are currently defined in Java, C++ and Python, and implementations exist in the form of libraries and tools using C++ (Anaphe/Lizard, OpenScientist), Java (Java Analysis Studio) and...
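The abstract-interface approach can be conveyed in miniature: user code programs against an interface, and concrete implementations remain interchangeable. The interface below is a made-up minimal example for illustration, not the actual AIDA definition:

```python
# A tiny abstract histogram interface and one concrete implementation;
# analysis code written against IHistogram1D works with any backend.
from abc import ABC, abstractmethod

class IHistogram1D(ABC):
    @abstractmethod
    def fill(self, x, weight=1.0): ...
    @abstractmethod
    def entries(self): ...

class ListHistogram(IHistogram1D):
    """One possible backend; another could wrap an external tool."""
    def __init__(self, nbins, lo, hi):
        self.nbins, self.lo, self.hi = nbins, lo, hi
        self.bins = [0.0] * nbins
        self._entries = 0

    def fill(self, x, weight=1.0):
        if self.lo <= x < self.hi:
            i = int((x - self.lo) / (self.hi - self.lo) * self.nbins)
            self.bins[i] += weight
        self._entries += 1   # out-of-range fills still count as entries

    def entries(self):
        return self._entries

h = ListHistogram(10, 0.0, 10.0)
for x in (0.5, 3.2, 9.9, 12.0):   # the last value falls outside the range
    h.fill(x)
```

Swapping `ListHistogram` for a different `IHistogram1D` implementation requires no change to the filling code, which is the portability the AIDA interfaces aim for across C++, Java and Python tools.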
-
O. Smirnova (Lund University, Sweden) - 27/09/2004, 15:20 - The NorduGrid middleware, ARC, has integrated support for querying and registering to data indexing services such as the Globus Replica Catalog and the Globus Replica Location Service. This support allows one to use these data indexing services for, for example, brokering during job submission, automatic registration of files and many other things. This integrated support is complemented by a...
-
Dr J. Apostolakis (CERN) - 27/09/2004, 15:20 - Geant4 is relied upon in production by an increasing number of HEP experiments and for applications in several other fields. Its capabilities continue to be extended as its performance and modelling are enhanced. This presentation will give an overview of recent developments in diverse areas of the toolkit. These will include, amongst others, the optimisation for complex setups...
-
P. Buncic (CERN) - 27/09/2004, 15:20 - Track 5 - Distributed Computing Systems and Experiences (oral presentation) - AliEn (ALICE Environment) is a Grid framework developed by the ALICE Collaboration and used in production for almost 3 years. From the beginning, the system was constructed using Web Services, standard network protocols and Open Source components. The main thrust of the development was on the design and implementation of an open and modular architecture. A large part of the component...
-
G. Chen (Computing Center, Institute of High Energy Physics, Chinese Academy of Sciences) - 27/09/2004, 15:20 - BES is an experiment at the Beijing Electron-Positron Collider (BEPC). The BES computing environment consists of a PC/Linux cluster and relies mainly on free software. OpenPBS and Ganglia are used as the job scheduling and monitoring systems. With help from the CERN IT Division, CASTOR was implemented as the storage management system. BEPC is being upgraded and the luminosity will increase one hundred times...
-
H. Kornmayer (Forschungszentrum Karlsruhe (FZK)) - 27/09/2004, 15:40 - Track 5 - Distributed Computing Systems and Experiences (oral presentation) - The observation of high-energy gamma rays with ground-based air Cherenkov telescopes is one of the most exciting areas in modern astroparticle physics. At the end of 2003 the MAGIC telescope started operation. The low energy threshold for gamma rays, together with different background sources, leads to a considerable amount of data. The analysis will be done in different institutes...
-
S. Canon (National Energy Research Scientific Computing Center) - 27/09/2004, 15:40 - Supporting multiple large collaborations on shared compute farms has typically resulted in divergent requirements from the users on the configuration of these farms. As the frameworks used by these collaborations are adapted to use Grids, this issue will likely have a significant impact on the effectiveness of Grids. To address these issues, a method was developed at Lawrence Berkeley...
-
J-P. Baud (CERN) - 27/09/2004, 15:40 - LCG-2 is the collective name for the set of middleware released for use on the LHC Computing Grid in December 2003. This middleware, based on LCG-1, already had several improvements in the Data Management area. These included the introduction of the Grid File Access Library (GFAL), a POSIX-like I/O interface, along with MSS integration via the Storage Resource...
-
H. Essel (GSI) - 27/09/2004, 15:40 - The GSI online-offline analysis system Go4 is a ROOT-based framework for medium-energy ion and nuclear physics experiments. Its main features are a multithreaded online mode with a non-blocking Qt GUI, and abstract user interface classes to set up the analysis process itself, which is organised as a list of subsequent analysis steps. Each step has its own event objects and a processor...
-
A. Ribon (CERN) - 27/09/2004, 15:40 - In the framework of the LCG Simulation Physics Validation Project, we present comparison studies between the GEANT4 and FLUKA shower packages and LHC sub-detector test-beam data. Emphasis is given to the response of LHC calorimeters to electrons, photons, muons and pions. Results of "simple-benchmark" studies, where the above simulation packages are compared to data from nuclear...
-
H-J. Mathes (Forschungszentrum Karlsruhe, Institut für Kernphysik) - 27/09/2004, 15:40 - S. Argirò (1), A. Kopmann (2), O. Martineau (2), H.-J. Mathes (2) for the Pierre Auger Collaboration; (1) INFN, Sezione Torino; (2) Forschungszentrum Karlsruhe. The Pierre Auger Observatory, currently under construction in Argentina, will investigate extensive air showers at energies above 10^18 eV. It consists of a ground array of 1600 Cherenkov water detectors and 24 fluorescence...
-
S. Burke (Rutherford Appleton Laboratory) - 27/09/2004, 16:30 - Track 5 - Distributed Computing Systems and Experiences (oral presentation) - The European DataGrid (EDG) project ran from 2001 to 2004, with the aim of producing middleware which could form the basis of a production Grid, and of running a testbed to demonstrate the middleware. HEP experiments (initially the four LHC experiments and subsequently BaBar and D0) were involved from the start in specifying requirements, and subsequently in evaluating the performance...
-
I. Sourikova (Brookhaven National Laboratory) - 27/09/2004, 16:30 - To benefit from substantial advancements in Open Source database technology and to ease deployment and development concerns with Objectivity/DB, the PHENIX experiment at RHIC is migrating its principal databases from Objectivity to a relational database management system (RDBMS). The challenge of designing a relational DB schema to store a wide variety of calibration classes was...
-
P. DeMar (FNAL) - 27/09/2004, 16:30 - Management of a large site network such as the FNAL LAN presents many technical and organizational challenges. This highly dynamic network consists of around 10 thousand network nodes. The nature of the activities FNAL is involved in, and its computing policy, require that the network remain as open as reasonably possible, both in terms of connectivity to outside networks and with respect...
-
G. B. Barrand (CNRS / IN2P3 / LAL) - 27/09/2004, 16:30 - We want to present the status of this project. After quickly recalling the basic choices around GUI, visualization and scripting, we would like to describe what has been done in order to have an AIDA-3.2.1 compliant system, to visualize Geant4 data (G4Lab module), to visualize ROOT data (Mangrove module), to have a HippoDraw module, and what has been done in order to run on MacOS X...
-
Prof. V. Ivantchenko (CERN, ESA) - 27/09/2004, 16:30 - We will summarize the recent and current activities of the Geant4 working group responsible for the standard package of electromagnetic physics. The major recent activities include a design iteration in the energy loss and multiple scattering domain providing a "process versus models" approach, and development of the following physics models: multiple scattering, ultra-relativistic muon...
-
A. Hanushevsky (SLAC) - 27/09/2004, 16:30 - As the BaBar experiment shifted its computing model to a ROOT-based framework, we undertook the development of a high-performance file server as the basis for a fault-tolerant storage environment whose ultimate goal was to minimize job failures due to server failures. Capitalizing on our five years of experience with extending Objectivity's Advanced Multithreaded Server (AMS), elements...
-
M. Schulz (CERN) - 27/09/2004, 16:50 - Track 5 - Distributed Computing Systems and Experiences (oral presentation) - LCG2 is a large-scale production grid formed by more than 40 worldwide distributed sites. The aggregated number of CPUs exceeds 3000, and several MSS systems are integrated in the system. Almost all sites form an independent administrative domain. On most of the larger sites the local computing resources have been integrated into the grid. The system has been used for large-scale...
-
J. VanWezel (Forschungszentrum Karlsruhe) - 27/09/2004, 16:50 - The HEP experiments that use the regional center GridKa will handle large amounts of data. Traditional access methods via local disks or large network storage servers show limitations in size, throughput or data management flexibility. High-speed interconnects like Fibre Channel, iSCSI or Infiniband, as well as parallel file systems, are becoming increasingly important in large cluster...
-
M.G. Pia (INFN Genova) - 27/09/2004, 16:50 - Various experimental configurations, such as some gaseous detectors, require a high-precision simulation of electromagnetic physics processes, accounting not only for the primary interactions of particles with matter, but also capable of describing the secondary effects deriving from the de-excitation of atoms where primary collisions may have created vacancies. The...
-
E. Hjort (Lawrence Berkeley Laboratory) - 27/09/2004, 16:50 - The STAR experiment utilizes two major computing facilities for its data processing needs: the RCF at Brookhaven and the PDSF at LBNL/NERSC. The sharing of data between these facilities utilizes data grid services for file replication, and the deployment of these services was accomplished in conjunction with the Particle Physics Data Grid (PPDG). For STAR's 2004 run it will be...
-
P. Calafiura (LBNL) - 27/09/2004, 16:50 - Athena is the ATLAS control framework, based on the common Gaudi architecture originally developed by LHCb. In 2004 two major production efforts, the Data Challenge 2 and the Combined Test-beam reconstruction and analysis, were structured as Athena applications. To support the production work we have added new features to both Athena and Gaudi: an "Interval of Validity" service to manage...
-
D. Winter (Columbia University) - 27/09/2004, 16:50 - The PHENIX detector consists of 14 detector subsystems. It is designed such that individual subsystems can be read out independently in parallel as well as a single unit. The DAQ used to read the detector is a highly pipelined parallel system. Because PHENIX is interested in rare physics events, the DAQ is required to have a fast trigger, deep buffering, and very high bandwidth. The...
-
O. Tatebe (Grid Technology Research Center, AIST) - 27/09/2004, 17:10 - Gfarm v2 is designed to facilitate reliable file sharing and high-performance distributed and parallel data computing in a Grid across administrative domains by providing a Grid file system. A Grid file system is a virtual file system that federates multiple file systems. It is possible to share files or data by mounting the virtual file system. This paper discusses the design...
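The notion of a virtual file system federating several underlying file systems can be conveyed with a toy resolver. The sketch below (invented names, dict-backed "file systems") only illustrates the longest-prefix mount idea, not Gfarm's actual design:

```python
# Toy federated namespace: virtual path prefixes are mapped onto backend
# file systems (plain dicts here), and reads resolve through the mapping.
class VirtualFS:
    def __init__(self):
        self.mounts = {}          # virtual prefix -> backend store

    def mount(self, prefix, backend):
        self.mounts[prefix] = backend

    def read(self, vpath):
        # longest-prefix match selects the backing file system
        for prefix in sorted(self.mounts, key=len, reverse=True):
            if vpath.startswith(prefix):
                rel = vpath[len(prefix):].lstrip("/")
                return self.mounts[prefix][rel]
        raise FileNotFoundError(vpath)

site_a = {"run1.dat": b"events-a"}        # stand-in for one site's storage
site_b = {"calib.db": b"constants-b"}     # stand-in for another site's storage
vfs = VirtualFS()
vfs.mount("/grid/siteA", site_a)
vfs.mount("/grid/siteB", site_b)
data = vfs.read("/grid/siteA/run1.dat")
```

Clients see one namespace regardless of where the backing storage lives, which is the property that makes cross-domain file sharing transparent.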
-
Dr T. Koi (SLAC) - 27/09/2004, 17:10 - The transport of ions in matter is a subject of much interest not only in high-energy ion-ion collider experiments such as RHIC and the LHC, but also in many other fields of science, engineering and medicine. Geant4 is a toolkit for the simulation of the passage of particles through matter, and its OO design makes it easy to extend its capability to ion transport. To simulate ions...
-
Ofer Rind - 27/09/2004, 17:10 - Providing Grid applications with effective access to large volumes of data residing on a multitude of storage systems with very different characteristics prompted the introduction of storage resource managers (SRMs). Their purpose is to provide consistent and efficient wide-area access to storage resources unconstrained by their particular implementation (tape, large disk arrays,...
-
F. Carminati (CERN) - 27/09/2004, 17:10 - The ALICE collaboration at the LHC has been developing an OO offline framework, written entirely in C++, since 1998. In 2001 a Grid system (AliEn, the ALICE Environment) was added and successfully integrated with ROOT and the offline framework. The resulting combination allows ALICE to do most of the design of the detector and test the validity of its computing model by performing large-scale Data...
-
D. Chapin (Brown University) - 27/09/2004, 17:10 - The DZERO Level 3 Trigger and Data Acquisition (L3DAQ) system has been running continuously since spring 2002. DZERO is located at one of the two interaction points in the Fermilab Tevatron Collider. The L3DAQ moves front-end readout data from VME crates to a trigger processor farm. It is built upon a Cisco 6509 Ethernet switch, standard PCs, and commodity VME single-board computers. We...
-
R. Pordes (Fermilab) - 27/09/2004, 17:10 - Track 5 - Distributed Computing Systems and Experiences (oral presentation) - The U.S. LHC Tier-1 and Tier-2 laboratories and universities are developing production Grids to support LHC applications running across a worldwide Grid computing system. Together with partners in computer science, physics grid projects and running experiments, we will build a common national production grid infrastructure which is open in its architecture, implementation and use. The...
-
27/09/2004, 17:30 - A version of the Bertini cascade model for hadronic interactions is part of the Geant4 toolkit, and may be used to simulate pion-, proton-, and neutron-induced reactions in nuclei. It is typically valid for incident energies of 10 GeV and below, making it especially useful for the simulation of hadronic calorimeters. In order to generate the intra-nuclear cascade, the code depends...
-
I. Osborne (Northeastern University, Boston, USA) - 27/09/2004, 17:30 - We present a composite framework which exploits the advantages of the CMS data model and uses a novel approach for building CMS simulation, reconstruction, visualisation and future analysis applications. The framework exploits LCG SEAL and CMS COBRA plug-ins and extends the COBRA framework to pass communications between the GUI and event threads, using SEAL callbacks to navigate...
-
C. Cioffi (Oxford University) - 27/09/2004, 17:30 - The LHCb experiment needs to store all the information about the datasets and their processing history, both for data recorded from particle collisions at the LHC collider at CERN and for simulated data. To achieve this functionality, a design based on data warehousing techniques was chosen, where several user services can be implemented and optimized individually without...
-
Y. Cheng (Computing Center, Institute of High Energy Physics, Chinese Academy of Sciences) - 27/09/2004, 17:30 - With the development of Linux and the improvement of PC performance, PC clusters used as high-performance computing systems are becoming increasingly popular. The performance of the I/O subsystem and the cluster file system is critical to a high-performance computing system. In this work the basic characteristics of cluster file systems and their performance are reviewed. The performance of four...
-
M. Zurek (CERN, IFJ Krakow) - 27/09/2004, 17:30 - The talk presents the experience gathered during the administration of the testbed (~100 PCs and 15+ switches) for the ATLAS experiment at CERN. It covers the techniques used to resolve HW/SW conflicts and network-related problems, the automatic installation and configuration of the cluster nodes, as well as system/service monitoring in the heterogeneous, dynamically changing...
-
S. Dasu (University of Wisconsin) - 27/09/2004, 17:30 - Track 5 - Distributed Computing Systems and Experiences (oral presentation) - The University of Wisconsin distributed computing research groups developed a software system called Condor for high-throughput computing using commodity hardware. An adaptation of this software, Condor-G, is part of the Globus grid computing toolkit. However, the original Condor has additional features that allow building an enterprise-level grid. Several UW departments have Condor computing...
-
A. Lyon (Fermi National Accelerator Laboratory) - 27/09/2004, 17:50 - Track 5 - Distributed Computing Systems and Experiences (oral presentation) - The SAMGrid team has recently refactored its test harness suite for greater flexibility and easier configuration. This makes possible more interesting applications of the test harness, for component tests, integration tests, and stress tests. We report on the architecture of the test harness and its recent application to stress tests of a new analysis cluster at Fermilab, to explore...
-
T. Mkrtchyan (DESY) - 27/09/2004, 17:50 - After the successful implementation and deployment of the dCache system over the last years, one of the additionally required services, the namespace service, is faced with additional and completely new requirements. Most of these are caused by the scaling of the system, the integration with Grid services, and the need for redundant (high-availability) configurations. The existing system, having only...
-
M. Kosov (CERN) - 27/09/2004, 17:50 - Quark-gluon strings are usually fragmented on the light cone into hadrons (PYTHIA, JETSET) or into small hadronic clusters which decay into hadrons (HERWIG). In both cases the transverse momentum distribution is parameterized as an unknown function. In CHIPS the colliding hadrons stretch Pomeron ladders to each other and, when the Pomeron ladders meet in rapidity space, they create Quasmons...
-
K. Nienartowicz (CERN)27/09/2004, 17:50Data management is one of the cornerstones in the distributed production computing environment that the EGEE project aims to provide for a European e-Science infrastructure. We have designed a set of services based on previous experience in other Grid projects, trying to address the requirements of our user communities. In this paper we summarize the most fundamental requirements and...Go to contribution page
-
T. DeYoung (UNIVERSITY OF MARYLAND)27/09/2004, 17:50IceCube is a cubic kilometer-scale neutrino telescope under construction at the South Pole. The minimalistic nature of the instrument poses several challenges for the software framework. Events occur at random times, and frequently overlap, requiring some modifications of the standard event-based processing paradigm. Computational requirements related to modeling the detector medium...Go to contribution page
-
M. Dobson (CERN)27/09/2004, 17:50The ATLAS collaboration had a Combined Beam Test from May until October 2004. Collection and analysis of data required integration of several software systems that are developed as prototypes for the ATLAS experiment, due to start in 2007. Eleven different detector technologies were integrated with the Data Acquisition system and were taking data synchronously. The DAQ was integrated...Go to contribution page
-
A. Shevel (STATE UNIVERSITY OF NEW YORK AT STONY BROOK)27/09/2004, 18:10Track 5 - Distributed Computing Systems and Experiences, oral presentationThe PHENIX collaboration records large volumes of data for each experimental run (now about 1/4 PB/year). Efficient and timely analysis of this data can benefit from a framework for distributed analysis via a growing number of remote computing facilities in the collaboration. The grid architecture has been, or is being, deployed at most of these facilities. The experience being...Go to contribution page
-
G. Unel (UNIVERSITY OF CALIFORNIA AT IRVINE AND CERN)27/09/2004, 18:10The ATLAS Trigger and DAQ system is designed to use the Region of Interest (RoI) mechanism to reduce the initial Level 1 trigger rate of 100 kHz down to an Event Building rate of about 3.3 kHz. The DataFlow component of the ATLAS TDAQ system is responsible for the reading of the detector-specific electronics via 1600 point-to-point readout links, the collection and provision of RoI to the...Go to contribution page
-
R. Kennedy (FERMI NATIONAL ACCELERATOR LABORATORY)27/09/2004, 18:10SAMGrid is the shared data handling framework of the two large Fermilab Run II collider experiments: DZero and CDF. In production since 1999 at D0, and since mid-2004 at CDF, the SAMGrid framework has been adapted over time to accommodate a variety of storage solutions and configurations, as well as the differing data processing models of these two experiments. This has been...Go to contribution page
-
Dr P. Spentzouris (FERMI NATIONAL ACCELERATOR LABORATORY)27/09/2004, 18:10Computer simulations play a crucial role in both the design and operation of particle accelerators. General tools for modeling single-particle accelerator dynamics have been in wide use for many years. Multi-particle dynamics are much more computationally demanding than single-particle dynamics, requiring supercomputers or parallel clusters of PCs. Because of this, simulations of...Go to contribution page
-
L. Nellen (I. DE CIENCIAS NUCLEARES, UNAM)27/09/2004, 18:10The Pierre Auger Observatory is designed to unveil the nature and the origin of the highest energy cosmic rays. Two sites, one currently under construction in Argentina, and another pending in the Northern hemisphere, will observe extensive air showers using a hybrid detector comprising a ground array of 1600 water Cerenkov tanks overlooked by four atmospheric fluorescence detectors. ...Go to contribution page
-
Les Robertson (CERN)28/09/2004, 08:30The talk will cover briefly the current status of the LHC Computing Grid project and will discuss the main challenges facing us as we prepare for the startup of LHC.Go to contribution page
-
I. Bird (CERN)28/09/2004, 09:00In September 2003 the first LCG-1 service was put into production at most of the large Tier 1 sites and was quickly expanded up to 30 Tier 1 and Tier 2 sites by the end of the year. Several software upgrades were made and the LCG-2 service was put into production in time for the experiment data challenges that began in February 2004 and continued for several months. In particular...Go to contribution page
-
28/09/2004, 09:30The U.S. Trillium Grid projects, in collaboration with High Energy Experiment groups from the Large Hadron Collider (LHC) experiments ATLAS and CMS, Fermilab's BTeV, members of the LIGO and SDSS collaborations, and groups from other scientific disciplines and computational centers, have deployed a multi-VO, application-driven grid laboratory ("Grid3"). The grid laboratory has sustained for several...Go to contribution page
-
Z. Toteva (Sofia University/CERN/CMS)28/09/2004, 10:00We describe a database solution in a web application to centrally manage the configuration information of computer systems. It extends the modular cluster management tool Quattor with a user friendly web interface. System configurations managed by Quattor are described with the aid of PAN, a declarative language with a command line and a compiler interface. Using a relational schema,...Go to contribution page
-
A. Bobyshev (FERMILAB)28/09/2004, 10:00In a large campus network, such as Fermilab's with its ten thousand nodes, scanning initiated from either outside of or within the campus network raises security concerns, may have a very serious impact on network performance, and can even disrupt the normal operation of many services. In this paper we introduce a system for detecting and automatically blocking excessive traffic of different nature: scanning,...Go to contribution page
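The detection side of such a system can be sketched with a simple flow-counting heuristic (a generic illustration, not the Fermilab implementation; all names and the threshold are hypothetical): a source that touches many distinct destination/port pairs is flagged as a likely scanner.

```python
# Hypothetical sketch of scan detection from flow records: a source
# contacting many distinct (destination, port) pairs is flagged.
from collections import defaultdict

SCAN_THRESHOLD = 100  # distinct (dst, port) pairs before flagging; arbitrary

def detect_scanners(flows, threshold=SCAN_THRESHOLD):
    """flows: iterable of (src, dst, dport) tuples from a time window."""
    touched = defaultdict(set)
    for src, dst, dport in flows:
        touched[src].add((dst, dport))
    # Return the set of sources exceeding the threshold.
    return {src for src, seen in touched.items() if len(seen) >= threshold}
```

A real deployment would evaluate this per time window and feed the flagged sources to an automatic blocking mechanism (e.g. router ACL updates).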
-
Martin Purschke28/09/2004, 10:00With the improvements in CPU and disk speed over the past years, we were able to exceed the original design data logging rate of 40MB/s by a factor of 3 already for Run 3 in 2002. For Run 4 in 2003, we increased the raw disk logging capacity further to about 400MB/s. Another major improvement was the implementation of compressed data logging. The PHENIX raw data, after...Go to contribution page
-
M. Guijarro (CERN)28/09/2004, 10:00There are two cluster architecture approaches used at CERN to provide central CVS services. The first one (http://cern.ch/cvs) depends on AFS for central storage of repositories and offers automatic load-balancing and fail-over mechanisms. The second one (http://cern.ch/lcgcvs) is an N + 1 cluster based on local file systems, using data replication and not relying on AFS. It does not...Go to contribution page
-
Martin Purschke28/09/2004, 10:00The PHENIX DAQ system is managed by a control system responsible for the configuration and monitoring of the PHENIX detector hardware and readout software. At its core, the control system, called Runcontrol, is a set of processes that manages the various components by way of a distributed architecture using CORBA; it manages virtually...Go to contribution page
-
J. Schmidt (Fermilab)28/09/2004, 10:00Email is an essential part of daily work. The FNAL gateways process in excess of 700,000 messages per week. Among those messages are many containing viruses and unwanted spam. This paper outlines the FNAL email system configuration. We will discuss how we have designed our systems to provide optimum uptime as well as protection against viruses, spam and unauthorized users.Go to contribution page
-
L. Lisa Giacchetti (FERMILAB)28/09/2004, 10:00The scalable serving of shared filesystems across large clusters of computing resources continues to be a difficult problem in high energy physics computing. The US CMS group at Fermilab has performed a detailed evaluation of hardware and software solutions to allow filesystem access to data from computing systems. The goal of the evaluation was to arrive at a solution that was able...Go to contribution page
-
S. Kolos (CERN)28/09/2004, 10:00As modern High Energy Physics (HEP) experiments require more distributed computing power to fulfill their demands, the need for efficient distributed online services for control, configuration and monitoring in such experiments becomes increasingly important. This paper describes the experience of using standard Common Object Request Broker Architecture (CORBA) middleware for...Go to contribution page
-
J. Fromm (Fermilab)28/09/2004, 10:00The NGOP Monitoring Project at FNAL has developed a package which has demonstrated the capability to efficiently monitor tens of thousands of entities on thousands of hosts, and has been in operation for over 4 years. The project has met the majority of its initial requirements, and also the majority of the requirements discovered along the way. This paper will describe what worked, and...Go to contribution page
-
S. Jarp (CERN)28/09/2004, 10:00In 1995 I predicted that the dual-processor PC would start invading HEP computing and a couple of years later the x86-based PC was omnipresent in our computing facilities. Today, we cannot imagine HEP computing without thousands of PCs at the heart. This talk will look at some of the reasons why we may one day be forced to leave this sweet-spot. This would be not because we (the HEP...Go to contribution page
-
F.M. Taurino (INFM - INFN)28/09/2004, 10:00The "gridification" of a computing farm is usually a complex and time-consuming task. Operating system installation, grid-specific software, and configuration file customization can turn into a large problem for site managers. This poster introduces InGRID, a solution used to install and maintain grid software on small/medium-size computing farms. Grid elements installation with InGRID...Go to contribution page
-
G. Sun (INSTITUTE OF HIGH ENERGY PHYSICS)28/09/2004, 10:00There are several on-going experiments at IHEP, such as BES, YBJ, and the CMS collaboration with CERN. Each experiment has its own computing system, and these computing systems run separately. This leads to a very low CPU utilization due to the different usage periods of each experiment. Grid technology is a very good candidate for integrating these separate computing systems into a "single...Go to contribution page
-
H. Schwarthoff (CORNELL UNIVERSITY)28/09/2004, 10:00The CLEO collaboration at the Cornell electron positron storage ring CESR has completed its transition to the CLEO-c experiment. This new program contains a wide array of Physics studies of $e^+e^-$ collisions at center of mass energies between 3 GeV and 5 GeV. New challenges await the CLEO-c Online computing system, as the trigger rates are expected to rise from < 100 Hz to around...Go to contribution page
-
N. Hoeimyr (CERN IT)28/09/2004, 10:00The Product Support (PS) group of the IT department at CERN distributes and supports more than one hundred different software packages, ranging from tools for computer aided design, field calculations, mathematical and structural analysis to software development. Most of these tools, which are used on a variety of Unix and Windows platforms by different user populations, are...Go to contribution page
-
A. Bobyshev (FERMILAB)28/09/2004, 10:00Network flow data gathered on border routers and core network switch/routers is used at Fermilab for statistical analysis of traffic patterns, passive network monitoring, and estimation of network performance characteristics. Flow data is also a critical tool in the investigation of computer security incidents. Development and enhancement of flow-based tools is an on-going effort. The...Go to contribution page
-
I. Sfiligoi (INFN Frascati)28/09/2004, 10:00CDF is deploying a version of its analysis facility (CAF) at several globally distributed sites. On top of the hardware at each of these sites is either an FBSNG or Condor batch manager and a SAM data handling system which in some cases also makes use of dCache. The jobs which run at these sites also make use of a central database located at Fermilab. Each of these systems has its own...Go to contribution page
-
N. Katayama (KEK)28/09/2004, 10:00The Belle experiment has accumulated an integrated luminosity of more than 240 fb-1 so far, and the daily logged luminosity now exceeds 800 pb-1. These numbers correspond to more than 1 PB of raw and processed data stored on tape and an accumulation of raw data at the rate of 1 TB/day. To meet these storage demands, a new cost-effective, compact hierarchical mass storage system has...Go to contribution page
-
Martin Purschke28/09/2004, 10:00The PHENIX experiment consists of many different detectors and detector types, each one with its own needs concerning the monitoring of data quality and calibration. To ease the task of the shift crew in monitoring the performance and status of each subsystem in PHENIX, we developed a general client-server based framework which delivers events at a rate in excess of 100 Hz....Go to contribution page
-
S. Nemnyugin (ASSOCIATE PROFESSOR)28/09/2004, 10:00We report the results of parallelization and tests of the Parton String Model event generator on the parallel cluster of the St. Petersburg State University Telecommunication Center. Two schemes of parallelization were studied. In the first approach, a master process coordinates the work of slave processes, and gathers and analyzes the data. Results of the MC calculations are saved in local files. Local...Go to contribution page
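The first parallelization scheme described above, a master process farming Monte Carlo work out to slaves and gathering their results, can be sketched as follows. This is a generic MC toy (estimating pi), not the Parton String Model code; all names are hypothetical.

```python
# Hypothetical master/slave sketch: the master distributes MC work items
# to slave processes, then gathers and combines their partial results.
import random
from multiprocessing import Pool

def slave_task(args):
    """One slave: run a small MC estimate (hits in the unit quarter-circle)."""
    seed, n_samples = args
    rng = random.Random(seed)  # independent stream per slave
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return hits, n_samples

def master(n_slaves=4, n_samples=50_000):
    """Master: farm out work, gather results, combine the statistics."""
    work = [(seed, n_samples) for seed in range(n_slaves)]
    with Pool(n_slaves) as pool:
        results = pool.map(slave_task, work)   # gather phase
    total_hits = sum(h for h, _ in results)
    total_n = sum(n for _, n in results)
    return 4.0 * total_hits / total_n          # combined pi estimate

if __name__ == "__main__":
    print(master())
```

The second scheme mentioned in the abstract (results saved in local files) would simply replace the in-memory gather with per-slave output files that the master merges afterwards.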
-
J. Schmidt (Fermilab)28/09/2004, 10:00FNAL has over 5000 PCs running either Linux or Windows software. Protecting these systems efficiently against the latest vulnerabilities that arise has prompted FNAL to take a more central approach to patching systems. We outline the lab support structure for each OS and how we have provided a central solution that works within existing support boundaries. The paper will cover how we...Go to contribution page
-
P. Conde MUINO (CERN)28/09/2004, 10:00During the runtime of any experiment, a central monitoring system that detects problems as soon as they appear has an essential role. In a large experiment, like Atlas, the online data acquisition system is distributed across the nodes of large farms, each of them running several processes that analyse a fraction of the events. In this architecture, it is necessary to have a central...Go to contribution page
-
A. Eleuteri (DIPARTIMENTO DI SCIENZE FISICHE - UNIVERSITÀ DI NAPOLI FEDERICO II)28/09/2004, 10:00In this paper we examine the performance of the raw Ethernet protocol in deterministic, low-cost, real-time communication. Very few applications have been reported until now, and they focus on the use of the TCP and UDP protocols, which however add considerable overhead to the communication and reduce the useful bandwidth. We show how low-level Ethernet access can be used for...Go to contribution page
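As an illustration of what low-level Ethernet access means, the sketch below hand-builds a raw Ethernet II frame: a 14-byte header (destination MAC, source MAC, EtherType) followed directly by the payload, with no IP/TCP/UDP headers on top. Actually sending it would require a raw socket (e.g. Linux AF_PACKET) and root privileges, omitted here; the EtherType 0x88B5 (local experimental) is an assumption, not taken from the paper.

```python
# Hypothetical sketch: framing data directly at the Ethernet layer,
# avoiding the per-packet overhead of the TCP/UDP/IP stack.
import struct

ETHERTYPE_EXPERIMENTAL = 0x88B5  # IEEE "local experimental" EtherType

def ethernet_frame(dst_mac: bytes, src_mac: bytes, ethertype: int,
                   payload: bytes) -> bytes:
    """Build an Ethernet II frame: 6B dst MAC + 6B src MAC + 2B EtherType."""
    header = struct.pack("!6s6sH", dst_mac, src_mac, ethertype)
    return header + payload
```

The useful-bandwidth argument in the abstract follows directly: only 14 bytes of header per frame, versus the additional 20+ bytes each for IP and TCP.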
-
28/09/2004, 10:00The CLEO III data acquisition system was designed from the beginning, in the late 90s, to allow remote operation and monitoring of the experiment. Changes in the coordination and operation of the CLEO experiment two years ago enabled us to separate the tasks of the shift crew into an operational and a physics task, and existing remote capabilities have since been revisited. In 2002/03 CLEO started to...Go to contribution page
-
A. Garcia (KARLSRUHE RESEARCH CENTER (FZK))28/09/2004, 10:00The clusters using DataGrid middleware are usually installed and managed by means of an "LCFG" server. Originally developed by the University of Edinburgh and extended by DataGrid, this is a complex piece of software. It allows for automated installation and configuration of a complete grid site. However, installation of the "LCFG" server takes most of the time, thus hindering widespread...Go to contribution page
-
V. GAUTARD (CEA-SACLAY)28/09/2004, 10:00ATLAS is a particle detector which is being built at CERN in Geneva. The muon detection system is made up, among other things, of 600 chambers measuring 2 to 6 m2 and 30 cm thick. The chambers' positions must be known with an accuracy of +/-30 μm for translations and +/-100 μrad for rotations, over a range of +/-5 mm and +/-5 mrad. In order to fulfill these requirements, we have...Go to contribution page
-
G. unel (UNIVERSITY OF CALIFORNIA AT IRVINE AND CERN)28/09/2004, 10:00The 40 MHz collision rate at the LHC produces ~25 interactions per bunch crossing within the ATLAS detector, resulting in terabytes of data per second to be handled by the detector electronics and the trigger and DAQ system. A Level 1 trigger system based on custom designed and built electronics will reduce the event rate to 100 kHz. The DAQ system is responsible for the readout of the...Go to contribution page
-
Ian FISK (FNAL)28/09/2004, 10:00US-CMS is building up expertise at regional centers in preparation for the analysis of LHC data. The User Analysis Farm (UAF) is part of the Tier 1 facility at Fermilab. The UAF is being developed to support the efforts of the Fermilab LHC Physics Center (LPC) and to enable efficient analysis of CMS data in the US. The support, infrastructure, and services to enable a local analysis...Go to contribution page
-
28/09/2004, 10:00The CDF Analysis Facility (CAF) has been in use since April 2002 and has successfully served hundreds of users on thousands of CPUs. The original CAF used FBSNG as its batch manager. In the current trend toward multi-site deployment, FBSNG was found to be a limiting factor, so the CAF has been reimplemented to use Condor instead. Condor is a more widely used batch system and is well integrated...Go to contribution page
-
I. Soloviev (CERN/PNPI)28/09/2004, 10:00The ATLAS data acquisition system uses the database to describe configurations for different types of data taking runs and different sub-detectors. Such configurations are composed of complex data objects with many inter-relations. During the DAQ system initialisation phase the configurations database is simultaneously accessed by a large number of processes. It is also required that such...Go to contribution page
-
A. Martin (QUEEN MARY, UNIVERSITY OF LONDON)28/09/2004, 10:00We describe our experience in building a cost efficient High Throughput Cluster (HTC) using commodity hardware and free software within a university environment. Our HTC has a modular system architecture and is designed to be upgradable. The current, second phase configuration, consists of 344 processors and 20 Tbyte of RAID storage. In order to rapidly install and upgrade software,...Go to contribution page
-
O. Schneider (FZK)28/09/2004, 10:00A central idea of Grid Computing is the virtualization of heterogeneous resources. To meet this challenge the Institute for Scientific Computing, IWR, has started the project CampusGrid. Its medium term goal is to provide a seamless IT environment supporting the on-site research activities in physics, bioinformatics, nanotechnology and meteorology. The environment will include all...Go to contribution page
-
Alan Tackett28/09/2004, 10:00Protein analysis, imaging, and DNA sequencing are some of the branches of biology where growth has been enabled by the availability of computational resources. With this growth, biologists face an associated need for reliable, flexible storage systems. For decades the HEP community has been driving the development of such storage systems to meet their own needs. Two of these systems -...Go to contribution page
-
A. Bobyshev (FERMILAB)28/09/2004, 10:00The Compact Muon Solenoid (CMS) experiment at CERN's Large Hadron Collider (LHC) is scheduled to come on-line in 2007. Fermilab will act as the CMS Tier-1 center for the US and make experiment data available to more than 400 researchers in the US participating in the CMS experiment. The US CMS Users Facility group, based at Fermilab, has initiated a project to develop a model for...Go to contribution page
-
M. Ellisman (National Center for Microscopy and Imaging Research of the Center for Research in Biological Systems - The Department of Neurosciences, University of California San Diego School of Medicine - La Jolla, California - USA)28/09/2004, 11:00The grand goal in neuroscience research is to understand how the interplay of structural, chemical and electrical signals in nervous tissue gives rise to behavior. Experimental advances of the past decades have given the individual neuroscientist an increasingly powerful arsenal for obtaining data, from the level of molecules to nervous systems. Scientists have begun the arduous and...Go to contribution page
-
David Kelsey (RAL)28/09/2004, 11:30The aim of Grid computing is to enable the easy and open sharing of resources between large and highly distributed communities of scientists and institutes across many independent administrative domains. Convincing site security officers and computer centre managers to allow this to happen in view of today's ever-increasing Internet security problems is a major challenge. Convincing...Go to contribution page
-
Ken Peach (RAL)28/09/2004, 12:00Just as the development of the World Wide Web has had its greatest impact outside particle physics, so it will be with the development of the Grid. E-science, of which the Grid is just a small part, is already making a big impact upon many scientific disciplines, and facilitating new scientific discoveries that would be difficult to achieve in any other way. Key to this is the...Go to contribution page
-
Max Lemke28/09/2004, 12:30The European Grid Research vision as set out in the Information Society Technologies Work Programmes of the EU's Sixth Research Framework Programme is to advance, consolidate and mature Grid technologies for widespread e-science, industrial, business and societal use. A batch of Grid research projects with 52 Million EUR EU support was launched during the European Grid Technology Days 15...Go to contribution page
-
Miron Livny (Wisconsin)29/09/2004, 08:30In the 18 months since the CHEP03 meeting in San Diego, the HEP community has deployed the current generation of grid technologies in a variety of settings. Legacy software as well as recently developed applications were interfaced with middleware tools to deliver end-to-end capabilities to HEP experiments in different stages of their life cycles. In a series of data challenges,...Go to contribution page
-
Andrew Sutherland (ORACLE)29/09/2004, 09:00Dr Sutherland will review the evolution of computing over the past decade, focusing particularly on the development of the database and middleware from client server to Internet computing. But what are the next steps from the perspective of a software company? Dr Sutherland will discuss the development of Grid as well as the future applications revolving around collaborative...Go to contribution page
-
Jai Menon (IBM)29/09/2004, 09:30In this talk, we will discuss the future of storage systems. In particular, we will focus on several big challenges which we are facing in storage, such as being able to build, manage and backup really massive storage systems, being able to find information of interest, being able to do long-term archival of data, and so on. We also present ideas and research being done to address...Go to contribution page
-
T. Coviello (INFN Via E. Orabona 4 I - 70126 Bari Italy)29/09/2004, 10:00A grid system is a set of heterogeneous computational and storage resources, distributed on a large geographic scale, which belong to different administrative domains and serve several different scientific communities named Virtual Organizations (VOs). A virtual organization is a group of people or institutions which collaborate to achieve common objectives. Therefore such a system has...Go to contribution page
-
G. Rubini (INFN-CNAF)29/09/2004, 10:00Analyzing Grid monitoring data requires the capability of dealing with multidimensional concepts intrinsic to Grid systems. The meaningful dimensions identified in recent works are the physical dimension referring to geographical location of resources, the Virtual Organization (VO) dimension, the time dimension and the monitoring metrics dimension. In this paper, we discuss the...Go to contribution page
-
M. Jones (Manchester University)29/09/2004, 10:00The BaBar experiment has accumulated many terabytes of data on particle physics reactions, accessed by a community of hundreds of users. Typical analysis tasks are C++ programs, individually written by the user, using shared templates and libraries. The resources have outgrown a single platform and a distributed computing model is needed. The grid provides the natural toolset....Go to contribution page
-
T. Coviello (DEE – POLITECNICO DI BARI, V. ORABONA 4, 70125 BARI, ITALY)29/09/2004, 10:00Grid computing is a large-scale, geographically distributed and heterogeneous system that provides a common platform for running different grid-enabled applications. As each application has different characteristics and requirements, it is a difficult task to develop a scheduling strategy able to achieve optimal performance, because application-specific and dynamic system status have...Go to contribution page
-
The ARDA Team29/09/2004, 10:00The ARDA project was started in April 2004 to support the four LHC experiments (ALICE, ATLAS, CMS and LHCb) in the implementation of individual production and analysis environments based on the EGEE middleware. The main goal of the project is to allow a fast feedback between the experiment and the middleware development teams via the construction and the usage of end-to-end...Go to contribution page
-
D. Malon (ANL)29/09/2004, 10:00As ATLAS begins validation of its computing model in 2004, requirements imposed upon ATLAS data management software move well beyond simple persistence, and beyond the "read a file, write a file" operational model that has sufficed for most simulation production. New functionality is required to support the ATLAS Tier 0 model, and to support deployment in a globally distributed...Go to contribution page
-
L. Poncet (LAL-IN2p3)29/09/2004, 10:00In the last few years grid software (middleware) has become available from various sources. However, there are no standards yet which allow for an easy integration of different services. Moreover, middleware was produced by different projects with the main goal of developing new functionalities rather than production quality software. In the context of the LHC Computing Grid...Go to contribution page
-
T. Wlodek (Brookhaven National Lab)29/09/2004, 10:00A description of a Condor-based, Grid-aware batch software system configured to function asynchronously with a mass storage system is presented. The software is currently used in a large Linux Farm (2700+ processors) at the RHIC and ATLAS Tier 1 Computing Facility at Brookhaven Lab. Design, scalability, reliability, features and support issues with a complex Condor-based batch...Go to contribution page
-
A. Wagner (CERN)29/09/2004, 10:00CERN has about 5500 Desktop PCs. These computers offer a large pool of resources that can be used for physics calculations outside office hours. The paper describes a project to make use of the spare CPU cycles of these PCs for LHC tracking studies. The client server application is implemented as a lightweight, modular screensaver and a Web Application containing the physics job...Go to contribution page
-
P. Love (Lancaster University)29/09/2004, 10:00Building on several years of success with the MCRunjob projects at DZero and CMS, the Fermilab-sponsored joint Runjob project aims to provide a workflow description language common to three experiments: DZero, CMS and CDF. This project will encapsulate the remote processing experiences of the three experiments in an extensible software architecture using web services as...Go to contribution page
-
T. Harenberg (UNIVERSITY OF WUPPERTAL)29/09/2004, 10:00The D0 experiment at the Tevatron is collecting some 100 Terabytes of data each year and has a very high demand for computing resources for the various parts of the physics program. D0 meets these demands by establishing a worldwide computing infrastructure, increasingly based on GRID technologies. Distributed resources are used for D0 MC production and the data reprocessing of 1 billion events, requiring 250 TB to be...Go to contribution page
-
O. Smirnova (Lund University, Sweden)29/09/2004, 10:00In common grid installations, the services responsible for storing big data chunks, replicating those data and indexing their availability are usually completely decoupled, and the task of synchronizing data is passed either to user-level tools or to separate services (like spiders), which are themselves subject to failure and usually cannot perform properly if one of the underlying services fails as well. The...Go to contribution page
-
D. Wicke (Fermilab)29/09/2004, 10:00The D0 experiment faces many challenges in enabling access to large datasets for physicists on 4 continents. The strategy of solving these problems on worldwide distributed computing clusters is followed. Since the beginning of Tevatron Run II (March 2001), all Monte-Carlo simulations have been produced outside of Fermilab at remote systems. For analyses, a system of regional...Go to contribution page
-
L. Lueking (FERMILAB)29/09/2004, 10:00The Run II experiments at Fermilab, CDF and D0, have extensive database needs covering many areas of their online and offline operations. Delivery of the data to users and processing farms based around the world has represented major challenges to both experiments. The range of applications employing databases includes data management, calibration (conditions), trigger information, run...Go to contribution page
-
S. Stonjek (Fermi National Accelerator Laboratory / University of Oxford)29/09/2004, 10:00CDF is an experiment at the Tevatron at Fermilab. One dominating factor of the experiment's computing model is the high volume of raw, reconstructed and generated data. The distributed data handling services within SAM move these data to physics analysis applications. The SAM system was already in use at the D-Zero experiment. Due to differences in the computing model of the...Go to contribution page
-
I. Stokes-Rees (UNIVERSITY OF OXFORD PARTICLE PHYSICS)29/09/2004, 10:00The DIRAC system developed for the CERN LHCb experiment is a grid infrastructure for managing generic simulation and analysis jobs. It enables jobs to be distributed across a variety of computing resources, such as PBS, LSF, BQS, Condor, Globus, LCG, and individual workstations. A key challenge of distributed service architectures is that there is no single point of control over...Go to contribution page
-
V. Garonne (CPPM-IN2P3 MARSEILLE)29/09/2004, 10:00The Workload Management System (WMS) is the core component of the DIRAC distributed MC production and analysis grid of the LHCb experiment. It uses a central Task database which is accessed via a set of central Services, with Agents running on each of the LHCb sites. DIRAC uses a 'pull' paradigm in which Agents request tasks whenever they detect that their local resources are available. The...Go to contribution page
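The 'pull' paradigm described above can be sketched in miniature: site agents ask a central task queue for work only when they have free local slots, instead of a central scheduler pushing jobs to sites. This is an illustrative toy, not DIRAC code; all class, site and job names are hypothetical.

```python
# Hypothetical sketch of pull scheduling: agents request tasks from a
# central task database whenever they detect free local resources.
import queue
import threading

class CentralTaskDB:
    """Central task database: hands out tasks only on request."""
    def __init__(self, tasks):
        self._q = queue.Queue()
        for t in tasks:
            self._q.put(t)

    def request_task(self):
        try:
            return self._q.get_nowait()
        except queue.Empty:
            return None  # nothing left to do

def agent(site, db, done, free_slots=2):
    """Site agent: pull tasks while local slots remain available."""
    while free_slots > 0:
        task = db.request_task()
        if task is None:
            break                 # central DB empty, agent idles
        free_slots -= 1           # slot now busy (actual execution elided)
        done.append((site, task))

db = CentralTaskDB(["job-%d" % i for i in range(6)])
done = []
threads = [threading.Thread(target=agent, args=(s, db, done))
           for s in ("site-a", "site-b", "site-c")]
for t in threads: t.start()
for t in threads: t.join()
# Each job is handed out exactly once; no site exceeds its slot count.
```

A practical advantage of pulling, as opposed to pushing, is that the central service never needs an up-to-date view of every site's load: capacity information is implicit in the requests themselves.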
-
M.G. Pia (INFN GENOVA)29/09/2004, 10:00We show how it is nowadays possible to achieve the goals of accuracy and fast computational response in radiotherapy dosimetry using Monte Carlo methods together with a distributed computing model. Monte Carlo methods have never been used in clinical practice because, even if they are more accurate than available commercial software, the calculation time needed to accumulate sufficient...Go to contribution page
-
L. Guy (CERN)29/09/2004, 10:00Extensive and thorough testing of the EGEE middleware is essential to ensure that a production-quality Grid can be deployed on a large scale as well as across the broad range of heterogeneous resources that make up the hundreds of Grid computing centres both in Europe and worldwide. Testing of the EGEE middleware encompasses the tasks of both verification and validation. In addition...Go to contribution page
-
L. Matyska (CESNET, CZECH REPUBLIC)29/09/2004, 10:00The Logging and Bookkeeping service tracks jobs passing through the Grid. It collects important events generated by both the grid middleware components and applications, and processes them at a chosen L&B server to provide the job state. The events are transported through secure reliable channels. Job tracking is fully distributed and does not depend on a single information source, the...Go to contribution page
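The idea of reducing events arriving from several components into a single job state can be illustrated with a toy reducer. The event names and the "furthest state wins" rule are assumptions for illustration, not the actual L&B algorithm:

```python
# Toy derivation of a job's state from events that may arrive out of order
# from different middleware components.
STATE_ORDER = ["SUBMITTED", "WAITING", "READY", "SCHEDULED",
               "RUNNING", "DONE", "CLEARED"]

def job_state(events):
    """Return the furthest point reached along the job's life cycle,
    ignoring unknown event names."""
    seen = [e for e in events if e in STATE_ORDER]
    if not seen:
        return "UNKNOWN"
    return max(seen, key=STATE_ORDER.index)

# Network delays reordered the events, but the derived state is stable:
print(job_state(["RUNNING", "SUBMITTED", "SCHEDULED"]))  # RUNNING
```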
-
P. Mendez Lorenzo (CERN IT/GD)29/09/2004, 10:00In a Grid environment, the access to information on system resources is a necessity in order to perform common tasks such as matching job requirements with available resources, accessing files or presenting monitoring information. Thus both middleware services, such as workload and data management, and applications, such as monitoring tools, require an interface to the Grid information...Go to contribution page
-
X. Zhao (Brookhaven National Laboratory)29/09/2004, 10:00This paper describes the deployment and configuration of the production system for ATLAS Data Challenge 2 starting in May 2004, at Brookhaven National Laboratory, which is the Tier1 center in the United States for the International ATLAS experiment. We will discuss the installation of Windmill (supervisor) and Capone (executor) software packages on the submission host and the relevant...Go to contribution page
-
R. Santinelli (CERN/IT/GD)29/09/2004, 10:00The management of Application and Experiment Software represents a very common issue in emerging grid-aware computing infrastructures. While the middleware is often installed by system administrators at a site via customized tools that serve also for the centralized management of the entire computing facility, the problem of installing, configuring and validating Gigabytes of Virtual...Go to contribution page
-
R. Walker (Simon Fraser University)29/09/2004, 10:00A large number of Grids have been developed, motivated by geo-political or application requirements. Despite being mostly based on the same underlying middleware, the Globus Toolkit, they are generally not inter-operable for a variety of reasons. We present a method of federating those disparate grids which are based on the Globus Toolkit, together with a concrete example of interfacing...Go to contribution page
-
V. Fine (BROOKHAVEN NATIONAL LABORATORY)29/09/2004, 10:00Most HENP experiment software includes a logging or tracing API allowing important feedback from the core application to be displayed in a particular format. However, inserting log statements into the code is a low-tech method for tracing the program execution flow and often leads to a flood of messages in which the relevant ones are occluded. In a distributed computing...Go to contribution page
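One common remedy for the message flood mentioned above is severity and rate filtering at the logging layer. The following sketch uses Python's standard logging module; it is an illustrative technique, not the system presented in the talk:

```python
# Rate-limited, severity-filtered logging: duplicate messages within a
# time window are suppressed so relevant messages are not occluded.
import logging
import time

class RateLimitFilter(logging.Filter):
    """Drop repeats of the same (level, message) within a time window."""
    def __init__(self, window=5.0):
        super().__init__()
        self.window = window
        self.last = {}

    def filter(self, record):
        now = time.monotonic()
        key = (record.levelno, record.getMessage())
        if now - self.last.get(key, -self.window) < self.window:
            return False  # duplicate within the window: suppress it
        self.last[key] = now
        return True

log = logging.getLogger("demo")
log.setLevel(logging.WARNING)      # keep only warnings and above
log.addFilter(RateLimitFilter())
log.warning("disk nearly full")    # emitted
log.warning("disk nearly full")    # suppressed by the filter
```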
-
R. Barbera (Univ. Catania and INFN Catania)29/09/2004, 10:00Computational and data grids are now entering a more mature phase where experimental test-beds are turned into production quality infrastructures operating around the clock. All this is becoming true both at national level, where an example is the Italian INFN production grid (http://grid-it.cnaf.infn.it), and at the continental level, where the most striking example is the European Union...Go to contribution page
-
T. ANTONI (GGUS)29/09/2004, 10:00For very large projects like the LHC Computing Grid Project (LCG) involving 8,000 scientists from all around the world, it is an indispensable requirement to have well-organized user support. The Institute for Scientific Computing at the Forschungszentrum Karlsruhe started implementing a Global Grid User Support (GGUS) after official assignment by the Grid Deployment Board in March...Go to contribution page
-
A. Retico (CERN)29/09/2004, 10:00The installation and configuration of LCG middleware, as it is currently being done, is complex and delicate. An "accurate" configuration of all the services of LCG middleware requires a deep knowledge of the inside dynamics and hundreds of parameters to be dealt with. On the other hand, the number of parameters and flags that are strictly needed in order to run a working "default"...Go to contribution page
-
L. Field (CERN)29/09/2004, 10:00This paper reports on the deployment experience of the de facto grid information system, Globus MDS, in a large scale production grid. The results of this experience led to the development of an information caching system based on a standard openLDAP database. The paper then describes how this caching system was developed further into a production quality information system including a...Go to contribution page
-
H. Tallini (IMPERIAL COLLEGE LONDON)29/09/2004, 10:00GROSS (GRidified Orca Submission System) has been developed to provide CMS end users with a single interface for running batch analysis tasks over the LCG-2 Grid. The main purpose of the tool is to carry out job splitting, preparation, submission, monitoring and archiving in a transparent way which is simple to use for the end user. Central to its design has been the requirement for...Go to contribution page
-
A. Gellrich (DESY)29/09/2004, 10:00DESY is one of the world-wide leading centers for research with particle accelerators and a center for research with synchrotron light. The hadron-electron collider HERA houses four experiments which are taking data and will be operated until 2006 at least. The computer center manages data volumes of the order of 1 PB and is home to around 1000 CPUs. In 2003 DESY started to set up a...Go to contribution page
-
M. Burgon-Lyon (UNIVERSITY OF GLASGOW)29/09/2004, 10:00JIM (Job and Information Management) is a grid extension to the mature data handling system called SAM (Sequential Access via Metadata) used by the CDF, DZero and Minos Experiments based at Fermilab. JIM uses a thin client to allow job submissions from any computer with Internet access, provided the user has a valid certificate or kerberos ticket. On completion the job output can be...Go to contribution page
-
A. Anjum (NIIT)29/09/2004, 10:00In the context of the Interactive Grid-Enabled Analysis Environment (GAE), physicists desire bi-directional interaction with the jobs they submit. In one direction, monitoring information about the job, and hence a "progress bar", should be provided to them. In the other direction, physicists should be able to control their jobs. Before submission, they may direct the job to some specified...Go to contribution page
-
A. Anjum (NIIT)29/09/2004, 10:00The Grid is emerging as a great computational resource, but its dynamic behaviour makes the Grid environment unpredictable. System or network failures can occur, or system performance can degrade. So once a job has been submitted, monitoring becomes essential for the user to ensure that the job is completed in an efficient way. In current environments, once a user submits a job he...Go to contribution page
-
G. Donvito (UNIVERSITÀ DEGLI STUDI DI BARI), G. Tortone (INFN Napoli)29/09/2004, 10:00In a wide-area distributed and heterogeneous grid environment, monitoring represents an important and crucial task. It includes system status checking, performance tuning, bottleneck detection, troubleshooting and fault notification. In particular, a good monitoring infrastructure must provide the information needed to track down the current status of a job in order to locate any problems....Go to contribution page
-
E.M.V. Fasanelli (I.N.F.N.)29/09/2004, 10:00The infn.it AFS cell has been providing a useful single file-space and authentication mechanism for the whole INFN, but the lack of a distributed management system has led several INFN sections and labs to set up local AFS cells. The hierarchical transitive cross-realm authentication introduced in the Kerberos 5 protocol and the new versions of the OpenAFS and MIT implementation of...Go to contribution page
-
D. Rebatto (INFN - MILANO)29/09/2004, 10:00In this paper we present an overview of the implementation of the LCG interface for the ATLAS production system. In order to take advantage of the features provided by the DataGRID software, on which LCG is based, we implemented a Python module, seamlessly integrated into the Workload Management System, which can be used as an object-oriented API to the submission services. On top of it we...Go to contribution page
-
L. Tuura (NORTHEASTERN UNIVERSITY, BOSTON, MA, USA)29/09/2004, 10:00Experiments frequently produce many small data files for reasons beyond their control, such as output splitting into physics data streams, parallel processing on large farms, database technology incapable of concurrent writes into a single file, and constraints on running farms reliably. The resulting data file size is often far from ideal for network transfer and mass storage performance....Go to contribution page
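The merging step such experiments typically resort to can be illustrated with a toy concatenator that records offsets, so each logical file stays addressable inside the large merged file. This is a sketch of the general technique, not any experiment's actual merging tool:

```python
# Concatenate many small byte streams into one large buffer better suited
# to tape and network transfer, keeping a (name, offset, length) catalogue.
import io

def merge_files(streams, chunk=1 << 20):
    """streams: iterable of (name, readable binary stream) pairs.
    Returns (merged_bytes, catalogue)."""
    out, catalogue, offset = io.BytesIO(), [], 0
    for name, s in streams:
        start = offset
        while True:
            block = s.read(chunk)
            if not block:
                break
            out.write(block)
            offset += len(block)
        catalogue.append((name, start, offset - start))
    return out.getvalue(), catalogue

data, cat = merge_files([("a.dat", io.BytesIO(b"aaaa")),
                         ("b.dat", io.BytesIO(b"bb"))])
print(cat)  # [('a.dat', 0, 4), ('b.dat', 4, 2)]
```

A reader can later recover any logical file by seeking to its catalogued offset, which is what makes the merged form transparent to downstream consumers.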
-
S. Thorn29/09/2004, 10:00The University of Edinburgh has a significant interest in mass storage systems as it is one of the core groups tasked with the roll out of storage software for the UK's particle physics grid, GridPP. We present the results of a development project to provide software interfaces between the SDSC Storage Resource Broker, the EU DataGrid and the Storage Resource Manager. This project was...Go to contribution page
-
I. Legrand (CALTECH)29/09/2004, 10:00The design and optimization of the Computing Models for the future LHC experiments, based on the Grid technologies, requires a realistic and effective modeling and simulation of the data access patterns, the data flow across the local and wide area networks, and the scheduling and workflow created by many concurrent, data intensive jobs on large scale distributed systems. This paper...Go to contribution page
-
E. Berman (FERMILAB)29/09/2004, 10:00Fermilab operates a petabyte scale storage system, Enstore, which is the primary data store for experiments' large data sets. The Enstore system regularly transfers greater than 15 Terabytes of data each day. It is designed using a client-server architecture providing sufficient modularity to allow easy addition and replacement of hardware and software components. Monitoring of this...Go to contribution page
-
G. Zito (INFN BARI)29/09/2004, 10:00The complexity of the CMS Tracker (more than 50 million channels to monitor), now under construction in ten laboratories worldwide with hundreds of people involved, will require new tools for monitoring both the hardware and the software. In our approach we use both visualization tools and Grid services to make this monitoring possible. The use of visualization enables us to represent...Go to contribution page
-
D. Sanders (UNIVERSITY OF MISSISSIPPI)29/09/2004, 10:00High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent...Go to contribution page
-
I. Adachi (KEK)29/09/2004, 10:00The Belle experiment has accumulated an integrated luminosity of more than 240fb-1 so far, and the daily logged luminosity has exceeded 800pb-1. This requires a more efficient and reliable way of event processing. To meet this requirement, a new offline processing scheme has been constructed, based upon techniques employed for the Belle online reconstruction farm. Event processing is...Go to contribution page
-
E. Berdnikov (INSTITUTE FOR HIGH ENERGY PHYSICS, PROTVINO, RUSSIA)29/09/2004, 10:00The scope of this work is the study of scalability limits of a Certification Authority (CA) running for large scale GRID environments. The operation of the Certification Authority is analyzed from the point of view of the rate of incoming requests, the complexity of authentication procedures, LCG security restrictions and other limiting factors. It is shown that standard CA operational...Go to contribution page
-
C. Nicholson (UNIVERSITY OF GLASGOW)29/09/2004, 10:00In large-scale Grids, the replication of files to different sites is an important data management mechanism which can reduce access latencies and give improved usage of resources such as network bandwidth, storage and computing power. In the search for an optimal data replication strategy, the Grid simulator OptorSim was developed as part of the European DataGrid project. Simulations of...Go to contribution page
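A replication strategy of the kind a simulator like OptorSim explores ultimately rests on cost comparisons like the following toy model, which weighs repeated remote reads against the one-time cost of creating a local replica. The cost parameters are purely illustrative:

```python
# Toy replica-placement decision: replicate a file to a site when the
# expected cost of repeated remote reads exceeds the cost of one copy
# plus subsequent local reads.
def cheaper_to_replicate(n_accesses, remote_cost, copy_cost, local_cost):
    """All costs in the same arbitrary units (e.g. seconds of transfer)."""
    remote_total = n_accesses * remote_cost
    replica_total = copy_cost + n_accesses * local_cost
    return replica_total < remote_total

# A hot file (many expected accesses) justifies a replica; a cold one does not:
print(cheaper_to_replicate(100, 10.0, 200.0, 1.0))  # True
print(cheaper_to_replicate(5, 10.0, 200.0, 1.0))    # False
```

Real strategies must additionally estimate future access counts from past ones and respect storage quotas, which is where simulation becomes necessary.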
-
G. Shabratova (Joint Institute for Nuclear Research (JINR))29/09/2004, 10:00The report presents an analysis of the Alice Data Challenge 2004. This Data Challenge has been performed on two different distributed computing environments. The first one is the Alice Environment for distributed computing (AliEn) used standalone. Presently this environment allows ALICE physicists to obtain results on simulation, reconstruction and analysis of data in ESD format for...Go to contribution page
-
S. Mrenna (FERMILAB)29/09/2004, 10:00PATRIOT is a project that aims to provide better predictions of physics events for the high-Pt physics program of Run2 at the Tevatron collider. Central to Patriot is an enstore or mass storage repository for files describing the high-Pt physics predictions. These are typically stored as StdHep files which can be handled by CDF and D0 and run through detector and triggering...Go to contribution page
-
B. Quinn (The University of Mississippi)29/09/2004, 10:00The D0 experiment at Fermilab's Tevatron will record several petabytes of data over the next five years in pursuing the goals of understanding nature and searching for the origin of mass. Computing resources required to analyze these data far exceed the capabilities of any one institution. Moreover, the widely scattered geographical distribution of collaborators poses further serious...Go to contribution page
-
A. Anjum (NIIT)29/09/2004, 10:00Grid computing provides key infrastructure for distributed problem solving in dynamic virtual organizations. However, Grids are still the domain of a few highly trained programmers with expertise in networking, high-performance computing, and operating systems. One of the big issues in the full-scale usage of a grid is the matching of the resource requirements of a job submission to...Go to contribution page
-
29/09/2004, 10:00For the BaBar Computing Group. BaBar has recently moved away from using Objectivity/DB for its event store towards a ROOT-based event store. Data in the new format is produced at about 20 institutions worldwide as well as at SLAC. Among new challenges are the organization of data export from remote institutions, archival at SLAC and making the data visible to users for analysis and...Go to contribution page
-
A. Hasan (SLAC)29/09/2004, 10:00We describe the production experience gained from implementing and using exclusively the Storage Resource Broker (SRB), developed at the San Diego Supercomputer Center, to distribute the BaBar experiment's production event data stored in ROOT files from the experiment center at SLAC, California, USA to a Tier A computing center at CCIN2P3, Lyon, France. In addition we outline how the system can...Go to contribution page
-
D. Andreotti (INFN Sezione di Ferrara)29/09/2004, 10:00The BaBar experiment has been taking data since 1999. In 2001 the computing group started to evaluate the possibility to evolve toward a distributed computing model in a Grid environment. In 2003, a new computing model, described in other talks, was implemented, and ROOT I/O is now being used as the Event Store. We implemented a system, based on the LHC Computing Grid (LCG) tools, to submit...Go to contribution page
-
I. Terekhov (FERMI NATIONAL ACCELERATOR LABORATORY)29/09/2004, 10:00SAMGrid is a globally distributed system for data handling and job management, developed at Fermilab for the D0 and CDF experiments in Run II. The Condor system is being developed at the University of Wisconsin for management of distributed resources, computational and otherwise. We briefly review the SAMGrid architecture and its interaction with Condor, which was presented earlier. We...Go to contribution page
-
A. Lyon (FERMI NATIONAL ACCELERATOR LABORATORY)29/09/2004, 10:00The SAMGrid team is in the process of implementing a monitoring and information service, which fulfills several important roles in the operation of the SAMGrid system, and will replace the first generation of monitoring tools in the current deployments. The first generation tools are in general based on text logfiles and represent solutions which are not scalable or maintainable. The...Go to contribution page
-
E. Slabospitskaya (Institute for High Energy Physics, Protvino, Russia)29/09/2004, 10:00Storage Resource Manager (SRM) and Grid File Access Library (GFAL) are GRID middleware components used for transparent access to Storage Elements. SRM provides a common interface (WEB service) to backend systems giving dynamic space allocation and file management. GFAL provides a mechanism whereby application software can access a file at a site without having to know which transport...Go to contribution page
-
V. Bartsch (OXFORD UNIVERSITY)29/09/2004, 10:00To distribute computing for CDF (Collider Detector at Fermilab), a system managing local compute and storage resources is needed. For this purpose CDF will use the DCAF (Decentralized CDF Analysis Farms) system which is already in use at Fermilab. DCAF has to work with the data handling system SAM (Sequential Access to data via Metadata). However, both DCAF and SAM are mature systems which...Go to contribution page
-
R. JONES (LANCAS)29/09/2004, 10:00The ATLAS Computing Model is under continuous active development. Previous exercises focussed on the Tier-0/Tier-1 interactions, with an emphasis on the resource implications and only a high-level view of the data and workflow. The work presented here considerably revises the resource implications, and attempts to describe in some detail the data and control flow from the High Level...Go to contribution page
-
Douglas Smith (Stanford Linear Accelerator Center)29/09/2004, 10:00The new BaBar bookkeeping system comes with tools to directly support data analysis tasks. This Task Manager system acts as an interface between datasets defined in the bookkeeping system, which are used as input to analyses, and the offline analysis framework. The Task Manager organizes the processing of the data by creating specific jobs to be either submitted to a batch system, or...Go to contribution page
-
A. Boehnlein (FERMI NATIONAL ACCELERATOR LABORATORY)29/09/2004, 10:00The D0 experiment relies on large-scale computing systems to achieve its physics goals. As the experiment lifetime spans multiple generations of computing hardware, it is fundamental to make projective models of how to use available resources to meet the anticipated needs. In addition, computing resources can be supplied as in-kind contributions by collaborating institutions and...Go to contribution page
-
C. ARNAULT (CNRS)29/09/2004, 10:00One of the most important problems in software management of a very large and complex project such as Atlas is how to deploy the software on the running sites. By running sites we include computer sites ranging from computing centers in the usual sense down to individual laptops but also the computer elements of a computing grid organization. The deployment activity consists in...Go to contribution page
-
S. Bagnasco (INFN Torino)29/09/2004, 10:00AliEn (ALICE Environment) is a GRID middleware developed and used in the context of ALICE, the CERN LHC heavy-ion experiment. In order to run Data Challenges exploiting both AliEn "native" resources and any infrastructure based on EDG-derived middleware (such as the LCG and the Italian GRID.IT), an interface system was designed and implemented; some details of a prototype were already...Go to contribution page
-
J. Kennedy (LMU Munich)29/09/2004, 10:00This paper presents an overview of the legacy interface provided for the ATLAS DC2 production system. The term legacy refers to any non-grid system which may be deployed for use within DC2. The reasoning behind providing such a service for DC2 is twofold in nature. Firstly, the legacy interface provides a backup solution should unforeseen problems occur while developing the grid...Go to contribution page
-
A. Kreymer (FERMILAB)29/09/2004, 10:00The Fermilab CDF Run-II experiment is now providing official support for remote computing, expanding this to about 1/4 of the total CDF computing during the Summer of 2004. I will discuss in detail the extensions to CDF software distribution and configuration tools and procedures, in support of CDF GRID/DCAF computing for Summer 2004. We face the challenge of unreliable networks, time...Go to contribution page
-
29/09/2004, 10:00In the High Energy Physics (HEP) community, Grid technologies have been accepted as solutions to the distributed computing problem. Several Grid projects have provided software in the last years. Among all of them, the LCG - especially aimed at HEP applications - provides a set of services and respective client interfaces, both in the form of command line tools as well as programming...Go to contribution page
-
P. Cerello (INFN Torino)29/09/2004, 10:00Breast cancer screening programs require managing and accessing a huge amount of data, intrinsically distributed, as they are collected in different Hospitals. The development of an application based on Computer Assisted Detection algorithms for the analysis of digitised mammograms in a distributed environment is a typical GRID use case. In particular, AliEn (ALICE Environment)...Go to contribution page
-
O. SMIRNOVA (Lund University, Sweden)29/09/2004, 10:00The Nordic Grid facility (NorduGrid) came into production operation during the summer of 2002 when the Scandinavian Atlas HEP group started to use the Grid for the Atlas Data Challenges and was thus the first Grid ever contributing to an Atlas production. Since then, the Grid facility has been in continuous 24/7 operation offering an increasing number of resources to a growing set of...Go to contribution page
-
E. Perez-Calle (CIEMAT)29/09/2004, 10:00Expansion of large computing fabrics/clusters throughout the world would create a need for stricter security. Otherwise any system could suffer damages such as data loss, data falsification or misuse. Perimeter security and intrusion detection system (IDS) are the two main aspects that must be taken into account in order to achieve system security. The main target of an intrusion...Go to contribution page
-
F. Furano (INFN Padova)29/09/2004, 10:00This paper describes XTNetFile, the client side of a project conceived to address the high demand data access needs of modern physics experiments such as BaBar using the ROOT framework. In this context, a highly scalable and fault tolerant client/server architecture for data access has been designed and deployed which allows thousands of batch jobs and interactive sessions to...Go to contribution page
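The kind of client-side fault tolerance described above can be sketched as a simple failover loop over redundant servers. The function and its parameters are hypothetical illustrations of the general technique, not XTNetFile's actual API:

```python
# Client-side failover: try servers in random order so many concurrent
# clients spread their load, and fall back to another server on failure.
import random

def read_with_failover(servers, fetch, retries=3):
    """fetch(host) returns the data or raises ConnectionError.
    Makes up to `retries` passes over the server list before giving up."""
    last_err = None
    for _ in range(retries):
        for host in random.sample(servers, len(servers)):
            try:
                return fetch(host)
            except ConnectionError as err:
                last_err = err  # this server is down: try the next one
    raise last_err

# Example with a stand-in fetch function: one server is down, one is up.
def fetch(host):
    if host == "bad":
        raise ConnectionError("server down")
    return "data from " + host

print(read_with_failover(["bad", "good"], fetch))  # data from good
```

Randomizing the order is a simple load-spreading choice; a production client would typically also use timeouts and exponential back-off between passes.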
-
Stan Williams (HP)29/09/2004, 11:00Today's computers are roughly a factor of one billion less efficient at doing their job than the laws of fundamental physics state that they could be. How much of this efficiency gain will we actually be able to harvest? What are the biggest obstacles to achieving many orders of magnitude improvement in our computing hardware, rather than the roughly factor of two we are used to...Go to contribution page
-
J. ROESE29/09/2004, 11:30Today and in the future businesses need an intelligent network. And Enterasys has the smarter solution. Our active network uses a combination of context-based and embedded security technologies - as well as the industry's first automated response capability - so it can manage who is using your network. Our solution also protects the entire enterprise - from the edge, through the...Go to contribution page
-
Dave McQueeney (IBM)29/09/2004, 12:00The Global Technology Outlook (GTO) is IBM Research's projection of the future for information technology (IT). The GTO identifies progress and trends in key indicators such as raw computing speed, bandwidth, storage, software technology, and business modeling. These new technologies have the potential to radically transform the performance and utility of tomorrow's information processing...Go to contribution page
-
D. Smith (STANFORD LINEAR ACCELERATOR CENTER)29/09/2004, 14:00Track 5 - Distributed Computing Systems and Experiences, oral presentation, for the BaBar Computing Group. The analysis of the BaBar experiment requires many times the measured data to be produced in simulation. This requirement has resulted in one of the largest distributed computing projects ever completed. The latest round of simulation for BaBar started in early 2003, and completed in early 2004, and encompassed over 1 million jobs, and over 2.2...Go to contribution page
-
S. NAQVI (TELECOM PARIS)29/09/2004, 14:00In the evolution of computational grids, security threats were overlooked in the desire to implement a high performance distributed computational system. But now the growing size and profile of the grid require comprehensive security solutions as they are critical to the success of the endeavour. A comprehensive security system, capable of responding to any attack on grid resources, is...Go to contribution page
-
Maria Girone29/09/2004, 14:00This presentation will summarise the deployment experience gained with POOL during the first large LHC experiment data challenges performed. In particular we discuss the storage access performance and optimisations, the integration issues with grid middleware services such as the LCG Replica Location Service (RLS) and the LCG Replica Manager, and experience with the POOL proposed...Go to contribution page
-
R. Itoh (KEK)29/09/2004, 14:00A sizeable increase in the machine luminosity of KEKB accelerator is expected in coming years. This may result in a shortage in the data storage resource for the Belle experiment in the near future and it is desired to reduce the data flow as much as possible before writing the data to the storage device. For this purpose, a realtime event reconstruction farm has been installed in...Go to contribution page
-
F. Gaede (DESY IT)29/09/2004, 14:00LCIO is a persistency framework and data model for the next linear collider. Its original implementation, as presented at CHEP 2003, was focused on simulation studies. Since then the data model has been extended to also incorporate prototype test beam data, reconstruction and analysis. The design of the interface has also been simplified. LCIO defines a common abstract user...Go to contribution page
-
T. Smith (CERN)29/09/2004, 14:00This paper discusses the challenges in maintaining a stable Managed Storage Service for users built upon dynamic underlying disk and tape layers. Early in 2004 the tools and techniques used to manage disk, tape, and stage servers were refreshed in adopting the QUATTOR tool set. This has markedly increased the coherency and efficiency of the configuration of data servers. The LEMON...Go to contribution page
-
Prof. A. Rimoldi (PAVIA UNIVERSITY & INFN)29/09/2004, 14:00The simulation for the ATLAS experiment is presently operational in a full OO environment and it is presented here in terms of successful solutions to problems dealing with applications in a wide community using a common framework. The ATLAS experiment is the perfect scenario in which to test all applications able to satisfy the different needs of a big community. Following a well stated...Go to contribution page
-
M. Stavrianakou (FNAL)29/09/2004, 14:20The CMS detector simulation package, OSCAR, is based on the Geant4 simulation toolkit and the CMS object-oriented framework for simulation and reconstruction. Geant4 provides a rich set of physics processes describing in detail electro-magnetic and hadronic interactions. It also provides the tools for the implementation of the full CMS detector geometry and the interfaces required for...Go to contribution page
-
E. Laure (CERN)29/09/2004, 14:20The aim of the EGEE project (Enabling Grids for E-Science in Europe) is to create a reliable and dependable European Grid infrastructure for e-Science. The objective of the Middleware Re-engineering and Integration Research Activity is to provide robust middleware components, deployable on several platforms and operating systems, corresponding to the core Grid services for resource access, data...Go to contribution page
-
D. Skow (FERMILAB)29/09/2004, 14:20There have been a number of efforts to develop use cases for the Grid to guide development and usability testing. This talk examines the value of "mis-use cases" for guiding the development of operational controls and error handling. A couple of the more common current network attack patterns will be extrapolated to a global Grid environment. The talk will walk through the various...Go to contribution page
-
D. Duellmann (CERN IT/DB & LCG POOL PROJECT)29/09/2004, 14:20The LCG POOL project is now entering the third year of active development. The basic functionality of the project is provided but some functional extensions will move into the POOL system this year. This presentation will give a summary of the main functionality provided by POOL, which is used in physics productions today. We will then present the design and implementation of the main new...Go to contribution page
-
29/09/2004, 14:20Track 5 - Distributed Computing Systems and Experiences, oral presentation. The CMS 2004 Data Challenge (DC04) was devised to test several key aspects of the CMS Computing Model in three ways: by trying to sustain a 25 Hz reconstruction rate at the Tier-0; by distributing the reconstructed data to six Tier-1 Regional Centers (FNAL in US, FZK in Germany, Lyon in France, CNAF in Italy, PIC in Spain, RAL in UK) and handling catalogue issues; by redistributing...Go to contribution page
-
M. Richter (Department of Physics and Technology, University of Bergen, Norway)29/09/2004, 14:20The ALICE experiment at LHC will implement a High Level Trigger System, where the information from all major detectors is combined, including the TPC, TRD, DIMUON, ITS etc. The largest computing challenge is imposed by the TPC, requiring realtime pattern recognition. The main task is to reconstruct the tracks in the TPC, and in a final stage combine the tracking information from all...Go to contribution page
-
A. Moibenko (FERMI NATIONAL ACCELERATOR LABORATORY, USA)29/09/2004, 14:20Fermilab has developed and successfully uses the Enstore Data Storage System. It is the primary data store for the Run II collider experiments, as well as for others. It provides data storage in robotic tape libraries according to the requirements of the experiments. High fault tolerance and availability, as well as multilevel priority-based request processing, allow experiments to effectively...Go to contribution page
-
A. Klimentov (A)29/09/2004, 14:40Track 5 - Distributed Computing Systems and Experiences, oral presentation. AMS-02 Computing and Ground Data Handling. V. Choutko (MIT, Cambridge), A. Klimentov (MIT, Cambridge) and M. Pohl (Geneva University). AMS (Alpha Magnetic Spectrometer) is an experiment to search in space for dark matter and antimatter on the International Space Station (ISS). The AMS detector had a precursor flight in 1998 (STS-91, June 2-12, 1998)....Go to contribution page
-
H. Meinhard (CERN-IT)29/09/2004, 14:40By 2008, the T0/T1 centre for the LHC at CERN is estimated to use about 5000 TB of disk storage. This is a very significant increase over the roughly 250 TB in use now. In order to be affordable, the chosen technology must provide the required performance and at the same time be cost-effective and easy to operate and use. We will present an analysis of the cost (both in terms of...Go to contribution page
-
P. Sheldon (VANDERBILT UNIVERSITY)29/09/2004, 14:40The BTeV experiment, a proton/antiproton collider experiment at the Fermi National Accelerator Laboratory, will have a trigger that will perform complex computations (to reconstruct vertices, for example) on every collision (as opposed to the more traditional approach of employing a first level hardware based trigger). This trigger requires large-scale fault adaptive embedded software: ...Go to contribution page
-
Giacomo Govi29/09/2004, 14:40The POOL software package has been successfully integrated with the three large experiment software frameworks of ATLAS, CMS and LHCb. This presentation will summarise the experience gained during these integration efforts and will try to highlight the commonalities and the main differences between the integration approaches. In particular we'll discuss the role of the POOL object cache,...Go to contribution page
-
C. Steenberg (California Institute of Technology)29/09/2004, 14:40Clarens enables distributed, secure and high-performance access to the worldwide data storage, compute, and information Grids being constructed in anticipation of the needs of the Large Hadron Collider at CERN. We report on the rapid progress in the development of a second server implementation in the Java language, the evolution of a peer-to-peer network of Clarens servers, and general...Go to contribution page
-
A. Gheata (CERN)29/09/2004, 14:40The current major detector simulation programs, i.e. GEANT3, GEANT4 and FLUKA have largely incompatible environments. This forces the physicists willing to make comparisons between the different transport Monte Carlos to develop entirely different programs. Moreover, migration from one program to the other is usually very expensive, in manpower and time, for an experiment offline...Go to contribution page
-
M. Cardenas Montes (CIEMAT)29/09/2004, 14:40Implementing strategies for secured access to widely accessible clusters is a basic requirement of these services, in particular if Grid integration is sought. This issue has two complementary lines to be considered: security perimeter and intrusion detection systems. In this paper we address aspects of the second one. Compared to classical intrusion detection mechanisms, close...Go to contribution page
-
S. Wiesand (DESY)29/09/2004, 15:00 64-bit commodity clusters and farms based on AMD technology have meanwhile been proven to achieve high computing power in many scientific applications. This report first gives a short introduction to the specialties of the amd64 architecture and the characteristics of two-way Opteron systems. Then results from measuring the performance and the behavior of such systems in various...Go to contribution page
-
R. Panse (KIRCHHOFF INSTITUTE FOR PHYSICS - UNIVERSITY OF HEIDELBERG)29/09/2004, 15:00Supercomputers are more and more being replaced by PC cluster systems, and future LHC experiments will also use large PC clusters. These clusters will consist of off-the-shelf PCs, which in general are not built to run in a PC farm. Configuring, monitoring and controlling such clusters requires a considerable amount of time-consuming administrative effort. We propose a cheap and easy...Go to contribution page
-
A. Fanfani (INFN-BOLOGNA (ITALY))29/09/2004, 15:00Track 5 - Distributed Computing Systems and Experiences, oral presentation. In March-April 2004 the CMS experiment undertook a Data Challenge (DC04). During the previous 8 months CMS undertook a large simulated event production. The goal of the challenge was to run CMS reconstruction for a sustained period at a 25 Hz input rate, distribute the data to the CMS Tier-1 centers and analyze them at remote sites. Grid environments developed in Europe by the LHC...Go to contribution page
-
Birger KOBLITZ (CERN)29/09/2004, 15:00The ARDA project was started in April 2004 to support the four LHC experiments (ALICE, ATLAS, CMS and LHCb) in the implementation of individual production and analysis environments based on the EGEE middleware. The main goal of the project is to allow a fast feedback between the experiment and the middleware development teams via the construction and the usage of end-to-end...Go to contribution page
-
M. POTEKHIN (BROOKHAVEN NATIONAL LABORATORY)29/09/2004, 15:00The STAR Collaboration is currently using simulation software based on Geant 3. The emergence of new Monte Carlo simulation packages, coupled with the evolution of both the STAR detector and its software, requires a drastic change of the simulation framework. We see the Virtual Monte Carlo (VMC) approach as providing a layer of abstraction that facilitates such a transition. The VMC...Go to contribution page
-
P. Canal (FERMILAB)29/09/2004, 15:00Since version 3.05/02, the ROOT I/O System has gone through significant enhancements. In particular, the STL container I/O has been upgraded to support splitting, reading without existing libraries and using directly from TTreeFormula (TTree queries). This upgrade to the I/O system is such that it can be easily extended (even by the users) to support the splitting and querying of...Go to contribution page
-
M. Branco (CERN)29/09/2004, 15:00In a resource-sharing environment on the grid both grid users and grid production managers call for security and data protection from unauthorized access. To secure data management several novel grid technologies were introduced in ATLAS data management. Our presentation will review new grid technologies introduced in HEP production environment for database access through the Grid...Go to contribution page
-
S. Jarp (CERN)29/09/2004, 15:20For the last 18 months CERN has collaborated closely with several industrial partners to evaluate, through the opencluster project, technology that may (and hopefully will) play a strong role in the future computing solutions, primarily for LHC but possibly also for other HEP computing environments. Unlike conventional field testing where solutions from industry are evaluated rather...Go to contribution page
-
Rob KENNEDY (FNAL)29/09/2004, 15:20Track 5 - Distributed Computing Systems and Experiences, oral presentation. Most of the simulated events for the DZero experiment at Fermilab have historically been produced by the "remote" collaborating institutions. One of the principal challenges reported concerns the maintenance of the local software infrastructure, which is generally different from site to site. As the understanding of the community on distributed computing over distributively owned and...Go to contribution page
-
F. Rademakers (CERN)29/09/2004, 15:20The ALICE experiment and the ROOT team have developed a Grid-enabled version of PROOF that allows efficient parallel processing of large and distributed data samples. This system has been integrated with the ALICE-developed AliEn middleware. Parallelism is implemented at the level of each local cluster for efficient processing and at the Grid level, for optimal workload management of...Go to contribution page
-
A. McNab (UNIVERSITY OF MANCHESTER)29/09/2004, 15:20We describe the GridSite authorization system, developed by GridPP and the EU DataGrid project for access control in High Energy Physics grid environments with distributed virtual organizations. This system provides a general toolkit of common functions, including the evaluation of access policies (in GACL or XACML), the manipulation of digital credentials (X.509, GSI Proxies or VOMS...Go to contribution page
-
A. Campbell (DESY)29/09/2004, 15:20We present the scheme in use for online high-level filtering, event reconstruction and classification in the H1 experiment at HERA since 2001. The Data Flow framework (presented at CHEP2001) will be reviewed. This is based on CORBA for all data transfer, multi-threaded C++ code to handle the data flow and synchronisation, and Fortran code for reconstruction and event selection. A...Go to contribution page
-
O. van der Aa (INSTITUT DE PHYSIQUE NUCLEAIRE, UNIVERSITE CATHOLIQUE DE LOUVAIN)29/09/2004, 15:20The observation of Higgs bosons predicted in supersymmetric theories will be a challenging task for the CMS experiment at the LHC, in particular for its High Level Trigger (HLT). A prototype of the High Level Trigger software to be used in the filter farm of the CMS experiment and for the filtering of Monte Carlo samples will be presented. The implemented prototype heavily uses...Go to contribution page
-
S. Linev (GSI)29/09/2004, 15:20Until now, ROOT objects could be stored only in a binary ROOT-specific file format. Without the ROOT environment the data stored in such files are not directly accessible. Storing objects in XML format makes it easy to view and edit (with some restrictions) the object data directly. It is also plausible to use XML as an exchange format with other applications. Therefore XML streaming has been...Go to contribution page
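The appeal of XML streaming — object data stay human-readable and editable without the producing framework — can be sketched in a few lines. This is an illustrative Python sketch using the standard library, not the ROOT implementation; the `Hit` class and `to_xml` helper are invented for the example.

```python
import xml.etree.ElementTree as ET

def to_xml(obj, tag):
    """Serialize a flat object's attributes to an XML string (illustrative only)."""
    elem = ET.Element(tag, {"class": type(obj).__name__})
    for name, value in vars(obj).items():
        child = ET.SubElement(elem, name, {"type": type(value).__name__})
        child.text = str(value)
    return ET.tostring(elem, encoding="unicode")

class Hit:  # hypothetical data object
    def __init__(self, x, y):
        self.x = x
        self.y = y

xml_text = to_xml(Hit(1.5, -2.0), "hit")
# The payload can now be viewed in any text editor and parsed back
# without the original class definition:
root = ET.fromstring(xml_text)
print(root.find("x").text)
```

The trade-off against a binary format is the one the abstract implies: text is larger and slower to parse, but it is portable to any XML-aware application.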
-
Dr N. Konstantinidis (UNIVERSITY COLLEGE LONDON)29/09/2004, 15:40We present a set of algorithms for fast pattern recognition and track reconstruction using 3D space points aimed for the High Level Triggers (HLT) of multi-collision hadron collider environments. At the LHC there are several interactions per bunch crossing separated along the beam direction, z. The strategy we follow is to (a) identify the z-position of the interesting interaction...Go to contribution page
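Step (a) above — locating the z of the interesting interaction among pile-up — is commonly done by histogramming the z-impact points of the tracks and taking the most populated bin. A toy sketch of that idea (not the authors' algorithm; the range and bin width are arbitrary choices):

```python
def find_interaction_z(track_z0s, z_range=(-15.0, 15.0), bin_width=0.5):
    """Histogram track z-impact points and return the centre of the peak bin."""
    lo, hi = z_range
    nbins = int((hi - lo) / bin_width)
    counts = [0] * nbins
    for z in track_z0s:
        if lo <= z < hi:
            counts[int((z - lo) / bin_width)] += 1
    peak = max(range(nbins), key=counts.__getitem__)
    return lo + (peak + 0.5) * bin_width

# Two pile-up vertices, at z = -3 and z = 5; the one at 5 has more tracks.
zs = [-3.1, -2.9, -3.0] + [4.9, 5.1, 5.0, 4.8, 5.2]
print(find_interaction_z(zs))
```

In a trigger context the attraction of this approach is speed: one linear pass over the tracks, no combinatorics.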
-
A. Heiss (FORSCHUNGSZENTRUM KARLSRUHE)29/09/2004, 15:40Distributed physics analysis techniques as provided by the rootd and proofd concepts require a fast and efficient interconnect between the nodes. Apart from the required bandwidth, the latency of message transfers is important, in particular in environments with many nodes. Ethernet is known to have large latencies, between 30 and 60 microseconds for the common Gigabit Ethernet. The...Go to contribution page
-
T. Shears (University of Liverpool)29/09/2004, 15:40The Level 1 and High Level triggers for the LHCb experiment are software triggers which will be implemented on a farm of about 1800 CPUs, connected to the detector read-out system by a large Gigabit Ethernet LAN with a capacity of 8 Gigabyte/s and some 500 Gigabit Ethernet links. The architecture of the readout network must be designed to maximise data throughput, control data flow,...Go to contribution page
-
T. Barrass (CMS, UNIVERSITY OF BRISTOL)29/09/2004, 15:40CMS currently uses a number of tools to transfer data which, taken together, form the basis of a heterogeneous datagrid. The range of tools used, and the directed, rather than optimised, nature of CMS's recent large-scale data challenge required the creation of a simple infrastructure that allowed a range of tools to operate in a complementary way. The system created comprises a...Go to contribution page
-
A. Peters (ce)29/09/2004, 15:40Track 5 - Distributed Computing Systems and Experiences, oral presentation. During the first half of 2004 the ALICE experiment performed a large distributed computing exercise with two major objectives: to test the ALICE computing model, including distributed analysis, and to provide a data sample for a refinement of the ALICE jet physics Monte Carlo studies. Simulation, reconstruction and analysis of several hundred thousand events were performed, using the...Go to contribution page
-
T. Johnson (SLAC)29/09/2004, 15:40The FreeHEP Java library contains a complete implementation of Root IO for Java. The library uses the "Streamer Info" embedded in files created by Root 3.x to dynamically create high performance Java proxies for Root objects, making it possible to read any Root file, including files with user defined objects. In this presentation we will discuss the status of this code, explain its...Go to contribution page
-
M. Crawford (FERMILAB)29/09/2004, 16:30As an underpinning of AFS and Windows 2000, and as a formally proven security protocol in its own right, Kerberos is ubiquitous among HEP sites. Fermilab and users from other sites have taken advantage of this and built a diversity of distributed applications over Kerberos v5. We present several projects in which this security infrastructure has been leveraged to meet the requirements of...Go to contribution page
-
J-D. Durand (CERN)29/09/2004, 16:30The Cern Advanced STORage (CASTOR) system is a scalable high throughput hierarchical storage system developed at CERN. CASTOR was first deployed for full production use in 2001 and has expanded to now manage around two PetaBytes and almost 20 million files. CASTOR is a modular system, providing a distributed disk cache, a stager, and a back end tape archive, accessible via a global...Go to contribution page
-
C. Jones (CORNELL UNIVERSITY)29/09/2004, 16:30HEP analysis is an iterative process. It is critical that in each iteration the physicist's analysis job accesses the same information as previous iterations (unless explicitly told to do otherwise). This becomes problematic after the data has been reconstructed several times. In addition, when starting a new analysis, physicists normally want to use the most recent version of...Go to contribution page
-
29/09/2004, 16:30SAM was developed as a data handling system for Run II at Fermilab. SAM is a collection of services, each described by metadata. The metadata are modeled on a relational database, and implemented in ORACLE. SAM, originally deployed in production for the D0 Run II experiment, has now been also deployed at CDF and is being commissioned at MINOS. This illustrates that the metadata...Go to contribution page
-
Manuel Dias-Gomez (University of Geneva, Switzerland)29/09/2004, 16:30The ATLAS experiment at the Large Hadron Collider (LHC) will face the challenge of efficiently selecting interesting candidate events in pp collisions at 14 TeV center- of-mass energy, whilst rejecting the enormous number of background events, stemming from an interaction rate of about 10^9 Hz. The Level-1 trigger will reduce the incoming rate to around O(100 kHz). Subsequently, the...Go to contribution page
-
V. Gyurjyan (Jefferson Lab)29/09/2004, 16:30A general overview of the Jefferson Lab data acquisition run control system is presented. This run control system is designed to operate the configuration, control, and monitoring of all Jefferson Lab experiments. It controls data-taking activities by coordinating the operation of DAQ sub-systems, online software components and third-party software such as external slow control...Go to contribution page
-
J. Closier (CERN)29/09/2004, 16:30Track 5 - Distributed Computing Systems and Experiences, oral presentation. The LHCb experiment performed its latest Data Challenge (DC) in May-July 2004. The main goal was to demonstrate the ability of the LHCb grid system to carry out massive production and efficient distributed analysis of the simulation data. The LHCb production system called DIRAC provided all the necessary services for the DC: Production and Bookkeeping Databases, File catalogs, Workload...Go to contribution page
-
M. Mambelli (UNIVERSITY OF CHICAGO)29/09/2004, 16:50Track 5 - Distributed Computing Systems and Experiences, oral presentation. We describe the design and operational experience of the ATLAS production system as implemented for execution on Grid3 resources. The execution environment consisted of a number of grid-based tools: Pacman for installation of VDT-based Grid3 services and ATLAS software releases, the Capone execution service built from the Chimera/Pegasus virtual data system for directed acyclic graph...Go to contribution page
-
S. Albrand (LPSC)29/09/2004, 16:50The ATLAS Metadata Interface (AMI) project provides a set of generic tools for managing database applications. AMI has a three-tier architecture with a core that supports a connection to any RDBMS using JDBC and SQL. The middle layer assumes that the databases have an AMI compliant self-describing structure. It provides a generic web interface and a generic command line interface. The...Go to contribution page
-
G. GANIS (CERN)29/09/2004, 16:50The new authentication and security services available in the ROOT framework for client/server applications will be described. The authentication scheme has been designed to make the system complete and flexible, fitting the needs of the coming clusters and facilities. Three authentication methods have been made available: Globus/GSI, for GRID-awareness; SSH, to allow...Go to contribution page
-
P. Fuhrmann (DESY)29/09/2004, 16:50The dCache software system has been designed to manage a huge number of individual disk storage nodes and let them appear under a single file system root. Besides a variety of other features, it supports the GridFtp dialect, implements the Storage Resource Manager interface (SRM V1) and can be linked against the CERN GFAL software layer. These abilities make dCache a perfect Storage...Go to contribution page
-
Edward Moyse29/09/2004, 16:50The event data model (EDM) of the ATLAS experiment is presented. For large collaborations like the ATLAS experiment, common interfaces and data objects are a necessity to ensure easy maintenance and coherence of the experiment's software platform over a long period of time. The ATLAS EDM improves commonality across the detector subsystems and subgroups such as trigger, test beam...Go to contribution page
-
E. Neilsen (FERMI NATIONAL ACCELERATOR LABORATORY)29/09/2004, 16:50The lattice gauge theory community produces large volumes of data. Because the data produced by completed computations form the basis for future work, the maintenance of archives of existing data and metadata describing the provenance, generation parameters, and derived characteristics of that data is essential not only as a reference, but also as a basis for future work. Development of...Go to contribution page
-
F. Carena (CERN)29/09/2004, 16:50The Experiment Control System (ECS) is the top level of control of the ALICE experiment. Running an experiment implies performing a set of activities on the online systems that control the operation of the detectors. In ALICE, online systems are the Trigger, the Detector Control Systems (DCS), the Data-Acquisition System (DAQ) and the High-Level Trigger (HLT). The ECS provides a...Go to contribution page
-
G. Carcassi (BROOKHAVEN NATIONAL LABORATORY)29/09/2004, 17:10We present a work-in-progress system, called GUMS, which automates the processes of Grid user registration and management and supports policy-aware authorization as well. GUMS builds on existing VO management tools (LDAP VO, VOMS and VOMRS) with a local grid user management system and a site database which stores user credentials, accounting history and policies in XML format. We use...Go to contribution page
-
M. Case (UNIVERSITY OF CALIFORNIA, DAVIS)29/09/2004, 17:10The CMS Detector Description Database (DDD) consists of a C++ API and an XML based detector description language. DDD is used by the CMS simulation (OSCAR), reconstruction (ORCA), and visualization (IGUANA) as well by test beam software that relies on those systems. The DDD is a sub-system within the COBRA framework of the CMS Core Software. Management of the XML is currently done using a...Go to contribution page
-
D. Liko (CERN)29/09/2004, 17:10The unprecedented size and complexity of the ATLAS TDAQ system requires a comprehensive and flexible control system. Its role ranges from the so-called run control, e.g. starting and stopping the data taking, to error handling and fault tolerance. It also includes initialisation and verification of the overall system. Following the traditional approach, a hierarchical system of...Go to contribution page
-
Dr M. Steinke (Ruhr Universitaet Bochum)29/09/2004, 17:10In the past year, BaBar has shifted from using Objectivity to using ROOT I/O as the basis for our primary event store. This shift required a total reworking of Kanga, our ROOT-based data storage format. We took advantage of this opportunity to ease the use of the data by supporting multiple access modes that make use of many of the analysis tools available in ROOT. Specifically, our...Go to contribution page
-
Richard Mount (SLAC)29/09/2004, 17:10
-
29/09/2004, 17:10Track 5 - Distributed Computing Systems and Experiences, oral presentation. This talk describes the various stages of ATLAS Data Challenge 2 (DC2) concerning the usage of resources deployed via NorduGrid's Advanced Resource Connector (ARC). It also describes the integration of these resources with the ATLAS production system using the Dulcinea executor. ATLAS Data Challenge 2 (DC2), run in 2004, was designed to be a step forward in the distributed data...Go to contribution page
-
T. Perelmutov (FERMI NATIONAL ACCELERATOR LABORATORY)29/09/2004, 17:10Storage Resource Managers (SRMs) are middleware components whose function is to provide dynamic space allocation and file management on shared storage components on the Grid. SRMs support protocol negotiation and a reliable replication mechanism. The SRM standard allows independent institutions to implement their own SRMs, thus allowing for uniform access to heterogeneous storage...Go to contribution page
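The protocol-negotiation idea mentioned above — the client lists the transfer protocols it can speak and the SRM answers with a transfer URL in one it supports — can be sketched schematically. This is an illustration of the concept, not the SRM V1 API; the endpoints and protocol names are hypothetical.

```python
# Transfer endpoints a hypothetical storage element supports, by protocol.
SUPPORTED = {
    "gsiftp": "gsiftp://se.example.org:2811",
    "dcap": "dcap://se.example.org:22125",
}

def negotiate_turl(path, client_protocols):
    """Return a transfer URL using the first client protocol the server supports."""
    for proto in client_protocols:
        if proto in SUPPORTED:
            return SUPPORTED[proto] + path
    raise ValueError("no mutually supported transfer protocol")

# A client that prefers rfio but also speaks gsiftp gets a gsiftp TURL:
print(negotiate_turl("/store/run123/file.root", ["rfio", "gsiftp"]))
```

The point of the indirection is that the logical file name stays storage-neutral; only the negotiated transfer URL is protocol-specific.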
-
A. Amorim (FACULTY OF SCIENCES OF THE UNIVERSITY OF LISBON)29/09/2004, 17:30The size and complexity of the present HEP experiments represent an enormous effort in the persistency of data. These efforts imply a tremendous investment in the databases field, not only for the event data but also for the data needed to qualify it: the Conditions Data. In the present document we'll describe the strategy for addressing the Conditions Data problem in the...Go to contribution page
-
G. Watts (UNIVERSITY OF WASHINGTON)29/09/2004, 17:30The DZERO collider experiment logs much of its data acquisition monitoring information in long-term storage. This information is most frequently used to understand shift history and efficiency. Approximately two kilobytes of information are stored every 15 seconds. We describe this system and the web interface provided. The current system is distributed, running on Linux for the back end...Go to contribution page
-
Y. Iida (HIGH ENERGY ACCELERATOR RESEARCH ORGANIZATION)29/09/2004, 17:30The Belle experiment has accumulated an integrated luminosity of more than 240 fb-1 so far, and the daily logged luminosity now exceeds 800 pb-1. These numbers correspond to more than 1 PB of raw and processed data stored on tape and an accumulation of raw data at the rate of 1 TB/day. The processed, compactified data, together with Monte Carlo simulation data for the final physics...Go to contribution page
-
Ian FISK (FNAL)29/09/2004, 17:30Current grid development projects are being designed such that they require end users to be authenticated under the auspices of a "recognized" organization, called a Virtual Organization (VO). A VO must establish resource-usage agreements with grid resource providers. The VO is responsible for authorizing its members for grid computing privileges. The individual sites and resources...Go to contribution page
-
Dr S. Wynhoff (PRINCETON UNIVERSITY)29/09/2004, 17:30We report on the software for Object-oriented Reconstruction for CMS Analysis, ORCA. It is based on the Coherent Object-oriented Base for Reconstruction, Analysis and simulation (COBRA) and used for digitization and reconstruction of simulated Monte-Carlo events as well as testbeam data. For the 2004 data challenge the functionality of the software has been extended to store...Go to contribution page
-
I. Gaponenko (LAWRENCE BERKELEY NATIONAL LABORATORY)29/09/2004, 17:50A new, completely redesigned Condition/DB was deployed in BaBar in October 2002. It replaced the old database software used through the first three and a half years of data taking. The new software addresses the performance and scalability limitations of the original database. However, this major redesign brought in a new model of the metadata, brand new technology- and implementation-...Go to contribution page
-
29/09/2004, 17:50A key feature of Grid systems is the sharing of resources among multiple Virtual Organizations (VOs). The sharing process needs a policy framework to manage resource access and usage. Policy frameworks generally exist for farms or local systems only, but for Grid environments a general, distributed policy system is necessary. Generally VOs and local systems have...Go to contribution page
-
Dr J. Katzy (DESY, HAMBURG)29/09/2004, 17:50During the years 2000 and 2001 the HERA machine and the H1 experiment performed substantial luminosity upgrades. To cope with the increased demands on data handling an effort was made to redesign and modernize the analysis software. Main goals were to lower turn-around time for physics analysis by providing a single framework for data storage, event selection, physics analysis and...Go to contribution page
-
L. Magnoni (INFN-CNAF)29/09/2004, 17:50Within a Grid the possibility of managing storage space is fundamental, in particular, before and during application execution. On the other hand, the increasing availability of highly performant computing resources raises the need for fast and efficient I/O operations and drives the development of parallel distributed file systems able to satisfy these needs granting access to distributed...Go to contribution page
-
L. Abadie (CERN)29/09/2004, 17:50The aim of the LHCb configuration database is to store all the controllable devices of the detector. The experiment's control system (which uses PVSS) will configure, start up and monitor the detector from the information in the configuration database. The database will contain devices with their properties, connectivity and hierarchy. The ability to rapidly store and retrieve huge amounts...Go to contribution page
-
T.M. Steinbeck (KIRCHHOFF INSTITUTE OF PHYSICS, RUPRECHT-KARLS-UNIVERSITY HEIDELBERG, for the Alice Collaboration)29/09/2004, 18:10The Alice High Level Trigger (HLT) cluster is foreseen to consist of 400 to 500 dual SMP PCs at the start-up of the experiment. The software running on these PCs will consist of components communicating via a defined interface, allowing flexible software configurations. During Alice's operation the HLT has to be continuously active to avoid detector dead time. To ensure that the...Go to contribution page
-
C. Pruneau (WAYNE STATE UNIVERSITY)29/09/2004, 18:10We present the design and performance analysis of a new event reconstruction chain deployed for analysis of STAR data acquired during the 2004 run and beyond. The creation of this new chain involved the elimination of obsolete FORTRAN components, and the development of equivalent or superior modules written in C++. The new reconstruction chain features a new and fast TPC cluster finder,...Go to contribution page
-
A. Valassi (CERN)29/09/2004, 18:10The Conditions Database project has been launched to implement a common persistency solution for experiment conditions data in the context of the LHC Computing Grid (LCG) Persistency Framework. Conditions data, such as calibration, alignment or slow control data, are non-event experiment data characterized by the fact that they vary in time and may have different versions. The LCG...Go to contribution page
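The defining property of conditions data stated above — values that vary in time and carry versions — makes retrieval an interval-of-validity (IOV) lookup: "the calibration valid at time t under tag v". A minimal sketch of that lookup with invented data (illustrative only, not the LCG Conditions Database API):

```python
import bisect

# Per version tag: (start_time, payload) pairs, sorted by start time.
# Each payload is valid from its start until the next entry's start (illustrative).
conditions = {
    "calib-v1": [(0, 0.95), (100, 0.97), (250, 0.96)],
}

def lookup(tag, t):
    """Return the payload whose interval of validity contains time t."""
    iovs = conditions[tag]
    starts = [start for start, _ in iovs]
    i = bisect.bisect_right(starts, t) - 1   # last interval starting at or before t
    if i < 0:
        raise KeyError("no condition valid at time %r" % t)
    return iovs[i][1]

print(lookup("calib-v1", 170))
```

Adding a new tag leaves old tags untouched, which is what lets an analysis reproduce earlier results by pinning the tag it was run with.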
-
S. Veseli (Fermilab)29/09/2004, 18:10The SAMGrid Database Server encapsulates several important services, such as accessing file metadata and replica catalog, keeping track of the processing information, as well as providing the runtime support for SAMGrid station services. Recent deployment of the SAMGrid system for CDF has resulted in unification of the database schema used by CDF and D0, and the complexity of changes...Go to contribution page
-
M. Paterno (FERMILAB)30/09/2004, 08:30As Fermilab's representatives to the C++ standardization effort, we have been promoting directions of special interest to the physics community. We here report on selected recent developments toward the next revision of the C++ Standard. Topics will include standardization of random number and special function libraries, as well as core language issues promoting improved run-time...Go to contribution page
-
Fabiola Gianotti (CERN)30/09/2004, 09:00The LHC software will be confronted with unprecedented challenges as soon as the LHC turns on. We summarize the main software requirements coming from the LHC detectors, triggers and physics, and we discuss several examples of software components developed by the experiments and the LCG project (simulation, reconstruction, etc.), their validation, and their adequacy for LHC physics.Go to contribution page
-
David Stickland (CERN)30/09/2004, 09:30The LHC experiments are undertaking various data challenges in the run-up to the completion of their computing models and the submission of the experiments' and the LHC Computing Grid (LCG) Technical Design Reports (TDRs) in 2005. In this talk we summarize the current working LHC computing models, identifying their similarities and differences. We summarize the results and...Go to contribution page
-
A. CERVERA VILLANUEVA (University of Geneva)30/09/2004, 10:00We have developed a C++ software package, called "RecPack", which allows the reconstruction of dynamic trajectories in any experimental setup. The basic utility of the package is the fitting of trajectories in the presence of random and systematic perturbations to the system (multiple scattering, energy loss, inhomogeneous magnetic fields, etc.) via a Kalman Filter fit. It also...Go to contribution page
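At its core, a Kalman Filter fit processes measurements sequentially, updating the state estimate and its variance at each step. A minimal one-dimensional sketch of that update cycle (a constant scalar state with Gaussian measurement noise — RecPack itself handles the full multidimensional case with propagation between surfaces):

```python
def kalman_fit(measurements, meas_var, x0=0.0, p0=1e6):
    """Sequentially update the state estimate x and its variance p.

    x0, p0: prior state and (deliberately huge) prior variance, so the
    first measurement dominates. Illustrative sketch, not the RecPack API.
    """
    x, p = x0, p0
    for m in measurements:
        gain = p / (p + meas_var)   # Kalman gain: trust in the new measurement
        x = x + gain * (m - x)      # state update towards the measurement
        p = (1.0 - gain) * p        # variance shrinks with each measurement
    return x, p

x, p = kalman_fit([1.02, 0.98, 1.05, 0.95], meas_var=0.01)
print(round(x, 3))
```

For a constant state this reduces to a running weighted mean; the power of the filter in track fitting is that the same predict/update cycle still works when the state is propagated through material and field between measurements.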
-
30/09/2004, 10:00Building a state-of-the-art high energy physics detector like CMS requires strict interoperability and coherency in the design and construction of all sub-systems comprising the detector. This issue is especially critical for the many database components that are planned for storage of the various categories of data related to the construction, operation, and maintenance of the...Go to contribution page
-
F. Gray (UNIVERSITY OF CALIFORNIA, BERKELEY)30/09/2004, 10:00The muCap experiment at the Paul Scherrer Institut (PSI) will measure the rate of muon capture on the proton to a precision of 1% by comparing the apparent lifetimes of positive and negative muons in hydrogen. This rate may be related to the induced pseudoscalar weak form factor of the proton. Superficially, the muCap apparatus looks something like a miniature model of a collider...Go to contribution page
-
A. Di Meglio (CERN)30/09/2004, 10:00Software Configuration Management (SCM) Patterns and the Continuous Integration method are recent and powerful techniques to enforce a common software engineering process across large, heterogeneous, rapidly changing development projects where a rapid release lifecycle is required. In particular the Continuous Integration method allows tracking and addressing problems in the...Go to contribution page
-
W. Waltenberger (HEPHY VIENNA)30/09/2004, 10:00The state of the art in fitting particle tracks to a common vertex is the Kalman technique. This least-squares (LS) estimator is known to be ideal in the case of perfect assignment of tracks to vertices and perfectly known Gaussian errors. Experimental data and detailed simulations always depart from this perfect model. The imperfections can be expected to be larger in high...Go to contribution page
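The abstract's point is that a pure LS estimator is optimal only under perfect track-to-vertex assignment and Gaussian errors; a common robustification is to iteratively down-weight tracks far from the current estimate. A toy one-dimensional version of such adaptive reweighting (not the authors' algorithm; the hard cutoff and the median starting point are arbitrary choices for the sketch):

```python
def adaptive_mean(zs, cutoff=3.0, iterations=5):
    """Iteratively reweighted mean: points far from the estimate lose weight."""
    est = sorted(zs)[len(zs) // 2]   # start from the median, which is robust
    for _ in range(iterations):
        weights = [1.0 if abs(z - est) < cutoff else 0.0 for z in zs]
        if sum(weights) == 0:
            break
        est = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    return est

# Four tracks from a vertex near z = 0 plus one mis-assigned track at z = 40.
zs = [0.1, -0.2, 0.05, 0.0, 40.0]
print(round(adaptive_mean(zs), 3))   # near 0, unlike the plain mean
```

The plain LS mean of this sample sits near z = 8, pulled entirely by the single mis-assigned track; the reweighted estimate stays near the true vertex.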
-
30/09/2004, 10:00In addition to the well-known challenges of computing and data handling at LHC scales, LHC experiments have also approached the scalability limit of manual management and control of the steering parameters ("primary numbers") provided to their software systems. The laborious task of detector description benefits from the implementation of a scalable relational database approach. We...Go to contribution page
-
A. Undrus (BROOKHAVEN NATIONAL LABORATORY, USA)30/09/2004, 10:00Software testing is a difficult, time-consuming process that requires technical sophistication and proper planning. This is especially true for the large-scale software projects of High Energy Physics where constant modifications and enhancements are typical. The automated nightly testing is the important component of NICOS, NIghtly COntrol System, that manages the multi-platform nightly...Go to contribution page
-
M. Stoufer (LAWRENCE BERKELEY NATIONAL LAB)30/09/2004, 10:00As any software project grows in both its collaborative and mixed codebase nature, current tools like CVS and Maven start to sag under the pressure of complex sub-project dependencies and versioning. A developer-wide failure in mastery of these tools will inevitably lead to an unrecoverable instability of a project. Even keeping a single software project stable in a large collaborative...Go to contribution page
-
Mr V. Onuchin (CERN, IHEP)30/09/2004, 10:00Carrot is a scripting module for the Apache webserver. Based on the ROOT framework, it has a number of powerful features, including the ability to embed C++ code into HTML pages, run interpreted and compiled C++ macros, send and execute C++ code on remote web servers, browse and analyse the remote data located in ROOT files with the web browser, access and manipulate databases, and...Go to contribution page
-
A. Zaytsev (BUDKER INSTITUTE OF NUCLEAR PHYSICS)30/09/2004, 10:00CMD-3 is the general purpose cryogenic magnetic detector for the VEPP-2000 electron-positron collider, which is being commissioned at the Budker Institute of Nuclear Physics (BINP, Novosibirsk, Russia). The main aspects of the physics program of the experiment are the study of known and the search for new vector mesons, the study of the ppbar and nnbar production cross sections in the vicinity of the threshold and...Go to contribution page
-
K. Rabbertz (UNIVERSITY OF KARLSRUHE)30/09/2004, 10:00For data analysis in an international collaboration it is important to have an efficient procedure to distribute, install and update the centrally maintained software. This is even more true when not only locally but also grid accessible resources are to be exploited. A practical solution will be presented that has been successfully employed for CMS software installations on systems...Go to contribution page
-
M.S. Mennea (UNIVERSITY & INFN BARI)30/09/2004, 10:00This document will review the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this subdetector (more than 50 million channels organized in 17000 modules, each of which is a complete detector), the standard CMS visualisation tools (IGUANA and IGUANACMS) that provide basic 3D capabilities and...Go to contribution page
-
M.G. Pia (INFN GENOVA)30/09/2004, 10:00A Toolkit for Statistical Data Analysis has been recently released. Thanks to this novel software system, for the first time an ample set of sophisticated algorithms for the comparison of data distributions (goodness of fit tests) is made available to the High Energy Physics community in an open source product. The statistical algorithms implemented belong to two sets, for the...Go to contribution page
-
D. KLOSE (Universidade de Lisboa, Portugal)30/09/2004, 10:00Conditions Databases are beginning to be widely used in the ATLAS experiment. Conditions data are time-varying data describing the state of the detector used to reconstruct the event data. This includes all sorts of slowly evolving data like detector alignment, calibration, monitoring and data from Detector Control System (DCS). In this paper we'll present the interfaces between the...Go to contribution page
-
Mr W. Waltenberger (Austrian Academy of Sciences // Institute of High Energy Physics)30/09/2004, 10:00A proposal is made for the design and implementation of a detector-independent vertex reconstruction toolkit and interface to generic objects (VERTIGO). The first stage aims at re-using existing state-of-the-art algorithms for geometric vertex finding and fitting by both linear (Kalman filter) and robust estimation methods. Prototype candidates for the latter are a wide range of...Go to contribution page
-
Dr E. Chabanat (IN2P3)30/09/2004, 10:00CMS and other LHC experiments pose a new challenge for vertex reconstruction: the development of efficient algorithms for high-luminosity beam collisions. We present here a new vertex finding algorithm: Deterministic Annealing (DA). This algorithm comes from information theory by analogy to statistical physics and has already been used in clustering and classification...Go to contribution page
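The annealing idea can be illustrated with a 1-D toy (our sketch, not the CMS implementation; all names are hypothetical): track z positions are softly assigned to candidate vertices with Boltzmann weights, and the assignments harden as the temperature 1/beta is lowered.

```python
import math

def da_vertices(z_tracks, prototypes, betas=(0.1, 1.0, 10.0, 100.0), iters=20):
    """Anneal candidate vertex positions over an increasing-beta schedule."""
    mu = list(prototypes)
    for beta in betas:                           # annealing schedule
        for _ in range(iters):
            num = [0.0] * len(mu)
            den = [0.0] * len(mu)
            for z in z_tracks:
                w = [math.exp(-beta * (z - m) ** 2) for m in mu]
                s = sum(w)
                for k in range(len(mu)):
                    p = w[k] / s                 # soft assignment of track z to vertex k
                    num[k] += p * z
                    den[k] += p
            mu = [num[k] / den[k] for k in range(len(mu))]
    return sorted(mu)

# six toy track z positions from two vertices near z = 0 and z = 5
found = da_vertices([-0.1, 0.0, 0.1, 4.9, 5.0, 5.1], [1.0, 4.0])
```

At low beta the prototypes feel all tracks; at high beta the assignments are effectively hard and the prototypes settle on the two cluster means.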
-
30/09/2004, 10:00A simultaneous track finding/fitting procedure based on a Kalman filtering approach has been developed for the forward muon spectrometer of the ALICE experiment. In order to improve the performance of the method in the high-background conditions of heavy ion collisions, the "canonical" Kalman filter has been modified and supplemented by a "smoother" part. It is shown that the resulting...Go to contribution page
-
V M. Moreira do Amaral (UNIVERSITY OF MANNHEIM)30/09/2004, 10:00There is a permanent quest for user friendliness in HEP analysis. This growing need is directly proportional to the complexity of the analysis frameworks' interfaces. In fact, the user is provided with an analysis framework that makes use of a General Purpose Language to program the query algorithms. Usually the user finds this overwhelming, since he or she is presented with the complexity of...Go to contribution page
-
Dr S. Pardi (DIPARTIMENTO DI MATEMATICA ED APPLICAZIONI "R.CACCIOPPOLI")30/09/2004, 10:00The algorithms for the detection of gravitational waves are usually very complex due to the low signal to noise ratio. In particular the search for signals coming from coalescing binary systems can be very demanding in terms of computing power, like in the case of the classical Standard Matched Filter Technique. To overcome this problem, we tested a Dynamic Matched Filter Technique,...Go to contribution page
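The classical matched filter named above can be sketched generically (a toy illustration of the technique, not the actual pipeline; all names are ours): correlate the data stream with the expected waveform at every offset and take the peak of the correlation as the detection statistic.

```python
def matched_filter(data, template):
    """Return (best_offset, best_score) of the template inside the data stream."""
    n, m = len(data), len(template)
    best_offset, best_score = 0, float("-inf")
    for k in range(n - m + 1):                    # slide template over data
        score = sum(data[k + i] * template[i] for i in range(m))
        if score > best_score:
            best_offset, best_score = k, score
    return best_offset, best_score

template = [1.0, -1.0, 1.0]
data = [0.0, 0.1, 1.0, -1.0, 1.0, 0.0]            # template buried at offset 2
```

Real searches bank many templates over the source parameters, which is exactly why the computing cost grows so quickly.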
-
M.G. Pia (INFN GENOVA)30/09/2004, 10:00The adoption of a rigorous software process is well known to represent a key factor for the quality of the software product and the most effective usage of the human resources available to a software project. The Unified Process, in particular its commercial packaging known as the RUP (Rational Unified Process) has been one of the most widely used software process models in the...Go to contribution page
-
B. White (STANFORD LINEAR ACCELERATOR CENTER (SLAC))30/09/2004, 10:00The Electron Gamma Shower (EGS) Code System at SLAC is designed to simulate the flow of electrons, positrons and photons through matter at a wide range of energies. It has a large user base among the high-energy physics community and is often used as a teaching tool through a Web interface that allows program input and output. Our work aims to improve the user interaction and shower...Go to contribution page
-
S. Guatelli (INFN Genova, Italy)30/09/2004, 10:00The study of the effects of space radiation on astronauts is an important concern of space missions for the exploration of the Solar System. The radiation hazard to the crew is critical to the feasibility of interplanetary manned missions. To protect the crew, shielding must be designed, the environment must be anticipated and monitored, and a warning system must be put in place. A...Go to contribution page
-
J. Hrivnac (LAL)30/09/2004, 10:00GraXML is the framework for manipulation and visualization of 3D geometrical objects in space. The full framework consists of the GraXML toolkit, libraries implementing Generic and Geometric Models and end-user interactive front-ends. GraXML Toolkit provides a foundation for operations on 3D objects (both detector elements and events). Each external source of 3D data is...Go to contribution page
-
A. Valassi (CERN)30/09/2004, 10:00The migration of the Harp data and software from an Objectivity- based to an Oracle-based data storage solution is reviewed in this presentation. The project, which was successfully completed in January 2004, involved three distinct phases. In the first phase, which profited significantly from the previous COMPASS data migration project, 30 TB of Harp raw event data were migrated in...Go to contribution page
-
T. Baron (CERN)30/09/2004, 10:00The CHEP 2004 conference is using the Integrated Digital Conferencing product to manage part of its web site and the processes to run the conference. This software has been built in the framework of the InDiCo European Project. It is designed to be generic and extensible, with the goal of supporting the management of single seminars as well as large conferences. Partly developed at CERN within...Go to contribution page
-
E. Poinsignon (CERN)30/09/2004, 10:00The External Software Service of the LCG SPI project provides open source and public domain packages required by the LCG projects and experiments. Presently, more than 50 libraries and tools are provided for a set of platforms decided by the architect forum. All packages are installed following a standard procedure and are documented on the web. A set of scripts has been developed...Go to contribution page
-
M. Stavrianakou (FNAL)30/09/2004, 10:00The CMS Geant4-based Simulation Framework, Mantis, is a specialization of the COBRA framework, which implements the CMS OO architecture. Mantis, which is the basis for the CMS-specific simulation program OSCAR, provides the infrastructure for the selection, configuration and tuning of all essential simulation elements: geometry construction, sensitive detector and magnetic field...Go to contribution page
-
N. Graf (SLAC)30/09/2004, 10:00We discuss techniques used to access legacy event generators from modern simulation environments. Examples will be given of our experience within the linear collider community accessing various FORTRAN-based generators from within a Java environment. Coding to a standard interface and use of shared object libraries enables runtime selection of generators, and allows for extension of...Go to contribution page
-
Dr M. Biglietti (UNIVERSITY OF MICHIGAN)30/09/2004, 10:00At the LHC the 40 MHz bunch crossing rate dictates a high selectivity of the ATLAS Trigger system, which has to keep the full physics potential of the experiment in spite of a limited storage capability. The level-1 trigger, implemented in custom hardware, will reduce the initial rate to 75 kHz and is followed by the software-based level-2 and Event Filter, usually referred to as the High Level...Go to contribution page
-
A. Schmidt (Institut fuer Experimentelle Kernphysik, Karlsruhe University, Germany)30/09/2004, 10:00At CHEP03 we introduced "Physics Analysis eXpert" (PAX), a C++ toolkit for advanced physics analyses in High Energy Physics (HEP) experiments. PAX introduces a new level of abstraction beyond detector reconstruction and provides a general, persistent container model for HEP events. Physics objects like four-vectors, vertices and collisions can easily be stored, accessed and manipulated....Go to contribution page
-
O. Link (CERN, PH/SFT)30/09/2004, 10:00Twisted trapezoids are important components in the LAr end cap calorimeter of the Atlas detector. A similar solid, the so-called twisted tubs, consists of two end planes, inner and outer hyperboloidal surfaces, and twisted surfaces, and is an indispensable component for cylindrical drift chambers (see K. Hoshina et al, Computer Physics Communications 153 (2003) 373-391). In Geant3...Go to contribution page
-
G B. Barrand (CNRS / IN2P3 / LAL)30/09/2004, 10:00OpenPAW is for people who definitely do not want to quit the PAW command prompt, but nevertheless seek an implementation based on more modern technologies. We shall present the OpenScientist/Lab/opaw program that offers a PAW command prompt by using the OpenScientist tools (C++, Inventor for graphics, Rio for I/O, OnX for the GUI, etc.). The OpenScientist/Lab...Go to contribution page
-
S. Schmid (ETH Zurich)30/09/2004, 10:00LHC experiments have large amounts of software to build. CMS has studied ways to shorten project build times using parallel and distributed builds as well as improved ways to decide what to rebuild. We have experimented with making idle desktop and server machines easily available as a virtual build cluster using distcc and zeroconf. We have also tested variations of ccache and more...Go to contribution page
-
C. Jones (CORNELL UNIVERSITY)30/09/2004, 10:00A common task for a reconstruction/analysis system is to be able to output different sets of events to different permanent data stores (e.g. files). This allows multiple related logical jobs to be grouped into one process and run using the same input data (read from a permanent data store and/or created from an algorithm). In our system, physicists can specify multiple output 'paths',...Go to contribution page
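The multiple-output-path idea can be sketched as follows (a minimal toy of our own, not the actual framework code; `OutputPath` and the event dictionaries are hypothetical): each path pairs a filter predicate with its own destination store, and every event is offered to every path.

```python
class OutputPath:
    """One destination store with its own event filter."""
    def __init__(self, name, accepts):
        self.name = name
        self.accepts = accepts       # filter predicate: event -> bool
        self.store = []              # stands in for a file / permanent data store

def process(events, paths):
    """Run every event past every path; each path keeps what it accepts."""
    for event in events:
        for path in paths:
            if path.accepts(event):
                path.store.append(event)

# two logical jobs sharing one input stream
muons  = OutputPath("muons",  lambda e: e["n_mu"] >= 1)
dimuon = OutputPath("dimuon", lambda e: e["n_mu"] >= 2)
process([{"n_mu": 0}, {"n_mu": 1}, {"n_mu": 2}], [muons, dimuon])
```

Grouping related selections this way means the input is read and reconstructed once, however many output sets are written.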
-
J. Hrivnac (LAL)30/09/2004, 10:00There are two kinds of analysis objects with respect to their persistency requirements: * Objects which need direct access to the persistency service only for their IO operations (read/write/update/...): histograms, clouds, profiles, ... All persistency requirements for those objects can be implemented by standard Transient-Persistent Separation techniques like JDO, Serialization,...Go to contribution page
-
Dr S. Cucciarelli (CERN)30/09/2004, 10:00The Pixel Detector is the innermost one in the tracking system of the Compact Muon Solenoid (CMS) experiment. It provides the most precise measurements, not only supporting full track reconstruction but also allowing standalone reconstruction, which is especially useful for online event selection at the High-Level Trigger (HLT). The performance of the Pixel Detector is given. The HLT...Go to contribution page
-
V. Kuznetsov (CORNELL UNIVERSITY)30/09/2004, 10:00The Linux operating system has become the platform of choice in the HEP community. However, the migration process from another operating system to Linux can be a tremendous effort for developers and system administrators. The ultimate goal of such a transition is to maximize agreement between the final results of identical calculations on the different platforms. Apart from the fine tuning of...Go to contribution page
-
I. Reguero (CERN, IT DEPARTMENT), J A. Lopez-Perez (CERN, IT DEPARTMENT)30/09/2004, 10:00Our goal is twofold. On the one hand, we wanted to address the interest of CMS users in having the LCG Physics analysis environment on Solaris. On the other hand, we wanted to assess the difficulty of porting code, written on Linux without particular attention to portability, to other Unix implementations. Our initial assumption was that the difficulty would be manageable even for a very small team....Go to contribution page
-
M.G. Pia (INFN GENOVA)30/09/2004, 10:00The Geant4 Toolkit provides an ample set of alternative and complementary physics models to handle the electromagnetic interactions of leptons, photons, charged hadrons and ions. Because of the critical role often played by simulation in the experimental design and physics analysis, an accurate validation of the physics models implemented in Geant4 is essential, down to the...Go to contribution page
-
W. Lavrijsen (LBNL)30/09/2004, 10:00A software bus, just like its hardware equivalent, allows for the discovery, installation, configuration, loading, unloading, and run-time replacement of software components, as well as channeling of inter-component communication. Python, a popular open-source programming language, encourages a modular design in software written in it, but offers little or no component functionality....Go to contribution page
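A minimal flavour of such a software bus can be sketched in Python itself (our illustration of the concept, not the LBNL implementation; `SoftwareBus` and its methods are hypothetical): components are imported by name at run time, registered under a logical name, and replaceable without restarting the host.

```python
import importlib

class SoftwareBus:
    """Toy component registry with run-time loading and replacement."""
    def __init__(self):
        self._components = {}

    def load(self, name, module_name, attr):
        """Import `module_name` at run time and register `attr` under `name`."""
        module = importlib.import_module(module_name)
        self._components[name] = getattr(module, attr)

    def replace(self, name, component):
        self._components[name] = component   # run-time component replacement

    def call(self, name, *args, **kwargs):
        """Channel a call through the bus to whichever component is installed."""
        return self._components[name](*args, **kwargs)

bus = SoftwareBus()
bus.load("checksum", "zlib", "crc32")        # a stdlib function as a "component"
```

Callers only ever name the logical component, so swapping the implementation behind "checksum" is invisible to them.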
-
V. Onuchin (CERN, IHEP)30/09/2004, 10:00The RDBC (ROOT DataBase Connectivity) library is a C++ implementation of the Java Database Connectivity Application Programming Interface. It provides a DBMS-independent interface to relational databases from ROOT as well as a generic SQL database access framework. RDBC also extends the ROOT TSQL abstract interface. Currently it is used in two large experiments: - in Minos as...Go to contribution page
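The DBMS-independent idea can be illustrated with Python's own DB-API as an analogy (RDBC itself is C++ on top of ROOT; this sketch and its names are ours): client code is written against a generic connect/cursor/execute contract, so any conforming driver can be swapped in.

```python
import sqlite3

def run_query(connect, dsn, sql, params=()):
    """Open a connection via the supplied driver, run one query, return all rows."""
    conn = connect(dsn)                  # any DB-API-style connect function
    try:
        cur = conn.cursor()
        cur.execute(sql, params)
        return cur.fetchall()
    finally:
        conn.close()

# sqlite3 stands in here for whatever backend the abstraction hides
rows = run_query(sqlite3.connect, ":memory:", "SELECT 1 + 1")
```

Only the `connect` callable and the DSN name the backend; the query code itself never does.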
-
C. ARNAULT (CNRS)30/09/2004, 10:00Since its introduction in 1999, CMT is now used as a production tool in many large software projects for physics research (ATLAS, LHCb, Virgo, Auger, Planck). Although its basic concepts remain unchanged since the beginning, proving their viability, it is still improving and increasing its coverage of the configuration management mechanisms. Two important evolutions have recently been...Go to contribution page
-
G B. Barrand (CNRS / IN2P3 / LAL)30/09/2004, 10:00Rio (for ROOT IO) is a rewriting of the file IO system of ROOT. We shall present our strong motivations for doing this tedious work. We shall present the main choices made in the Rio implementation (in opposition to what we do not like in ROOT). For example, we shall say why we believe that an IO package is not a drawing package (no TClass::Draw); why someone should use...Go to contribution page
-
30/09/2004, 10:00The ROOT geometry package is a tool designed for building, browsing, tracking and visualizing a detector geometry. The code is independent of any external simulation Monte Carlo and therefore does not contain any physics-related constraints. However, the package defines a number of hooks for tracking, such as media, materials, magnetic field or track state flags, in order to allow...Go to contribution page
-
I. Antcheva (CERN)30/09/2004, 10:00The GUI is a very important component of the ROOT framework. Its main purpose is to improve usability and the end-user experience. In this paper, we present two main projects in this direction: the ROOT graphics editor and the ROOT GUI builder. The ROOT graphics editor is a recent addition to the framework. It provides a state-of-the-art, intuitive way to create or edit objects...Go to contribution page
-
P. Nevski (BROOKHAVEN NATIONAL LABORATORY)30/09/2004, 10:00The ATLAS detector is a sophisticated multi-purpose detector with over 10 million electronics channels designed to study high-pT physics at the LHC. Due to their high multiplicity, reaching almost a hundred thousand particles per event, heavy ion collisions pose a formidable computational challenge. A set of tools has been created to realistically simulate and fully reconstruct the most...Go to contribution page
-
E. Tcherniaev (CERN)30/09/2004, 10:00This paper discusses some key points in the organization of the HARP software. In particular it describes the configuration of the packages, data and code management, testing and release procedures. Development of the HARP software is based on incremental releases with strict respect of the design structure. This poses serious challenges to the software management, which has gone...Go to contribution page
-
C. Jones (CORNELL UNIVERSITY)30/09/2004, 10:00Generic programming as exemplified by the C++ standard library makes use of functions or function objects (objects that accept function syntax) to specialize generic algorithms for particular uses. Such separation improves code reuse without sacrificing efficiency. We employed this same technique in our combinatoric engine: DChain. In DChain, physicists combine lists of child particles...Go to contribution page
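The technique can be sketched in Python (DChain itself is C++; this toy and its names are ours): a generic combination engine takes one particle per child list and a user-supplied function object that accepts or rejects each candidate parent.

```python
from itertools import product

def combine(selector, *child_lists):
    """Build parent candidates from one particle per child list, keeping
    those the user-supplied function object accepts."""
    parents = []
    for children in product(*child_lists):
        candidate = {"children": children,
                     "mass": sum(c["mass"] for c in children)}  # toy "kinematics"
        if selector(candidate):
            parents.append(candidate)
    return parents

kplus   = [{"id": "K+",  "mass": 0.494}]
piminus = [{"id": "pi-", "mass": 0.140}, {"id": "pi-", "mass": 0.139}]
in_window = lambda cand: 0.6320 < cand["mass"] < 0.6335   # toy mass window
d0_candidates = combine(in_window, kplus, piminus)
```

The engine stays generic; all physics specialization lives in the selector, which is the separation the abstract credits for code reuse without loss of efficiency.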
-
A. Pfeiffer (CERN, PH/SFT)30/09/2004, 10:00In the context of the LHC Computing Grid (LCG) project, the Applications Area develops and maintains that part of the physics applications software and associated infrastructure that is shared among the LHC experiments. The Physicist Interface (PI) project of the LCG Application Area encompasses the interfaces and tools by which physicists will directly use the software. In...Go to contribution page
-
Mei Ye30/09/2004, 10:00This article describes the simulation of the read-out subsystem of the BESIII data acquisition system. Given the goals of BESIII, the event rate will be about 4000 Hz, with a data rate of up to 50 Mbytes/sec after the Level 1 trigger. The read-out subsystem consists of read-out crates and a read-out computer whose principal function is to collect event...Go to contribution page
-
Dr G. Folger (CERN)30/09/2004, 10:00Geant4 is a toolkit for the simulation of the passage of particles through matter. Amongst its applications are hadronic calorimeters of LHC detectors and simulation of radiation environments. For these types of simulation, a good description of secondaries generated by inelastic interactions of primary nucleons and pions is particularly important. The Geant4 Binary Cascade is a...Go to contribution page
-
Dr M. Whalley (IPPP, UNIVERSITY OF DURHAM)30/09/2004, 10:00We will describe the plans and objectives of the recently funded PPARC(UK) e-science project, the Combined E-Science Data Analysis Resource for High Energy Physics (CEDAR), which will combine the strengths of the well-established and widely used HEPDATA library of HEP data and the innovative JETWEB Data/Monte Carlo comparison facility, built on the HZTOOL package, which exploits...Go to contribution page
-
Vakhtang Tsulaia30/09/2004, 10:00The ATLAS Detector consists of several major subsystems: an inner detector composed of pixels, microstrip detectors and a transition radiation tracker; electromagnetic and hadronic calorimetry; and a muon spectrometer. Over the last year, these systems have been described in terms of a set of geometrical primitives known as GeoModel. Software components for detector description interpret...Go to contribution page
-
Dr V. Tioukov (INFN NAPOLI)30/09/2004, 10:00OPERA is a massive lead/emulsion target for a long-baseline neutrino oscillation search. More than 90% of the useful experimental data in OPERA will be produced by the scanning of emulsion plates with automatic microscopes. The main goal of the data processing in OPERA will be the search for, analysis and identification of primary and secondary vertices produced by neutrinos in...Go to contribution page
-
L. Nellen (I. DE CIENCIAS NUCLEARES, UNAM)30/09/2004, 10:00The Pierre Auger Observatory consists of two sites with several semi-autonomous detection systems. Each component, and in some cases each event, provides a preferred coordinate system for simulation and analysis. To avoid a proliferation of coordinate systems in the offline software of the Pierre Auger Observatory, we have developed a geometry package that allows the treatment of...Go to contribution page
-
Y. Perrin (CERN)30/09/2004, 10:00A web portal has been developed, in the context of the LCG/SPI project, in order to coordinate workflow and manage information in large software projects. It is a development of the GNU Savannah package and offers a range of services to every hosted project: Bug / support / patch trackers, a simple task planning system, news threads, and a download area for software releases. Features...Go to contribution page
-
R. Brun (CERN)30/09/2004, 10:00The ROOT linear algebra package has been invigorated. The hierarchical structure has been improved, allowing different flavors of matrices, like dense and symmetric. A fairly complete set of matrix decompositions has been added to support matrix inversions and solving linear equations. The package has been extensively compared to other algorithms for its accuracy and...Go to contribution page
-
S. Albrand (LPSC)30/09/2004, 10:00The Tag Collector is a web interfaced database application for release management. The tool is tightly coupled to CVS, and also to CMT, the configuration management tool. Developers can interactively select the CVS tags to be included in a build, and the complete build commands are produced automatically. Other features are provided such as verification of package CMT requirements files,...Go to contribution page
-
A. Salzburger (UNIVERSITY OF INNSBRUCK)30/09/2004, 10:00The ATLAS reconstruction software requires extrapolation to arbitrarily oriented surfaces of different types inside a non-uniform magnetic field. In addition, multiple scattering and energy loss effects along the propagated trajectories have to be taken into account. Good performance with respect to computing time is crucial due to the hit and track multiplicity at high luminosity...Go to contribution page
-
D. Klose (Universidade de Lisboa, Portugal)30/09/2004, 10:00A common LCG architecture for the Conditions Database for the time evolving data enables the possibility to separate the interval-of- validity (IOV) information from the conditions data payload. The two approaches can be beneficial in different cases and separation presents challenges for efficient knowledge discovery, navigation and data visualization. In our paper we describe the...Go to contribution page
-
Dimitri Gladkov30/09/2004, 10:00The design, implementation and performance of the ZEUS Global Tracking Trigger (GTT) Forward Algorithm are described. The ZEUS GTT Forward Algorithm integrates track information from the ZEUS Micro Vertex Detector (MVD) and forward Straw Tube Tracker (STT) to provide a picture of the event topology in the forward direction ($1.5<\eta<3$) of the ZEUS detector. This region is...Go to contribution page
-
Dr E. Gerchtein (CMU)30/09/2004, 10:00Long-lived charged hyperons, $\Xi$ and $\Omega$, are capable of travelling significant distances, producing hits in the silicon detector before decaying into $\Lambda^0 \pi$ and $\Lambda^0 K$ pairs, respectively. This gives a unique opportunity of reconstructing hyperon tracks. We have developed a dedicated "outside-in" tracking algorithm that is seeded by 4-momentum and decay vertex...Go to contribution page
-
C. Leggett (LAWRENCE BERKELEY NATIONAL LABORATORY)30/09/2004, 10:00It is essential to provide users with transparent access to time-varying data, such as detector misalignments, calibration parameters and the like. This data should be automatically updated, without user intervention, whenever it changes. Furthermore, the user should be able to be notified whenever a particular datum is updated, so as to perform actions such as re-caching of compound results,...Go to contribution page
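The callback mechanism described above can be sketched as a simple observer pattern (our minimal Python illustration, not the ATLAS code; all names are hypothetical): clients register interest in a conditions datum and are notified whenever a new value is installed.

```python
class ConditionsStore:
    """Toy time-varying data store with update notification."""
    def __init__(self):
        self._data = {}
        self._callbacks = {}

    def register(self, key, callback):
        """Ask to be notified whenever `key` receives new conditions data."""
        self._callbacks.setdefault(key, []).append(callback)

    def update(self, key, value):
        """Install a new value and notify every registered client."""
        self._data[key] = value
        for cb in self._callbacks.get(key, []):
            cb(key, value)

    def get(self, key):
        return self._data[key]

log = []
store = ConditionsStore()
store.register("pixel_alignment", lambda k, v: log.append((k, v)))
store.update("pixel_alignment", {"dx": 0.02})   # clients re-cache here
```

In a real system the update would be driven by the interval-of-validity machinery rather than an explicit call, but the notification flow is the same.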
-
30/09/2004, 10:00Validation of hadronic physics processes of the Geant4 simulation toolkit is a very important task to ensure adequate physics results for the experiments being built at the Large Hadron Collider. We report on simulation results obtained using the Geant4 Bertini cascade double-differential production cross-sections for various target materials and incident hadron kinetic energies between...Go to contribution page
-
Mr A. Kulikov (Joint Institute for Nuclear Research, Dubna, Russia.)30/09/2004, 10:00Using modern 3D visualization software and hardware to represent the object models of HEP detectors creates impressive pictures of events and detailed views of the detectors, facilitating the design, simulation, data analysis and representation of the huge amount of information flooding modern HEP experiments. In this paper we present the work made by members...Go to contribution page
-
T. Todorov (CERN/IReS)30/09/2004, 10:00The simulation, reconstruction and analysis software access to the magnetic field has large impact both on CPU performance and on accuracy. An approach based on a volume geometry is described. The volumes are constructed in such a way that their boundaries correspond to field discontinuities, which are due to changes in magnetic permeability of the materials. The field in each...Go to contribution page
-
Bo Anders Ynnerman (Linköping)30/09/2004, 11:00This talk gives a brief overview of recent development of high performance computing and Grid initiatives in the Nordic region. Emphasis will be placed on the technology and policy demands posed by the integration of general purpose supercomputing centers into Grid environments. Some of the early experiences of bridging national eBorders in the Nordic region will also be...Go to contribution page
-
Peter Clarke30/09/2004, 11:30The global network is more than ever taking its role as the great "enabler" for many branches of science and research. Foremost amongst such science drivers is of course the LHC/LCG programme, although there are several other sectors with growing demands of the network. Common to all of these is the realisation that a straightforward over provisioned best efforts wide area IP...Go to contribution page
-
F. Fluckiger (CERN)30/09/2004, 12:00The Architectural Principles of the Internet have dominated the past decade. Orthogonal to the telecommunications industry principles, they dramatically changed the networking landscape because they relied on iconoclastic ideas. First, the Internet end-to-end principle, which stipulates that the network should intervene minimally on the end-to-end traffic, pushing the complexity to the...Go to contribution page
-
B. White (SLAC)30/09/2004, 14:00During a recent visit to SLAC, Tim Berners-Lee challenged the High Energy Physics community to identify and implement HEP resources to which Semantic Web technologies could be applied. This challenge comes at a time when a number of other scientific disciplines (for example, bioinformatics and chemistry) have taken a strong initiative in making information resources compatible with...Go to contribution page
-
I. Antcheva (CERN)30/09/2004, 14:00Designing a usable, visually attractive GUI is somewhat more difficult than it appears at first glance. The users, the GUI designers and the programmers are three important parties in this process, and each needs a comprehensive view of the application goals, as well as of the steps that have to be taken to meet the application requirements successfully. The...Go to contribution page
-
30/09/2004, 14:00Track 5 - Distributed Computing Systems and Experiences. The SETI@HOME project has proven to be one of the biggest successes of distributed computing in recent years. With a quite simple approach, SETI manages to process huge amounts of data using a vast amount of distributed computer power. To extend the generic usage of this kind of distributed computing tool, BOINC (Berkeley Open Infrastructure for Network Computing) is being...Go to contribution page
-
N. Neumeister (CERN / HEPHY VIENNA)30/09/2004, 14:00The CMS detector has a sophisticated four-station muon system made up of tracking chambers (Drift Tubes, Cathode Strip Chambers) and dedicated trigger chambers. A muon reconstruction software based on Kalman filter techniques has been developed which reconstructs muons in the standalone muon system, using information from all three types of muon detectors, and links the resulting muon...Go to contribution page
-
H. Newman (Caltech)30/09/2004, 14:00Wide area networks of sufficient, and rapidly increasing end-to-end capability are vital for every phase of high energy physicists' work. Our bandwidth usage, and the typical capacity of the major national backbones and intercontinental links used by our field have progressed by a factor of more than 1000 over the past decade, and the outlook is for a similar increase over the next...Go to contribution page
-
30/09/2004, 14:00The ATLAS experiment uses a tiered data Grid architecture that enables possibly overlapping subsets, or replicas, of the original set to be located across the ATLAS collaboration. The full set of experiment data is located at a single Tier 0 site, and then subsets of the data are located at national Tier 1 sites, smaller subsets at smaller regional Tier 2 sites, and so on. In order to...Go to contribution page
-
I. Hrivnacova (IPN, ORSAY, FRANCE)30/09/2004, 14:00In order for physicists to easily benefit from the different existing geometry tools used within the community, the Virtual Geometry Model (VGM) has been designed. In the VGM we introduce abstract interfaces to geometry objects and an abstract factory for geometry construction, import and export. The interfaces to geometry objects were defined to be suitable to describe "geant-like"...Go to contribution page
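The abstract-factory idea behind such a design can be sketched as follows (VGM is C++; this Python toy and all its names are our own assumptions): backend-independent client code builds geometry through the abstract factory, and concrete factories map the calls onto a particular geometry model.

```python
from abc import ABC, abstractmethod

class GeometryFactory(ABC):
    """Abstract factory: the only interface client code sees."""
    @abstractmethod
    def create_box(self, name, dx, dy, dz): ...

class Geant4LikeFactory(GeometryFactory):
    def create_box(self, name, dx, dy, dz):
        # a real implementation would build a backend-specific solid here
        return {"backend": "geant4", "solid": "box", "name": name,
                "half_lengths": (dx, dy, dz)}

class RootLikeFactory(GeometryFactory):
    def create_box(self, name, dx, dy, dz):
        return {"backend": "root", "solid": "box", "name": name,
                "half_lengths": (dx, dy, dz)}

def build_world(factory):
    """Backend-independent geometry construction code."""
    return factory.create_box("world", 10.0, 10.0, 10.0)
```

Swapping the concrete factory retargets the whole construction, which is also what makes import/export between geometry models possible.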
-
G. Lo Re (INFN & CNAF Bologna)30/09/2004, 14:20The next generation of high energy physics experiments planned at the CERN Large Hadron Collider is so demanding in terms of both computing power and mass storage that data and CPUs cannot be concentrated in a single site and will be distributed on a computational Grid according to a "multi-tier" model. LHC experiments are made up of several thousand people from a few hundred institutes...Go to contribution page
-
P E. Tissot-Daguette (CERN)30/09/2004, 14:20Track 5 - Distributed Computing Systems and Experiences. The AliEn system, an implementation of the Grid paradigm developed by the ALICE Offline Project, is currently being used to produce and analyse Monte Carlo data at over 30 sites on four continents. The AliEn Web Portal is built around Open Source components with a backend based on Grid Services and compliant with the OGSA model. An easy and intuitive presentation layer gives the...Go to contribution page
-
J. Hrivnac (LAL), 30/09/2004, 14:20. Aspect-Oriented Programming (AOP) is a new paradigm promising to allow further modularization of large software frameworks, like those developed in HEP. Such frameworks often manifest several orthogonal axes of contracts (Crosscutting Concerns - CC), leading to complex multidependencies. Currently used programming languages and development methodologies do not make it easy to identify and...
-
I. Kisel (UNIVERSITY OF HEIDELBERG, KIRCHHOFF INSTITUTE OF PHYSICS), 30/09/2004, 14:20. A typical central Au-Au collision in the CBM experiment (GSI, Germany) will produce up to 700 tracks in the inner tracker. The large track multiplicity, together with the presence of a nonhomogeneous magnetic field, makes event reconstruction complicated. A cellular automaton method is used to reconstruct tracks in the inner tracker. The cellular automaton algorithm creates short track segments...
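The cellular-automaton idea in this abstract can be sketched in a few lines: link hits on adjacent layers into short segments, let each segment's counter grow to one plus the best compatible upstream counter, then read off the longest chain. This is a toy 1D illustration with invented hit positions and an invented neighbour criterion, not the CBM implementation:

```python
# Toy cellular-automaton track finder.  Hits on consecutive detector
# layers are linked into segments (layer, hit_a, hit_b); a CA step
# grows each segment's counter; the longest chain is read out.
# Illustrative sketch only -- not the CBM code.

def build_segments(layers, max_slope=1.5):
    """Link hits on adjacent layers into segments (i, a, b)."""
    segs = []
    for i in range(len(layers) - 1):
        for a in layers[i]:
            for b in layers[i + 1]:
                if abs(b - a) <= max_slope:      # crude neighbour criterion
                    segs.append((i, a, b))
    return segs

def evolve(segs):
    """CA step: a segment's counter is 1 + best compatible upstream counter."""
    counter = {s: 1 for s in segs}
    changed = True
    while changed:
        changed = False
        for (i, a, b) in segs:
            best = 0
            for (j, c, d) in segs:
                if j == i - 1 and d == a:        # segments share a hit
                    best = max(best, counter[(j, c, d)])
            new = 1 + best if best else 1
            if new > counter[(i, a, b)]:
                counter[(i, a, b)] = new
                changed = True
    return counter

def longest_track(layers):
    segs = build_segments(layers)
    counter = evolve(segs)
    seg = max(segs, key=lambda s: counter[s])    # highest counter = chain end
    track = [seg]
    while counter[seg] > 1:                      # walk back along the chain
        i, a, b = seg
        seg = max((s for s in segs if s[0] == i - 1 and s[2] == a),
                  key=lambda s: counter[s])
        track.append(seg)
    return list(reversed(track))
```

With four layers holding one straight track plus two noise hits, the chain of three segments along the track is recovered while the noise hits attach to nothing.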
-
Jerome LAURET (BROOKHAVEN NATIONAL LABORATORY), 30/09/2004, 14:20. While many success stories can be told as a product of the Grid middleware developments, most of the existing systems relying on workflow and job execution are based on the integration of self-contained production systems interfacing with a given scheduling component or portal, or directly use the base components of the Grid middleware (globus-job-run, globus-job-submit). However, such systems...
-
M. Sutton (UNIVERSITY COLLEGE LONDON), 30/09/2004, 14:20. The current design, implementation and performance of the ZEUS global tracking trigger barrel algorithm are described. The ZEUS global tracking trigger integrates track information from the ZEUS central tracking chamber (CTD) and micro vertex detector (MVD) to obtain a global picture of the track topology in the ZEUS detector at the second level trigger stage. Algorithm processing is...
-
C. Tull (LBNL/ATLAS), 30/09/2004, 14:40. In this paper we will discuss how Aspect-Oriented Programming (AOP) can be used to implement and extend the functionality of HEP architectures in areas such as performance monitoring, constraint checking, debugging and memory management. AOP is the latest evolution in the line of technology for functional decomposition which includes Structured Programming (SP) and Object-Oriented...
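The "performance monitoring as a crosscutting concern" idea mentioned here can be illustrated without an AOP language: in Python, a decorator plays the role of advice woven around otherwise unrelated functions, so the timing concern lives in one place instead of being scattered through every method. A sketch, not the authors' implementation (the `CALL_STATS` registry and `reconstruct` function are invented for the example):

```python
import functools
import time

# A timing "aspect" applied via a decorator: call counts and wall time
# are accumulated in one registry, keeping the monitoring concern out
# of the monitored code (a Python stand-in for AOP advice).
CALL_STATS = {}

def timed(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            stats = CALL_STATS.setdefault(fn.__name__, {"calls": 0, "total": 0.0})
            stats["calls"] += 1
            stats["total"] += time.perf_counter() - t0
    return wrapper

@timed
def reconstruct(event):
    return sum(event)          # stand-in for real reconstruction work
```

The monitored function's callers are unchanged; removing the aspect is a one-line edit, which is precisely the modularity argument AOP makes.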
-
Dr S. Ravot (Caltech), 30/09/2004, 14:40. In this paper we describe the current state of the art in equipment, software and methods for transferring large scientific datasets at high speed around the globe. We first present a short introductory history of the use of networking in HEP, some details on the evolution, current status and plans for the Caltech/CERN/DataTAG transatlantic link, and a description of the topology and...
-
N. Graf (SLAC), 30/09/2004, 14:40. We describe a Java toolkit for full event reconstruction and analysis. The toolkit is currently being used for detector design and physics analysis for a future e+e- linear collider. The components are fully modular and are available for tasks from digitization of tracking detector signals through to cluster finding, pattern recognition, fitting, jet finding, and analysis. We...
-
R. Cavanaugh (UNIVERSITY OF FLORIDA), 30/09/2004, 14:40. A grid consists of high-end computational, storage, and network resources that, while known a priori, are dynamic with respect to activity and availability. Efficient co-scheduling of requests to use grid resources must adapt to this dynamic environment while meeting administrative policies. We discuss the necessary requirements of such a scheduler and introduce a distributed...
-
Julia ANDREEVA (CERN), 30/09/2004, 14:40. Track 5 - Distributed Computing Systems and Experiences, oral presentation. The ARDA project was started in April 2004 to support the four LHC experiments (ALICE, ATLAS, CMS and LHCb) in the implementation of individual production and analysis environments based on the EGEE middleware. The main goal of the project is to allow fast feedback between the experiment and the middleware development teams via the construction and usage of end-to-end...
-
V. Tsulaia (UNIVERSITY OF PITTSBURGH), 30/09/2004, 14:40. The GeoModel toolkit is a library of geometrical primitives that can be used to describe detector geometries. The toolkit is designed as a data layer, and especially optimized in order to be able to describe large and complex detector systems with minimum memory consumption. Some of the techniques used to minimize the memory consumption are: shared instancing with reference counting,...
-
S. MUZAFFAR (NorthEastern University, Boston, USA), 30/09/2004, 15:00. This paper describes recent developments in the IGUANA (Interactive Graphics for User ANAlysis) project. IGUANA is a generic framework and toolkit, used by CMS and D0, to build a variety of interactive applications such as detector and event visualisation and interactive GEANT3 and GEANT4 browsers. IGUANA is a freely available toolkit based on open-source components including...
-
30/09/2004, 15:00. The R-GMA (Relational Grid Monitoring Architecture) was developed within the EU DataGrid project to bring the power of SQL to an information and monitoring system for the grid. It provides producer and consumer services to both publish and retrieve information from anywhere within a grid environment. Users within a Virtual Organization may define their own tables dynamically into...
-
Vincenzo Innocente (CERN), 30/09/2004, 15:00. Bitmap indices have gained wide acceptance in data warehouse applications handling large amounts of read-only data. High-dimensional ad hoc queries can be performed efficiently by utilizing bitmap indices, especially if the queries cover only a subset of the attributes stored in the database. Such access patterns are in common use in HEP analysis. Bitmap indices have been implemented by...
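The core of the bitmap-index technique described above fits in a few lines: for each attribute value, keep a bitmap marking the rows that hold it, so a conjunctive ad hoc query reduces to bitwise AND of the relevant bitmaps. A minimal sketch using Python integers as bitsets (the class and attribute names are invented for the example):

```python
from collections import defaultdict

# Toy bitmap index: maps[attr][value] is an integer whose bit i is set
# iff row i has that value, so an ad hoc conjunctive query is just a
# bitwise AND over the chosen attribute bitmaps.
class BitmapIndex:
    def __init__(self):
        self.maps = defaultdict(lambda: defaultdict(int))
        self.nrows = 0

    def add_row(self, row):
        """row: dict of attribute -> value."""
        for attr, value in row.items():
            self.maps[attr][value] |= 1 << self.nrows
        self.nrows += 1

    def query(self, **conditions):
        """Row numbers matching all attr=value conditions."""
        result = (1 << self.nrows) - 1          # start with all rows
        for attr, value in conditions.items():
            result &= self.maps[attr].get(value, 0)
        return [i for i in range(self.nrows) if result >> i & 1]
```

Queries touching only a subset of attributes never scan the untouched columns, which is exactly the access pattern the abstract points to for HEP analysis.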
-
M. Ballintijn (MIT), 30/09/2004, 15:00. Track 5 - Distributed Computing Systems and Experiences, oral presentation. The Parallel ROOT Facility, PROOF, enables a physicist to analyze and understand very large data sets on an interactive time scale. It makes use of the inherent parallelism in event data and implements an architecture that optimizes I/O and CPU utilization in heterogeneous clusters with distributed storage. Scaling to many hundreds of servers is essential to process tens or hundreds of...
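The "inherent parallelism in event data" that PROOF exploits follows a simple pattern: split the event list into packets, let each worker produce a partial result independently, then merge. A toy sketch of that pattern with threads standing in for PROOF workers and a histogram as the mergeable result (function names are invented for the example; this is not the PROOF API):

```python
from concurrent.futures import ThreadPoolExecutor

# PROOF-style split/analyse/merge: workers fill partial histograms
# from independent event packets, and the partials are summed bin by
# bin at the end.  A toy sketch of the pattern, not PROOF itself.

def analyse_packet(events, nbins=4, lo=0.0, hi=8.0):
    """Fill one partial histogram from a packet of event values."""
    hist = [0] * nbins
    for x in events:
        if lo <= x < hi:
            hist[int((x - lo) / (hi - lo) * nbins)] += 1
    return hist

def parallel_analyse(events, nworkers=4, packet=2):
    packets = [events[i:i + packet] for i in range(0, len(events), packet)]
    with ThreadPoolExecutor(nworkers) as pool:
        partials = list(pool.map(analyse_packet, packets))
    return [sum(col) for col in zip(*partials)]     # merge partial histograms
```

Because each packet is processed without shared state, the merge step is the only synchronization point, which is what lets the scheme scale to many workers.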
-
Dr J. Tanaka (ICEPP, UNIVERSITY OF TOKYO), 30/09/2004, 15:00. We have measured the performance of data transfer between CERN and our laboratory, ICEPP, at the University of Tokyo in Japan. The ICEPP will be one of the so-called regional centers for handling data from the ATLAS experiment, which will start data taking in 2007. Petabytes of data are expected to be generated by the experiment each year. It is therefore essential to...
-
W. Liebig (CERN), 30/09/2004, 15:00. The Athena software framework for event reconstruction in ATLAS will be employed to analyse the data from the 2004 combined test beam. In this combined test beam, a slice of the ATLAS detector is operated and read out under conditions similar to future LHC running, thus providing a test-bed for the complete reconstruction chain. First results for the ATLAS Inner Detector will be...
-
D. Adams (BNL), 30/09/2004, 15:20. Track 5 - Distributed Computing Systems and Experiences, oral presentation. The ATLAS distributed analysis (ADA) system is described. The ATLAS experiment has more than 2000 physicists from 150 institutions in 34 countries. Users, data and processing are distributed over these sites. ADA makes use of a collection of high-level web services whose interfaces are expressed in terms of AJDL (abstract job definition language), which includes descriptions of...
-
L. Moneta (CERN), 30/09/2004, 15:20. The main objective of the MathLib project is to give expertise and support to the LHC experiments on mathematical and statistical computational methods. The aim is to provide a coherent set of mathematical libraries. Users of this set of libraries are developers of experiment reconstruction and simulation software, of analysis tool frameworks such as ROOT, and physicists performing...
-
M. Sgaravatto (INFN Padova), 30/09/2004, 15:20. Resource management and scheduling of distributed, data-driven applications in a Grid environment are challenging problems. Although significant results were achieved in the past few years, the development and proper deployment of generic, reliable, standard components present issues that still need to be completely solved. Interested domains include workload management,...
-
Dr Y. Kodama (NATIONAL INSTITUTE OF ADVANCED INDUSTRIAL SCIENCE AND TECHNOLOGY (AIST)), 30/09/2004, 15:20. To achieve a stable, high-performance network flow in high bandwidth-delay product networks, it is important that the total bandwidth of multiple streams not exceed the network bandwidth. Software control of the bandwidth of each stream sometimes exceeds the specified bandwidth. We propose a hardware technique to control the total bandwidth of multiple streams with...
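The per-stream bandwidth cap discussed here is classically modelled as a token bucket: a stream may send only when enough tokens have accumulated at the configured rate. The abstract argues for doing this in hardware; the following software sketch just illustrates the capping mechanism itself (class and parameter names are invented for the example):

```python
# Token-bucket pacer: tokens (bytes of credit) accumulate at `rate`
# per time unit, capped at `capacity` (the burst size); a packet of
# nbytes may be sent only if that many tokens are available.
# A software sketch of the bandwidth-capping idea only.
class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate            # bytes of credit added per time unit
        self.capacity = capacity    # maximum burst size in bytes
        self.tokens = capacity      # start with a full bucket
        self.last = 0.0

    def allow(self, now, nbytes):
        """May a packet of nbytes be sent at time `now`?"""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False
```

Timestamps here come from the caller; the imprecision of software timers at high rates is one reason the paper moves this control into hardware.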
-
J. Drohan (University College London), 30/09/2004, 15:20. We describe the philosophy and design of Atlantis, an event visualisation program for the ATLAS experiment at CERN. Written in Java, it employs the Swing API to provide an easily configurable Graphical User Interface. Atlantis implements a collection of intuitive, data-orientated 2D projections, which enable the user to quickly understand and visually investigate complete ATLAS events...
-
Mr M. Ivanov (CERN), 30/09/2004, 15:20. Track finding and fitting algorithms for the ALICE Time Projection Chamber (TPC) and Inner Tracking System (ITS), based on Kalman filtering, are presented. The filtering algorithm is able to cope with non-Gaussian noise and ambiguous measurements in high-density environments. The tracking algorithm consists of two parts: one for the TPC and one for the prolongation into the ITS. The...
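The Kalman-filter track fit named above can be illustrated in its simplest form: a straight track y = a + b*x whose state (a, b) and covariance are updated measurement by measurement. A toy sketch with invented initial conditions, far simpler than the ALICE TPC/ITS filter (no material effects, no magnetic field):

```python
# Minimal 1D Kalman filter for a straight track y = a + b*x.
# State (a, b) with covariance (c00, c01, c11) is updated layer by
# layer from measurements (x, y) with resolution sigma.
# Illustrative only -- not the ALICE implementation.

def kalman_fit(measurements, sigma=0.1):
    """measurements: list of (x, y).  Returns the fitted (a, b)."""
    a, b = measurements[0][1], 0.0          # crude initial state
    c00, c01, c11 = 100.0, 0.0, 100.0       # loose prior on (a, b)
    R = sigma * sigma
    for x, y in measurements:
        # innovation variance S = H C H^T + R with H = [1, x]
        S = c00 + 2 * x * c01 + x * x * c11 + R
        K0 = (c00 + x * c01) / S            # Kalman gain K = C H^T / S
        K1 = (c01 + x * c11) / S
        r = y - (a + b * x)                 # residual
        a += K0 * r
        b += K1 * r
        h0, h1 = c00 + x * c01, c01 + x * c11   # C H^T
        c00, c01, c11 = c00 - K0 * h0, c01 - K0 * h1, c11 - K1 * h1
    return a, b
```

Each update costs a handful of multiplications, which is why the filter can both find and fit tracks on the fly while stepping through detector layers.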
-
F. van Lingen (CALIFORNIA INSTITUTE OF TECHNOLOGY), 30/09/2004, 15:40. Track 5 - Distributed Computing Systems and Experiences, oral presentation. In this paper we report on the implementation of an early prototype of distributed high-level services supporting grid-enabled data analysis within the LHC physics community as part of the ARDA project, within the context of the GAE (Grid Analysis Environment), and begin to investigate the associated complex behaviour of such an end-to-end system. In particular, the prototype...
-
B K. Kim (UNIVERSITY OF FLORIDA), M. Mambelli (University of Chicago), 30/09/2004, 15:40. Grid computing involves the close coordination of many different sites which offer distinct computational and storage resources to the Grid user community. The resources at each site need to be monitored continuously. Static and dynamic site information needs to be presented to the user community in a simple and efficient manner. This paper will present both the design and...
-
Dr G B. Barrand (CNRS / IN2P3 / LAL), 30/09/2004, 15:40. Panoramix is an event display for LHCb. LaJoconde is an interactive environment over DaVinci, the analysis software layer for LHCb. We shall present the global technological choices behind these two software packages: GUI, graphics, scripting, plotting. We shall present the connection to the framework (Gaudi) and how we can integrate other tools like hippodraw. We shall present the overall...
-
D. Brown (LAWRENCE BERKELEY NATIONAL LAB), 30/09/2004, 15:40. This talk will describe the new analysis computing model deployed by BaBar over the past year. The new model was designed to better support the current and future needs of physicists analyzing data, and to improve BaBar's analysis computing efficiency. The use of RootIO in the new model is described in other talks. BaBar's new analysis data content format contains both high and low...
-
M. Fischler (FERMILAB), 30/09/2004, 15:40. A new object-oriented minimization package is available via the ZOOM CVS repository. This package, designed for use in HEP applications, has all the capabilities of Minuit, but is a re-write from scratch, adhering to modern C++ design principles. A primary goal of this package is extensibility in several directions, so that its capabilities can be kept fresh with as little...
-
Mr M. Grigoriev (FERMILAB, USA), 30/09/2004, 15:40. Large, distributed HEP collaborations, such as D0, CDF and US-CMS, depend on stable and robust network paths between major world research centers. The evolving emphasis on data and compute Grids increases the reliance on network performance. Fermilab's experimental groups and network support personnel identified a critical need for WAN monitoring to ensure the quality and efficient...
-
G. Asova (DESY ZEUTHEN), 30/09/2004, 16:30. The photo injector test facility at DESY Zeuthen (PITZ) was built to develop, operate and optimize photo injectors for future free electron lasers and linear colliders. In PITZ we use a DAQ system that stores data as a collection of ROOT files, forming our database for offline analysis. Consequently, the offline analysis will be performed by a ROOT application, written at least...
-
30/09/2004, 16:30. INTAS (http://www.intas.be): the International Association for the promotion of co-operation with scientists from the New Independent States of the former Soviet Union (NIS). INTAS encourages joint activities between its INTAS Members and the NIS in all exact and natural sciences, economics, human and social sciences. INTAS supports a number of NIS participants to attend the 2004...
-
I. Legrand (CALTECH), 30/09/2004, 16:30. The MonALISA (MONitoring Agents in A Large Integrated Services Architecture) system is a scalable Dynamic Distributed Services Architecture based on the mobile code paradigm. An essential part of managing a global system like the Grid is a monitoring system able to monitor and track the many site facilities, networks, and all the tasks in progress, in real time...
-
A. FARILLA (I.N.F.N. ROMA3), 30/09/2004, 16:30. A full slice of the barrel detector of the ATLAS experiment at the LHC is being tested this year with beams of pions, muons, electrons and photons in the energy range 1-300 GeV in the H8 area of the CERN SPS. It is a challenging exercise since, for the first time, the complete software suite developed for the full ATLAS experiment has been extended for use with real detector data,...
-
E. Ronchieri (INFN CNAF), 30/09/2004, 16:30. The problem of finding the best match between jobs and computing resources is critical for efficient workload distribution in Grids. Very often jobs are preferably run on the Computing Elements (CEs) that can retrieve a copy of the input files from a local Storage Element (SE). This requires that multiple file copies are generated and managed by a data replication system. We...
-
30/09/2004, 16:30. Track 5 - Distributed Computing Systems and Experiences, oral presentation. Any physicist who will analyse data from the LHC experiments will have to deal with data and computing resources which are distributed across multiple locations and with different access methods. GANGA helps the end user by tying in specifically to the solutions for a given experiment, ranging from specification of data to retrieval and post-processing of produced output. For LHCb and ATLAS...
-
M. Donszelmann (SLAC), 30/09/2004, 16:30. WIRED 4 is an experiment-independent event display plugin module for the JAS 3 (Java Analysis Studio) generic analysis framework. Both WIRED and JAS are written in Java. WIRED, which uses HepRep (HEP Representables for Event Display) as its input format, supports viewing of events using either conventional 3D projections or specialized projections such as a fish-eye or a rho-Z...
-
30/09/2004, 16:50. A kinematic fit package was developed based on least-squares minimization with Lagrange multipliers and Kalman filter techniques, and implemented in the framework of the CMS reconstruction program. The package allows full decay chain reconstruction from final state to primary vertex according to the given decay model. The class framework allowing decay tree description on every...
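The Lagrange-multiplier part of such a fit has a compact closed form when the constraint is linear: adjust the measured values as little as possible (in units of their variances) so that the constraint holds exactly. A toy one-constraint version, with the function name and the energy-sum constraint invented for the example (the CMS package handles full decay chains):

```python
# Least-squares fit with one linear constraint via a Lagrange
# multiplier: pull measured values m (variances v) onto the surface
# sum(x) = total while minimising sum((x_i - m_i)^2 / v_i).
#
# Lagrangian: L = sum((x-m)^2 / v) + 2*lam*(sum(x) - total)
# dL/dx_i = 0     ->  x_i = m_i - lam * v_i
# constraint      ->  lam = (sum(m) - total) / sum(v)
def constrained_fit(m, v, total):
    lam = (sum(m) - total) / sum(v)
    return [mi - lam * vi for mi, vi in zip(m, v)]
```

Note that the correction each measurement absorbs is proportional to its variance: poorly measured quantities move the most, which is the defining behaviour of a kinematic fit.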
-
Andreas PFEIFFER (CERN), 30/09/2004, 16:50. CLHEP is a set of HEP-specific foundation and utility classes such as random number generators, physics vectors, and particle data tables. Although CLHEP has traditionally been distributed as one large library, the user community has long wanted to build and use CLHEP packages separately. With the release of CLHEP 1.9, CLHEP has been reorganized and enhanced to enable building and...
-
N. De Bortoli (INFN - NAPLES (ITALY)), 30/09/2004, 16:50. GridICE is a monitoring service for the Grid: it measures significant Grid-related resource parameters in order to analyze the usage, behavior and performance of the Grid, and/or to detect and notify fault situations, contract violations, and user-defined events. In its first implementation, the notification service relies on a simple model based on a pre-defined set of events. The growing...
-
P. DeMar (FERMILAB), 30/09/2004, 16:50. Advanced optical-based networks have the capacity and capability to meet the extremely large data movement requirements of particle physics collaborations. To date, research efforts in the advanced network area have primarily been focused on provisioning, dynamically configuring, and monitoring the wide area optical network infrastructure itself. Application use of these...
-
Dr J. LIST (University of Wuppertal), 30/09/2004, 16:50. Analyses in high-energy physics often involve filling large numbers of histograms from n-tuple-like data structures, e.g. ROOT trees. Even when using an object-oriented framework like ROOT, the user code often follows a functional programming approach, where booking, application of cuts, calculation of weights and histogrammed quantities, and finally the filling of the...
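The alternative this abstract hints at is declarative: declare each histogram once with its variable, cut, weight and binning, then drive all of them from a single loop over the n-tuple rows. A minimal sketch of that idea (class and parameter names are invented; this is not the ROOT-based tool the talk describes):

```python
# Declarative histogram filling: booking, cut, weight and binning are
# declared together per histogram; one event loop fills everything.
class Hist:
    def __init__(self, name, var, cut, nbins, lo, hi, weight=None):
        self.name, self.var, self.cut = name, var, cut
        self.nbins, self.lo, self.hi = nbins, lo, hi
        self.weight = weight or (lambda row: 1.0)
        self.bins = [0.0] * nbins

    def fill(self, row):
        if not self.cut(row):
            return
        x = self.var(row)
        if self.lo <= x < self.hi:
            i = int((x - self.lo) / (self.hi - self.lo) * self.nbins)
            self.bins[i] += self.weight(row)

def fill_all(hists, rows):
    """One pass over the n-tuple rows fills every declared histogram."""
    for row in rows:
        for h in hists:
            h.fill(row)
```

The event loop is written once, so adding a histogram means adding a declaration, not more loop boilerplate.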
-
N. De Filippis (UNIVERSITA' DEGLI STUDI DI BARI AND INFN), 30/09/2004, 16:50. Track 5 - Distributed Computing Systems and Experiences, oral presentation. During the CMS Data Challenge 2004 a real-time analysis was attempted at the INFN and PIC Tier-1 and Tier-2 sites in order to test the ability of the instrumented methods to quickly process the data. Several agents and automatic procedures were implemented to perform the analysis at the Tier-1/2 sites synchronously with the data transfer from the Tier-0 at CERN. The system was implemented in the Grid...
-
Dr T. Speer (UNIVERSITY OF ZURICH, SWITZERLAND), 30/09/2004, 17:10. A vertex fit algorithm was developed based on the Gaussian-sum filter (GSF) and implemented in the framework of the CMS reconstruction program. While linear least-squares estimators are optimal when all observation errors are Gaussian distributed, the GSF offers a better treatment of the non-Gaussian distribution of track parameter errors when these are modeled by Gaussian...
-
D. Smith (STANFORD LINEAR ACCELERATOR CENTER), 30/09/2004, 17:10. The BaBar experiment has migrated its event store from an Objectivity-based system to a system using ROOT files, and along with this has developed a new bookkeeping design. This bookkeeping now combines data production, quality control, event store inventory, distribution of BaBar data to sites, and user analysis in one central place, and is based on collections of data stored as...
-
R. Hughes-Jones (THE UNIVERSITY OF MANCHESTER), 30/09/2004, 17:10. How do we get high-throughput data transport to real users? The MB-NG project is a major collaboration which brings together expertise from users, industry, equipment providers and leading-edge e-science application developers. Major successes in the areas of Quality of Service (QoS) and managed bandwidth have provided a leading-edge U.K. DiffServ-enabled network running at 2.5 Gbit/s...
-
K. Wu (LAWRENCE BERKELEY NATIONAL LAB), 30/09/2004, 17:10. Track 5 - Distributed Computing Systems and Experiences, oral presentation. Nuclear and high energy physics experiments such as STAR at BNL are generating millions of files with petabytes of data each year. In most cases, analysis programs have to read all events in a file in order to find the interesting ones. Since most analyses are only interested in some subsets of events in a number of files, a significant portion of the computer time is wasted on...
-
Dr P. MATO (CERN), 30/09/2004, 17:10. Bender, the Python-based physics analysis application for LHCb, combines the best features of the underlying Gaudi C++ software architecture with the flexibility of the Python scripting language, and provides end-users with a friendly, physics-analysis-oriented environment. It is based, on the one hand, on the generic Python bindings for the Gaudi framework, called GaudiPython, and on the other hand on an...
-
E. Ronchieri (INFN CNAF), 30/09/2004, 17:10. We describe the process for handling software builds and releases for the Workload Management package of the DataGrid project. The software development in the project was shared among nine contractual partners in seven different countries, and was organized in work packages covering different areas. In this paper, we discuss how a combination of the Concurrent Versions System,...
-
M. Sanchez-Garcia (UNIVERSITY OF SANTIAGO DE COMPOSTELA), 30/09/2004, 17:30. The LHCb Data Challenge 04 includes the simulation of over 200 M events using distributed computing resources on N sites, extending over 3 months. To achieve this goal a dedicated production grid (DIRAC) has been deployed. We will present the Job Monitoring and Accounting services developed to follow the status of the production along the way and to evaluate the results at...
-
M.G. Pia (INFN GENOVA), 30/09/2004, 17:30. Statistical methods play a significant role throughout the life-cycle of HEP experiments, being an essential component of physics analysis. We present a project in progress for the development of an object-oriented software toolkit for statistical data analysis. In particular, the Statistical Comparison component of the toolkit provides algorithms for the comparison of data...
-
Mr G. Roediger (CORPORATE COMPUTER SERVICES INC. - FERMILAB), 30/09/2004, 17:30. A High Energy Physics experiment has between 200 and 1000 collaborating physicists from nations spanning the entire globe. Each collaborator brings a unique combination of interests, and each has to search through the same huge heap of messages, research results, and other communication to find what is useful. Too much scientific information is as useless as too little. It is time...
-
T. Johnson (SLAC), 30/09/2004, 17:30. Track 5 - Distributed Computing Systems and Experiences, oral presentation. The aim of the service is to allow fully distributed analysis of large volumes of data while maintaining true (sub-second) interactivity. All the Grid-related components are based on OGSA-style Grid services, and to the maximum extent use existing Globus Toolkit 3.0 (GT3) services. All transactions are authenticated and authorized using the GSI (Grid Security Infrastructure) mechanism -...
-
A. Pfeiffer (CERN, PH/SFT), 30/09/2004, 17:30. In the context of the SPI project in the LCG Application Area, a centralized software management infrastructure has been deployed. It comprises a suite of scripts handling the building and validation of the releases of the various projects, as well as providing customized packaging of the released software. Emphasis was put on the flexibility of the packaging and distribution solution, as it...
-
A. Wildauer (UNIVERSITY OF INNSBRUCK), 30/09/2004, 17:30. For physics analysis in ATLAS, reliable vertex finding and fitting algorithms are important. In the harsh environment of the LHC (~23 inelastic collisions every 25 ns) this task turns out to be particularly challenging. One of the guiding principles in developing the vertexing packages is a strong focus on modularity and defined interfaces, using the advantages of object-oriented C++...
-
G R. Moloney, 30/09/2004, 17:50. Track 5 - Distributed Computing Systems and Experiences, oral presentation. We have developed and deployed a data grid for the processing of data from the Belle experiment, and for the production of simulated Belle data. The Belle Analysis Data Grid brings together compute and storage resources across five separate partners in Australia, and the Computing Research Centre at the KEK laboratory in Tsukuba, Japan. The data processing resources are general...
-
Mr P. Galvez (CALTECH), 30/09/2004, 17:50. VRVS (Virtual Room Videoconferencing System) is a unique, globally scalable next-generation system for real-time collaboration by small workgroups and medium and large teams engaged in research, education and outreach. VRVS operates over an ensemble of national and international networks. Since it went into production service in early 1997, VRVS has become a standard part of the toolset used...
-
I. Belikov (CERN), 30/09/2004, 17:50. One of the main features of the ALICE detector at the LHC is the capability to identify particles over a very broad momentum range, from 0.1 GeV/c up to 10 GeV/c. This can be achieved only by combining, within a common setup, several detecting systems that are efficient in some narrower and complementary momentum sub-ranges. The situation is further complicated by the amount of data to be...
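Combining particle-identification information from several detecting systems is commonly done in a Bayesian way: multiply the per-detector likelihoods for each particle hypothesis by the priors and normalise. A sketch of that combination step, with invented species labels and numbers, not the ALICE code:

```python
# Bayesian PID combination: posterior(species) is proportional to
# prior(species) times the product over detectors of the likelihood
# of the observed signal under that species hypothesis.
def combine_pid(likelihoods, priors):
    """likelihoods: list of dicts {species: P(signal | species)},
    one dict per detector; priors: dict {species: prior}."""
    post = dict(priors)
    for det in likelihoods:
        for sp in post:
            # a detector with no information on a species contributes 1
            post[sp] *= det.get(sp, 1.0)
    norm = sum(post.values())
    return {sp: p / norm for sp, p in post.items()}
```

Each detector dominates in the momentum sub-range where its likelihoods separate the hypotheses well, which is how the combination covers the full 0.1 to 10 GeV/c range.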
-
E. Efstathiadis (BROOKHAVEN NATIONAL LABORATORY), 30/09/2004, 17:50. As a PPDG cross-team joint project, we proposed to study, develop, implement and evaluate a set of tools that allow Meta-Schedulers to take advantage of consistent information (such as information needed for complex decision-making mechanisms) across both local and/or Grid Resource Management Systems (RMS). We will present and define the requirements and schema by which one can...
-
V. Serbo (SLAC), 30/09/2004, 17:50. JASSimApp is a joint project of SLAC, KEK, and Naruto University to create an integrated GUI for Geant4, based on the JAS3 framework, with the ability to interactively: edit Geant4 geometry, materials, and physics processes; control Geant4 execution, locally and remotely (pass commands and receive output, control the event loop); access AIDA histograms defined in Geant4; show generated...
-
M. GALLAS (CERN), 30/09/2004, 17:50. Software Quality Assurance is an integral part of the software development process of the LCG Project, and includes several activities such as automatic testing, test coverage reports, static software metrics reports, a bug tracker, usage statistics and compliance with build, code and release policies. As part of the QA activity, all levels of the software testing should be run as...
-
A. TSAREGORODTSEV (CNRS-IN2P3-CPPM, MARSEILLE), 30/09/2004, 18:10. DIRAC is the LHCb distributed computing grid infrastructure for MC production and analysis. Its architecture is based on a set of distributed collaborating services. The service decomposition broadly follows the ARDA project proposal, allowing for the possibility of interchanging the EGEE/ARDA and DIRAC components in the future. Some components developed outside the DIRAC project are...
-
E. Vaandering (VANDERBILT UNIVERSITY), 30/09/2004, 18:10. Genetic programming is a machine learning technique, popularized by Koza in 1992, in which computer programs that solve user-posed problems are automatically discovered. Populations of programs are evaluated for their fitness in solving a particular problem. New populations of ever-increasing fitness are generated by mimicking the biological processes underlying evolution. These...
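The evolve-and-select loop described in this abstract can be sketched very compactly: individuals are expression trees, fitness is squared error against sample points, and new generations are produced by keeping an elite and mutating it. A toy sketch of the technique under those assumptions (all function names are invented; real GP systems add crossover, richer primitives and bloat control):

```python
import random

# Minimal genetic-programming loop: expression trees over x are
# evolved to fit target samples by subtree mutation with elitism.
# Trees are tuples: ('x',), ('c', value), or (op, left, right).
FUNCS = [('+', lambda a, b: a + b), ('*', lambda a, b: a * b)]

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return ('x',) if random.random() < 0.5 else ('c', random.uniform(-2, 2))
    op, _ = random.choice(FUNCS)
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree[0] == 'x':
        return x
    if tree[0] == 'c':
        return tree[1]
    return dict(FUNCS)[tree[0]](evaluate(tree[1], x), evaluate(tree[2], x))

def fitness(tree, samples):
    """Squared error over the samples; lower is fitter."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in samples)

def mutate(tree, depth=2):
    if random.random() < 0.3 or tree[0] in ('x', 'c'):
        return random_tree(depth)          # replace this subtree
    return (tree[0],) + tuple(mutate(t, depth) for t in tree[1:])

def evolve(samples, pop_size=30, generations=20):
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, samples))
        elite = pop[:pop_size // 3]        # elitism: the best survive
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=lambda t: fitness(t, samples))
```

Because the elite is carried over unchanged, the best fitness in the population can only improve from one generation to the next.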
-
A. Sill (TEXAS TECH UNIVERSITY), 30/09/2004, 18:10. Track 5 - Distributed Computing Systems and Experiences, oral presentation. To maximize the physics potential of the data currently being taken, the CDF collaboration at Fermi National Accelerator Laboratory has started to deploy user analysis computing facilities at several locations throughout the world. Over 600 users are signed up and able to submit their physics analysis and simulation applications directly from their desktop or laptop computers to these...
-
G. Eulisse (NORTHEASTERN UNIVERSITY OF BOSTON (MA) U.S.A.), 30/09/2004, 18:10. A fundamental part of software development is to detect and analyse weak spots of programs to guide optimisation efforts. We present a brief overview and usage experience of some of the most valuable open-source tools, such as valgrind and oprofile. We describe their main strengths and weaknesses as experienced by the CMS experiment. As we have found that these tools do not satisfy...
-
Mrs L. Ma (INSTITUTE OF HIGH ENERGY PHYSICS), 30/09/2004, 18:10. Network security at IHEP is becoming one of the most important issues of the computing environment. To protect its computing and network resources against attacks and viruses from outside the institute, security measures have been implemented. To enforce the security policy, the network infrastructure was re-configured into one intranet and two DMZ areas. New rules to control the...
-
Mark DONSZELMANN (Extensions to JAS), 30/09/2004, 18:10. JAS3 is a general-purpose, experiment-independent, open-source data analysis tool. JAS3 includes a variety of features, including histogramming, plotting, fitting, data access, tuple analysis, spreadsheet and event display capabilities. More complex analyses can be performed using several scripting languages (Pnuts, Jython, etc.), or by writing Java analysis classes. All of these...
-
Dr Pierre Vande Vyvre (CERN), 01/10/2004, 08:30
-
Stephen Gowdy (SLAC), 01/10/2004, 08:55
-
Philippe Canal (FNAL), 01/10/2004, 09:20
-
Massimo LAMANNA (CERN), 01/10/2004, 09:45
-
Douglas OLSON, 01/10/2004, 10:40
-
Tim Smith (CERN), 01/10/2004, 11:05
-
Peter CLARKE, 01/10/2004, 11:30
-
L. BAUERDICK (FNAL), 01/10/2004, 11:55
-
Wolfgang von Rueden (CERN/ALE), 01/10/2004, 12:25