27 September 2004 to 1 October 2004
Interlaken, Switzerland
Europe/Zurich timezone

Contribution List

423 contributions
  1. Wolfgang von Rueden (CERN)
    27/09/2004, 09:00
    Plenary Sessions
    oral presentation
  2. David Williams
    27/09/2004, 09:30
    Plenary Sessions
    oral presentation
    "Where are your Wares" Computing in the broadest sense has a long history, and Babbage (1791-1871), Hollerith (1860-1929) Zuse (1910-1995), many other early pioneers, and the wartime code breakers, all made important breakthroughs. CERN was founded as the first valve-based digital computers were coming onto the market. I will consider 50 years of Computing at CERN from the...
    Go to contribution page
  3. A. Boehnlein (FERMI NATIONAL ACCELERATOR LABORATORY)
    27/09/2004, 10:00
    Plenary Sessions
    oral presentation
    In support of the Tevatron physics program, the Run II experiments have developed computing models and hardware facilities to support data sets at the petabyte scale, currently corresponding to 500 pb-1 of data and over 2 years of production operations. The systems are complete from online data collection to user analysis, and make extensive use of central services and common solutions...
  4. N. KATAYAMA (KEK)
    27/09/2004, 11:00
    Plenary Sessions
    oral presentation
    The Belle experiment operates at the KEKB accelerator, a high-luminosity asymmetric-energy e+ e- machine. KEKB has achieved the world's highest luminosity of 1.39 × 10^34 cm^-2 s^-1. Belle accumulates more than 1 million B Bbar pairs in one good day. This corresponds to about 1.2 TB of raw data per day. The amount of raw and processed data accumulated so far exceeds 1.4 PB....
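As a back-of-envelope cross-check of the Belle figures quoted in this abstract (1 million B Bbar pairs and about 1.2 TB of raw data per good day, 1.4 PB accumulated), the implied per-pair raw data volume and the equivalent running time can be derived directly; the arithmetic below uses only the abstract's numbers:

```python
# Cross-check of the Belle data volumes quoted in the abstract above.
# All inputs are taken from the abstract; the derived quantities follow.
pairs_per_day = 1_000_000     # "more than 1 million B Bbar pairs in one good day"
raw_tb_per_day = 1.2          # "about 1.2 TB of raw data per day"

bytes_per_pair = raw_tb_per_day * 1e12 / pairs_per_day
print(round(bytes_per_pair / 1e6, 1), "MB of raw data per recorded pair")  # 1.2 MB

# 1.4 PB accumulated corresponds to over three years of such good days
days = 1.4e15 / (raw_tb_per_day * 1e12)
print(round(days), "good days of data taking")  # 1167
```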
  5. P. ELMER (Princeton University)
    27/09/2004, 11:30
    Plenary Sessions
    oral presentation
    The BaBar experiment at SLAC studies B-physics at the Upsilon(4S) resonance using the high-luminosity e+e- collider PEP-II at the Stanford Linear Accelerator Center (SLAC). Taking, processing and analyzing the very large data samples is a significant computing challenge. This presentation will describe the entire BaBar computing chain and illustrate the solutions chosen as well as...
  6. M. Purschke (Brookhaven National Laboratory)
    27/09/2004, 12:00
    Plenary Sessions
    oral presentation
    The concepts and technologies applied in data acquisition systems have changed dramatically over the past 15 years. Generic DAQ components and standards such as CAMAC and VME have largely been replaced by dedicated FPGA and ASIC boards, and dedicated real-time operating systems like OS9 or VxWorks have given way to Linux-based trigger processor and event building farms. We have also...
  7. J. Nogiec (FERMI NATIONAL ACCELERATOR LABORATORY)
    27/09/2004, 14:00
    Track 3 - Core Software
    oral presentation
    The paper describes a component-based framework for data stream processing that allows for configuration, tailoring, and run-time system reconfiguration. The system's architecture is based on a pipes and filters pattern, where data is passed through routes between components. Components process data and add, substitute, and/or remove named data items from a data stream. They can also...
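The pipes-and-filters pattern this abstract describes, components chained along a route, each adding, substituting, or removing named data items, can be illustrated with a minimal sketch; the component names and record shape below are illustrative assumptions, not the paper's API:

```python
# Minimal pipes-and-filters sketch: a record of named data items flows
# through a route of filter components, each transforming it in turn.
from typing import Callable, Dict, List

DataItem = Dict[str, object]            # a record of named data items
Filter = Callable[[DataItem], DataItem]

def make_pipeline(filters: List[Filter]) -> Filter:
    """Compose filters into one route; each component may add, substitute,
    or remove named items before passing the record downstream."""
    def run(item: DataItem) -> DataItem:
        for f in filters:
            item = f(item)
        return item
    return run

# Two hypothetical components: one derives a new item, one drops raw data
def add_checksum(item): return {**item, "checksum": sum(item.get("payload", []))}
def drop_raw(item): return {k: v for k, v in item.items() if k != "payload"}

pipeline = make_pipeline([add_checksum, drop_raw])
print(pipeline({"payload": [1, 2, 3]}))  # {'checksum': 6}
```

Run-time reconfiguration, as in the paper, then amounts to rebuilding the route from a new list of components.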
  8. G. Cancio (CERN)
    27/09/2004, 14:00
    Track 6 - Computer Fabrics
    oral presentation
    This paper describes the evolution of fabric management at CERN's T0/T1 Computing Center, from the selection and adoption of prototypes produced by the European DataGrid (EDG) project[1] to enhancements made to them. In the last year of the EDG project, developers and service managers have been working to understand and solve operational and scalability issues. CERN has adopted and...
  9. M. Branco (CERN)
    27/09/2004, 14:00
    Track 4 - Distributed Computing Services
    oral presentation
    As part of the ATLAS Data Challenges 2 (DC2), an automatic production system was introduced and with it a new data management component. The data management tools used for previous Data Challenges were built as components separate from the existing Grid middleware. These tools relied on a database of their own which acted as a replica catalog. With the extensive use of Grid technology...
  10. Dr P. Bartalini (CERN)
    27/09/2004, 14:00
    Track 2 - Event processing
    oral presentation
    In the framework of the LCG Simulation Project, we present the Generator Services Sub-project, launched in 2003 under the oversight of the LHC Monte Carlo steering group (MC4LHC). The goal of the Generator Services Subproject is to guarantee the physics generator support for the LHC experiments. Work is divided into four work packages: Generator library; Storage, event interfaces and...
  11. T.M. Steinbeck (KIRCHHOFF INSTITUTE OF PHYSICS, RUPRECHT-KARLS-UNIVERSITY HEIDELBERG, for the Alice Collaboration)
    27/09/2004, 14:00
    Track 1 - Online Computing
    oral presentation
    The Alice High Level Trigger (HLT) is foreseen to consist of a cluster of 400 to 500 dual SMP PCs at the start-up of the experiment. Its input data rate can be up to 25 GB/s. This has to be reduced to at most 1.2 GB/s before the data is sent to DAQ through event selection, filtering, and data compression. For these processing purposes, the data is passed through the cluster in...
  12. A. Ceseracciu (SLAC / INFN PADOVA)
    27/09/2004, 14:00
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    The Event Reconstruction Control System of the BaBar experiment was redesigned in 2002 to satisfy two major requirements: flexibility and scalability. Because of its very nature, this system is continuously maintained to implement the changing policies typical of a complex, distributed production environment. In 2003, a major revolution in the BaBar computing model, the...
  13. Tomasz WLODEK (BNL)
    27/09/2004, 14:20
    Track 6 - Computer Fabrics
    oral presentation
    This presentation describes the experiences and the lessons learned by the RHIC/ATLAS Computing Facility (RACF) in building and managing its 2,700+ CPU (and growing) Linux Farm over the past 6+ years. We describe how hardware cost, end-user needs, infrastructure, footprint, hardware configuration, vendor selection, software support and other considerations have played a role in...
  14. Dr F. Beaudette (CERN)
    27/09/2004, 14:20
    Track 2 - Event processing
    oral presentation
    An object-oriented FAst MOnte-Carlo Simulation (FAMOS) has recently been developed for CMS to allow rapid analyses of all final states envisioned at the LHC while keeping a high degree of accuracy for the detector material description and the related particle interactions. For example, the simulation of the material effects in the tracker layers includes charged particle energy loss by...
  15. M. Ernst (DESY)
    27/09/2004, 14:20
    Track 4 - Distributed Computing Services
    oral presentation
    The LHC needs to achieve reliable high performance access to vastly distributed storage resources across the network. USCMS has worked with Fermilab-CD and DESY-IT on a storage service that was deployed at several sites. It provides Grid access to heterogeneous mass storage systems and synchronization between them. It increases resiliency by insulating clients from storage and network...
  16. J. Andreeva (UC Riverside)
    27/09/2004, 14:20
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    One of the goals of the CMS Data Challenge in March-April 2004 (DC04) was to run reconstruction for a sustained period at a 25 Hz input rate, with distribution of the produced data to CMS T1 centers for further analysis. The reconstruction was run at the T0 using CMS production software, of which the main components are RefDB (the CMS Monte Carlo 'Reference Database' with a Web interface) and McRunjob...
  17. M. Sutton (UNIVERSITY COLLEGE LONDON)
    27/09/2004, 14:20
    Track 1 - Online Computing
    oral presentation
    The architecture and performance of the ZEUS Global Track Trigger (GTT) are described. Data from the ZEUS silicon Micro Vertex Detector's HELIX readout chips, corresponding to 200k channels, are digitized by 3 crates of ADCs, and PowerPC VME board computers push cluster data for second-level trigger processing and strip data for event building via Fast and Gigabit Ethernet network...
  18. R. Chytracek (CERN)
    27/09/2004, 14:20
    Track 3 - Core Software
    oral presentation
    This paper describes the component model that has been developed in the context of the LCG/SEAL project. This component model is an attempt to handle the increasing complexity of the current data processing applications of the LHC experiments. In addition, it should facilitate software re-use through the integration of LCG and non-LCG software components into the experiment's...
  19. A. Di Mattia (INFN)
    27/09/2004, 14:40
    Track 1 - Online Computing
    oral presentation
    The Atlas Level-2 trigger provides a software-based event selection after the initial Level-1 hardware trigger. For muon events, the selection is decomposed into a number of broad steps: first, the Muon Spectrometer data are processed to give physics quantities associated with the muon track (standalone feature extraction); then other detector data are used to refine the extracted...
  20. G. Battistoni (INFN Milano, Italy)
    27/09/2004, 14:40
    Track 2 - Event processing
    oral presentation
    The FLUKA Monte Carlo transport code is being used for different applications in High Energy, Cosmic Ray and Accelerator Physics. Here we review some of the ongoing projects which are based on this simulation tool. In particular, as far as accelerator physics is concerned, we wish to summarize the work in progress for the LHC and the CNGS project. From the point of view of experimental...
  21. L. GOOSSENS (CERN)
    27/09/2004, 14:40
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    In order to validate the Offline Computing Model and the complete software suite, ATLAS is running a series of Data Challenges (DC). The main goals of DC1 (July 2002 to April 2003) were the preparation and the deployment of the software required for the production of large event samples, and the production of those samples as a worldwide distributed activity. DC2 (May 2004 until...
  22. L. Lueking (FERMILAB)
    27/09/2004, 14:40
    Track 4 - Distributed Computing Services
    oral presentation
    A high performance system has been assembled using standard web components to deliver database information to a large number (thousands?) of broadly distributed clients. The CDF Experiment at Fermilab is building processing centers around the world imposing a high demand load on their database repository. For delivering read-only data, such as calibrations, trigger information and run...
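The scalability problem this CDF abstract describes, thousands of distributed clients all reading the same read-only calibration and run data, is typically attacked with read-through caching in front of the database. A minimal sketch of the technique, with hypothetical names, not CDF's actual system:

```python
# Read-through cache sketch: only the first request for a given run
# reaches the database backend; all repeats are served from the cache.
import functools

BACKEND_CALLS = 0  # counts how often the (expensive) backend is actually hit

@functools.lru_cache(maxsize=None)
def fetch_calibration(run_number: int) -> tuple:
    """Hypothetical lookup; stands in for an HTTP call to the DB front-end."""
    global BACKEND_CALLS
    BACKEND_CALLS += 1
    return ("calib-for-run", run_number)   # stand-in for the real payload

for _ in range(1000):                      # a thousand identical client requests
    fetch_calibration(42)

print(BACKEND_CALLS)   # 1: all but the first request never touched the backend
```

Layering such caches at each processing center is what turns a single repository into a service that scales with the number of clients rather than with the database.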
  23. Don Petravick
    27/09/2004, 14:40
    Track 6 - Computer Fabrics
    oral presentation
    As part of the DOE SciDAC "National Infrastructure for Lattice Gauge Computing" project, Fermilab builds and operates production clusters for lattice QCD simulations. We currently operate three clusters: a 128-node dual Xeon Myrinet cluster, a 128-node Pentium 4E Myrinet cluster, and a 32-node dual Xeon Infiniband cluster. We will discuss the operation of these systems and examine their...
  24. S. Roiser (CERN)
    27/09/2004, 14:40
    Track 3 - Core Software
    oral presentation
    The C++ programming language has very limited capabilities for providing reflection information about its objects. In this paper a new reflection system is presented which allows complete introspection of C++ objects and has been developed in the context of the CERN/LCG/SEAL project in collaboration with the ROOT project. The reflection system consists of two different parts. The first...
  25. S. Pardi (DIPARTIMENTO DI MATEMATICA ED APPLICAZIONI "R.CACCIOPPOLI")
    27/09/2004, 15:00
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    The standard procedures for the extraction of gravitational wave signals of coalescing binaries from the output signal of an interferometric antenna may require computing power generally not available in a single computing centre or laboratory. A way to overcome this problem is to use the computing power available in different places as a single geographically...
  26. M. Ye (INSTITUTE OF HIGH ENERGY PHYSICS, ACADEMIA SINICA)
    27/09/2004, 15:00
    Track 1 - Online Computing
    oral presentation
    This article introduces an embedded Linux system based on VME-series PowerPC boards, as well as the basic method of establishing such a system. The goal of the system is to build a test system for VMEbus devices. It can also be used to set up data acquisition and control systems. Two types of compilers are provided by the development system according to the features of the system and the...
  27. Dirk Duellmann
    27/09/2004, 15:00
    Track 4 - Distributed Computing Services
    oral presentation
    While there are differences among the LHC experiments in their views of the role of databases and their deployment, there is relatively widespread agreement on a number of principles: 1. Physics codes will need access to database-resident data. The need for database access is not confined to middleware and services: physics-related data will reside in databases. 2. ...
  28. W. LAVRIJSEN (LBNL)
    27/09/2004, 15:00
    Track 3 - Core Software
    oral presentation
    Python is a flexible, powerful, high-level language with excellent interactive and introspective capabilities and a very clean syntax. As such it can be a very effective tool for driving physics analysis. Python is designed to be extensible in low-level C-like languages, and its use as a scientific steering language has become quite widespread. To this end, existing and...
  29. S. Thorn
    27/09/2004, 15:00
    Track 6 - Computer Fabrics
    oral presentation
    ScotGrid is a prototype regional computing centre formed as a collaboration between the universities of Durham, Edinburgh and Glasgow as part of the UK's national particle physics grid, GridPP. We outline the resources available at the three core sites and our optimisation efforts for our user communities. We discuss the work which has been conducted in extending the centre to embrace new...
  30. L. Pinsky (UNIVERSITY OF HOUSTON)
    27/09/2004, 15:00
    Track 2 - Event processing
    oral presentation
    The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is continually updated and improved by the authors. Here we review the progress achieved in the last year on the physics models. From the point of view of hadronic physics, most of the effort is still in the field of nucleus-nucleus...
  31. J. Rodriguez (UNIVERSITY OF FLORIDA)
    27/09/2004, 15:20
    Track 6 - Computer Fabrics
    oral presentation
    The High Energy Physics Group at the University of Florida is involved in a variety of projects, ranging from high-energy experiments at hadron and electron-positron colliders to cutting-edge computer science experiments focused on grid computing. In support of these activities, members of the Florida group have developed and deployed a local computational facility which consists of...
  32. Victor SERBO (AIDA)
    27/09/2004, 15:20
    Track 3 - Core Software
    oral presentation
    AIDA, Abstract Interfaces for Data Analysis, is a set of abstract interfaces for data analysis components: Histograms, Ntuples, Functions, Fitter, Plotter and other typical analysis categories. The interfaces are currently defined in Java, C++ and Python and implementations exist in the form of libraries and tools using C++ (Anaphe/Lizard, OpenScientist), Java (Java Analysis Studio) and...
  33. O. Smirnova (Lund University, Sweden)
    27/09/2004, 15:20
    Track 4 - Distributed Computing Services
    oral presentation
    The NorduGrid middleware, ARC, has integrated support for querying and registering to Data Indexing services such as the Globus Replica Catalog and Globus Replica Location Server. This support allows one to use these Data Indexing services, for example, for brokering during job submission, automatic registration of files, and many other things. This integrated support is complemented by a...
  34. Dr J. Apostolakis (CERN)
    27/09/2004, 15:20
    Track 2 - Event processing
    oral presentation
    Geant4 is relied upon in production by an increasing number of HEP experiments and by applications in several other fields. Its capabilities continue to be extended as its performance and modelling are enhanced. This presentation will give an overview of recent developments in diverse areas of the toolkit. These will include, amongst others, the optimisation for complex setups...
  35. P. Buncic (CERN)
    27/09/2004, 15:20
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    AliEn (ALICE Environment) is a Grid framework developed by the Alice Collaboration and used in production for almost 3 years. From the beginning, the system was constructed using Web Services, standard network protocols, and Open Source components. The main thrust of the development was on the design and implementation of an open and modular architecture. A large part of the component...
  36. G. CHEN (COMPUTING CENTER,INSTITUTE OF HIGH ENERGY PHYSICS,CHINESE ACADEMY OF SCIENCES)
    27/09/2004, 15:20
    Track 1 - Online Computing
    oral presentation
    BES is an experiment at the Beijing Electron-Positron Collider (BEPC). The BES computing environment consists of a PC/Linux cluster and relies mainly on free software. OpenPBS and Ganglia are used as the job scheduling and monitoring systems. With help from the CERN IT Division, CASTOR was implemented as the storage management system. BEPC is being upgraded and the luminosity will increase one hundred times...
  37. H. Kornmayer (FORSCHUNGSZENTRUM KARLSRUHE (FZK))
    27/09/2004, 15:40
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    The observation of high-energy gamma rays with ground-based air Cherenkov telescopes is one of the most exciting areas in modern astroparticle physics. At the end of 2003 the MAGIC telescope started operation. The low energy threshold for gamma rays, together with different background sources, leads to a considerable amount of data. The analysis will be done in different institutes...
  38. S. Canon (NATIONAL ENERGY RESEARCH SCIENTIFIC COMPUTING CENTER)
    27/09/2004, 15:40
    Track 6 - Computer Fabrics
    oral presentation
    Supporting multiple large collaborations on shared compute farms has typically resulted in divergent requirements from the users on the configuration of these farms. As the frameworks used by these collaborations are adapted to use Grids, this issue will likely have a significant impact on the effectiveness of Grids. To address these issues, a method was developed at Lawrence Berkeley...
  39. J-P. Baud (CERN)
    27/09/2004, 15:40
    Track 4 - Distributed Computing Services
    oral presentation
    LCG-2 is the collective name for the set of middleware released for use on the LHC Computing Grid in December 2003. This middleware, based on LCG-1, already included several improvements in the Data Management area. These included the introduction of the Grid File Access Library (GFAL), a POSIX-like I/O interface, along with MSS integration via the Storage Resource...
  40. H. Essel (GSI)
    27/09/2004, 15:40
    Track 3 - Core Software
    oral presentation
    The GSI online-offline analysis system Go4 is a ROOT based framework for medium energy ion- and nuclear physics experiments. Its main features are a multithreaded online mode with a non-blocking Qt GUI, and abstract user interface classes to set up the analysis process itself which is organised as a list of subsequent analysis steps. Each step has its own event objects and a processor...
  41. A. Ribon (CERN)
    27/09/2004, 15:40
    Track 2 - Event processing
    oral presentation
    In the framework of the LCG Simulation Physics Validation Project, we present comparison studies between the GEANT4 and FLUKA shower packages and LHC sub-detector test-beam data. Emphasis is given to the response of LHC calorimeters to electrons, photons, muons and pions. Results of "simple-benchmark" studies, where the above simulation packages are compared to data from nuclear...
  42. H-J. Mathes (FORSCHUNGSZENTRUM KARLSRUHE, INSTITUT FÜR KERNPHYSIK)
    27/09/2004, 15:40
    Track 1 - Online Computing
    oral presentation
    S. Argirò (1), A. Kopmann (2), O. Martineau (2), H.-J. Mathes (2) for the Pierre Auger Collaboration; (1) INFN, Sezione Torino; (2) Forschungszentrum Karlsruhe. The Pierre Auger Observatory currently under construction in Argentina will investigate extensive air showers at energies above 10^18 eV. It consists of a ground array of 1600 Cherenkov water detectors and 24 fluorescence...
  43. S. Burke (Rutherford Appleton Laboratory)
    27/09/2004, 16:30
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    The European DataGrid (EDG) project ran from 2001 to 2004, with the aim of producing middleware which could form the basis of a production Grid, and of running a testbed to demonstrate the middleware. HEP experiments (initially the four LHC experiments and subsequently BaBar and D0) were involved from the start in specifying requirements, and subsequently in evaluating the performance...
  44. I. Sourikova (BROOKHAVEN NATIONAL LABORATORY)
    27/09/2004, 16:30
    Track 1 - Online Computing
    oral presentation
    To benefit from substantial advancements in Open Source database technology and ease deployment and development concerns with Objectivity/DB, the Phenix experiment at RHIC is migrating its principal databases from Objectivity to a relational database management system (RDBMS). The challenge of designing a relational DB schema to store a wide variety of calibration classes was ...
  45. P. DeMar (FNAL)
    27/09/2004, 16:30
    Track 6 - Computer Fabrics
    oral presentation
    Management of a large site network such as the FNAL LAN presents many technical and organizational challenges. This highly dynamic network consists of around ten thousand nodes. The nature of the activities FNAL is involved in, and its computing policy, require that the network remain as open as reasonably possible, both in terms of connectivity to outside networks and with respect...
  46. G B. Barrand (CNRS / IN2P3 / LAL)
    27/09/2004, 16:30
    Track 3 - Core Software
    oral presentation
    We present the status of this project. After briefly recalling the basic choices around GUI, visualization and scripting, we describe what has been done in order to have an AIDA-3.2.1 compliant system, to visualize Geant4 data (G4Lab module), to visualize ROOT data (Mangrove module), to have a HippoDraw module, and to run on MacOSX...
  47. Prof. V. Ivantchenko (CERN, ESA)
    27/09/2004, 16:30
    Track 2 - Event processing
    oral presentation
    We will summarize the recent and current activities of the Geant4 working group responsible for the standard package of electromagnetic physics. The major recent activities include a design iteration in the energy loss and multiple scattering domain providing a "process versus models" approach, and development of the following physics models: multiple scattering, ultra-relativistic muon...
  48. A. Hanushevsky (SLAC)
    27/09/2004, 16:30
    Track 4 - Distributed Computing Services
    oral presentation
    As the BaBar experiment shifted its computing model to a ROOT-based framework, we undertook the development of a high-performance file server as the basis for a fault-tolerant storage environment whose ultimate goal was to minimize job failures due to server failures. Capitalizing on our five years of experience with extending Objectivity's Advanced Multithreaded Server (AMS), elements...
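The fault tolerance this abstract aims at, minimizing job failures due to server failures, usually appears to the client as transparent failover across redundant servers. A generic sketch of that client-side pattern, not the actual server's protocol:

```python
# Generic client-side failover: try each redundant server in turn and
# surface an error only if every one of them fails.
def open_with_failover(path, servers, opener):
    """Return the first successful open; raise the last error if all fail."""
    last_error = None
    for host in servers:
        try:
            return opener(host, path)     # success on the first healthy server
        except ConnectionError as err:
            last_error = err              # remember it and try the next replica
    raise last_error

# Demonstration with a fake opener in which only server 'c' is healthy
def fake_opener(host, path):
    if host != "c":
        raise ConnectionError(host)
    return f"{host}:{path}"

print(open_with_failover("/store/file.root", ["a", "b", "c"], fake_opener))
# c:/store/file.root
```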
  49. M. Schulz (CERN)
    27/09/2004, 16:50
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    LCG2 is a large-scale production grid formed by more than 40 worldwide distributed sites. The aggregate number of CPUs exceeds 3000, and several MSS systems are integrated in the system. Almost all sites form an independent administrative domain. On most of the larger sites the local computing resources have been integrated into the grid. The system has been used for large scale...
  50. J. VanWezel (FORSCHUNGZENTRUM KARLSRUHE)
    27/09/2004, 16:50
    Track 6 - Computer Fabrics
    oral presentation
    The HEP experiments that use the regional center GridKa will handle large amounts of data. Traditional access methods via local disks or large network storage servers show limitations in size, throughput or data management flexibility. High speed interconnects like Fibre Channel, iSCSI or Infiniband as well as parallel file systems are becoming increasingly important in large cluster...
  51. M.G. Pia (INFN GENOVA)
    27/09/2004, 16:50
    Track 2 - Event processing
    oral presentation
    Various experimental configurations, for instance some gaseous detectors, require a high-precision simulation of electromagnetic physics processes, accounting not only for the primary interactions of particles with matter, but also capable of describing the secondary effects deriving from the de-excitation of atoms, where primary collisions may have created vacancies. The...
  52. E. Hjort (LAWRENCE BERKELEY LABORATORY)
    27/09/2004, 16:50
    Track 4 - Distributed Computing Services
    oral presentation
    The STAR experiment utilizes two major computing facilities for its data processing needs - the RCF at Brookhaven and the PDSF at LBNL/NERSC. The sharing of data between these facilities utilizes data grid services for file replication, and the deployment of these services was accomplished in conjunction with the Particle Physics Data Grid (PPDG). For STAR's 2004 run it will be...
  53. P. Calafiura (LBNL)
    27/09/2004, 16:50
    Track 3 - Core Software
    oral presentation
    Athena is the Atlas Control Framework, based on the common Gaudi architecture, originally developed by LHCb. In 2004 two major production efforts, the Data Challenge 2 and the Combined Test-beam reconstruction and analysis were structured as Athena applications. To support the production work we have added new features to both Athena and Gaudi: an "Interval of Validity" service to manage...
  54. D. Winter (COLUMBIA UNIVERSITY)
    27/09/2004, 16:50
    Track 1 - Online Computing
    oral presentation
    The PHENIX detector consists of 14 detector subsystems. It is designed such that individual subsystems can be read out independently in parallel as well as a single unit. The DAQ used to read the detector is a highly-pipelined parallel system. Because PHENIX is interested in rare physics events, the DAQ is required to have a fast trigger, deep buffering, and very high bandwidth. The...
  55. O. Tatebe (GRID TECHNOLOGY RESEARCH CENTER, AIST)
    27/09/2004, 17:10
    Track 6 - Computer Fabrics
    oral presentation
    Gfarm v2 is designed for facilitating reliable file sharing and high-performance distributed and parallel data computing in a Grid across administrative domains by providing a Grid file system. A Grid file system is a virtual file system that federates multiple file systems. It is possible to share files or data by mounting the virtual file system. This paper discusses the design...
  56. Dr T. Koi (SLAC)
    27/09/2004, 17:10
    Track 2 - Event processing
    oral presentation
    The transportation of ions in matter is a subject of much interest not only in high-energy ion-ion collider experiments such as RHIC and LHC but also in many other fields of science, engineering, and medicine. Geant4 is a toolkit for simulating the passage of particles through matter, and its OO design makes it easy to extend its capability to ion transport. To simulate ions...
  57. Ofer RIND
    27/09/2004, 17:10
    Track 4 - Distributed Computing Services
    oral presentation
    Providing Grid applications with effective access to large volumes of data residing on a multitude of storage systems with very different characteristics prompted the introduction of storage resource managers (SRM). Their purpose is to provide consistent and efficient wide-area access to storage resources unconstrained by their particular implementation (tape, large disk arrays,...
  58. F. Carminati (CERN)
    27/09/2004, 17:10
    Track 3 - Core Software
    oral presentation
    The ALICE collaboration at the LHC has been developing an OO offline framework, written entirely in C++, since 1998. In 2001 a GRID system (AliEn - ALICE Environment) was added and successfully integrated with ROOT and the offline framework. The resulting combination allows ALICE to do most of the design of the detector and test the validity of its computing model by performing large-scale Data...
  59. D Chapin (Brown University)
    27/09/2004, 17:10
    Track 1 - Online Computing
    oral presentation
    The DZERO Level 3 Trigger and Data Acquisition (L3DAQ) system has been running continuously since Spring 2002. DZERO is located at one of the two interaction points in the Fermilab Tevatron Collider. The L3DAQ moves front-end readout data from VME crates to a trigger processor farm. It is built upon a Cisco 6509 Ethernet switch, standard PCs, and commodity VME single-board computers. We...
  60. R. Pordes (FERMILAB)
    27/09/2004, 17:10
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    The U.S. LHC Tier-1 and Tier-2 laboratories and universities are developing production Grids to support LHC applications running across a worldwide Grid computing system. Together with partners in computer science, physics grid projects and running experiments, we will build a common national production grid infrastructure which is open in its architecture, implementation and use. The...
  61. 27/09/2004, 17:30
    Track 2 - Event processing
    oral presentation
    A version of the Bertini cascade model for hadronic interactions is part of the Geant4 toolkit, and may be used to simulate pion-, proton-, and neutron-induced reactions in nuclei. It is typically valid for incident energies of 10 GeV and below, making it especially useful for the simulation of hadronic calorimeters. In order to generate the intra-nuclear cascade, the code depends...
    Go to contribution page
  62. I. Osborne (Northeastern University, Boston, USA)
    27/09/2004, 17:30
    Track 3 - Core Software
    oral presentation
    We present a composite framework which exploits the advantages of the CMS data model and uses a novel approach for building CMS simulation, reconstruction, visualisation and future analysis applications. The framework exploits LCG SEAL and CMS COBRA plug-ins and extends the COBRA framework to pass communications between the GUI and event threads, using SEAL callbacks to navigate...
    Go to contribution page
  63. C. CIOFFI (Oxford University)
    27/09/2004, 17:30
    Track 4 - Distributed Computing Services
    oral presentation
    The LHCb experiment needs to store all the information about the datasets and their processing history, both for recorded data resulting from particle collisions at the LHC collider at CERN and for simulated data. To achieve this functionality a design based on data warehousing techniques was chosen, where several user-services can be implemented and optimized individually without...
    Go to contribution page
  64. Y. CHENG (COMPUTING CENTER, INSTITUTE OF HIGH ENERGY PHYSICS, CHINESE ACADEMY OF SCIENCES)
    27/09/2004, 17:30
    Track 6 - Computer Fabrics
    oral presentation
    With the development of Linux and the improvement of PC performance, PC clusters used as high performance computing systems are becoming increasingly popular. The performance of the I/O subsystem and the cluster file system is critical to a high performance computing system. In this work the basic characteristics of cluster file systems and their performance are reviewed. The performance of four...
    Go to contribution page
  65. M. ZUREK (CERN, IFJ KRAKOW)
    27/09/2004, 17:30
    Track 1 - Online Computing
    oral presentation
    The talk presents the experience gathered during the testbed administration (~100 PCs and 15+ switches) for the ATLAS Experiment at CERN. It covers the techniques used to resolve HW/SW conflicts and network related problems, the automatic installation and configuration of the cluster nodes, as well as system/service monitoring in the heterogeneous, dynamically changing...
    Go to contribution page
  66. S. Dasu (UNIVERSITY OF WISCONSIN)
    27/09/2004, 17:30
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    The University of Wisconsin distributed computing research groups developed a software system called Condor for high throughput computing using commodity hardware. An adaptation of this software, Condor-G, is part of the Globus grid computing toolkit. However, the original Condor has additional features that allow the building of an enterprise-level grid. Several UW departments have Condor computing...
    Go to contribution page
  67. A. Lyon (FERMI NATIONAL ACCELERATOR LABORATORY)
    27/09/2004, 17:50
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    The SAMGrid team has recently refactored its test harness suite for greater flexibility and easier configuration. This makes possible more interesting applications of the test harness, for component tests, integration tests, and stress tests. We report on the architecture of the test harness and its recent application to stress tests of a new analysis cluster at Fermilab, to explore...
    Go to contribution page
  68. T. Mkrtchyan (DESY)
    27/09/2004, 17:50
    Track 6 - Computer Fabrics
    oral presentation
    After the successful implementation and deployment of the dCache system over the last years, one of the additional required services, the namespace service, is facing additional and completely new requirements. Most of these are caused by the scaling of the system, the integration with Grid services and the need for redundant (high availability) configurations. The existing system, having only...
    Go to contribution page
  69. M. Kosov (CERN)
    27/09/2004, 17:50
    Track 2 - Event processing
    oral presentation
    Quark-gluon strings are usually fragmented on the light cone into hadrons (PYTHIA, JETSET) or into small hadronic clusters which decay into hadrons (HERWIG). In both cases the transverse momentum distribution is parameterized as an unknown function. In CHIPS the colliding hadrons stretch Pomeron ladders to each other and, when the Pomeron ladders meet in rapidity space, they create Quasmons...
    Go to contribution page
  70. K. Nienartowicz (CERN)
    27/09/2004, 17:50
    Track 4 - Distributed Computing Services
    oral presentation
    Data management is one of the cornerstones in the distributed production computing environment that the EGEE project aims to provide for a European e-Science infrastructure. We have designed a set of services based on previous experience in other Grid projects, trying to address the requirements of our user communities. In this paper we summarize the most fundamental requirements and...
    Go to contribution page
  71. T. DeYoung (UNIVERSITY OF MARYLAND)
    27/09/2004, 17:50
    Track 3 - Core Software
    oral presentation
    IceCube is a cubic kilometer-scale neutrino telescope under construction at the South Pole. The minimalistic nature of the instrument poses several challenges for the software framework. Events occur at random times, and frequently overlap, requiring some modifications of the standard event-based processing paradigm. Computational requirements related to modeling the detector medium...
    Go to contribution page
  72. M. Dobson (CERN)
    27/09/2004, 17:50
    Track 1 - Online Computing
    oral presentation
    The ATLAS collaboration had a Combined Beam Test from May until October 2004. Collection and analysis of data required integration of several software systems that are developed as prototypes for the ATLAS experiment, due to start in 2007. Eleven different detector technologies were integrated with the Data Acquisition system and were taking data synchronously. The DAQ was integrated...
    Go to contribution page
  73. A. Shevel (STATE UNIVERSITY OF NEW YORK AT STONY BROOK)
    27/09/2004, 18:10
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    The PHENIX collaboration records large volumes of data for each experimental run (now about 1/4 PB/year). Efficient and timely analysis of this data can benefit from a framework for distributed analysis via a growing number of remote computing facilities in the collaboration. The grid architecture has been, or is being deployed at most of these facilities. The experience being...
    Go to contribution page
  74. G. Unel (UNIVERSITY OF CALIFORNIA AT IRVINE AND CERN)
    27/09/2004, 18:10
    Track 1 - Online Computing
    oral presentation
    The ATLAS Trigger and DAQ system is designed to use the Region of Interest (RoI) mechanism to reduce the initial Level 1 trigger rate of 100 kHz down to an Event Building rate of about 3.3 kHz. The DataFlow component of the ATLAS TDAQ system is responsible for reading out the detector specific electronics via 1600 point-to-point readout links, the collection and provision of RoI to the...
    Go to contribution page
  75. R. Kennedy (FERMI NATIONAL ACCELERATOR LABORATORY)
    27/09/2004, 18:10
    Track 4 - Distributed Computing Services
    oral presentation
    SAMGrid is the shared data handling framework of the two large Fermilab Run II collider experiments: DZero and CDF. In production since 1999 at D0, and since mid-2004 at CDF, the SAMGrid framework has been adapted over time to accommodate a variety of storage solutions and configurations, as well as the differing data processing models of these two experiments. This has been...
    Go to contribution page
  76. Dr P. Spentzouris (FERMI NATIONAL ACCELERATOR LABORATORY)
    27/09/2004, 18:10
    Track 2 - Event processing
    oral presentation
    Computer simulations play a crucial role in both the design and operation of particle accelerators. General tools for modeling single-particle accelerator dynamics have been in wide use for many years. Multi-particle dynamics are much more computationally demanding than single-particle dynamics, requiring supercomputers or parallel clusters of PCs. Because of this, simulations of...
    Go to contribution page
  77. L. Nellen (I. DE CIENCIAS NUCLEARES, UNAM)
    27/09/2004, 18:10
    Track 3 - Core Software
    oral presentation
    The Pierre Auger Observatory is designed to unveil the nature and the origin of the highest energy cosmic rays. Two sites, one currently under construction in Argentina, and another pending in the Northern hemisphere, will observe extensive air showers using a hybrid detector comprising a ground array of 1600 water Cerenkov tanks overlooked by four atmospheric fluorescence detectors. ...
    Go to contribution page
  78. Les Robertson (CERN)
    28/09/2004, 08:30
    Plenary Sessions
    oral presentation
    The talk will cover briefly the current status of the LHC Computing Grid project and will discuss the main challenges facing us as we prepare for the startup of LHC.
    Go to contribution page
  79. I. Bird (CERN)
    28/09/2004, 09:00
    Plenary Sessions
    oral presentation
    In September 2003 the first LCG-1 service was put into production at most of the large Tier 1 sites and was quickly expanded up to 30 Tier 1 and Tier 2 sites by the end of the year. Several software upgrades were made and the LCG-2 service was put into production in time for the experiment data challenges that began in February 2004 and continued for several months. In particular...
    Go to contribution page
  80. 28/09/2004, 09:30
    Plenary Sessions
    oral presentation
    The U.S. Trillium Grid projects, in collaboration with high energy experiment groups from the Large Hadron Collider (LHC) experiments ATLAS and CMS, Fermilab's BTeV, members of the LIGO and SDSS collaborations, and groups from other scientific disciplines and computational centers, have deployed a multi-VO, application-driven grid laboratory ("Grid3"). The grid laboratory has sustained for several...
    Go to contribution page
  81. Z. Toteva (Sofia University/CERN/CMS)
    28/09/2004, 10:00
    Track 6 - Computer Fabrics
    poster
    We describe a database solution in a web application to centrally manage the configuration information of computer systems. It extends the modular cluster management tool Quattor with a user friendly web interface. System configurations managed by Quattor are described with the aid of PAN, a declarative language with a command line and a compiler interface. Using a relational schema,...
    Go to contribution page
  82. A. Bobyshev (FERMILAB)
    28/09/2004, 10:00
    Track 7 - Wide Area Networking
    poster
    In a large campus network such as Fermilab's, with ten thousand nodes, scanning initiated from either outside of or within the campus network raises security concerns, may have a very serious impact on network performance, and can even disrupt the normal operation of many services. In this paper we introduce a system for the detection and automatic blocking of excessive traffic of different natures: scanning,...
    Go to contribution page
  83. Martin Purschke
    28/09/2004, 10:00
    Track 1 - Online Computing
    poster
    With the improvements in CPU and disk speed over the past years, we were able to exceed the original design data logging rate of 40MB/s by a factor of 3 already for the Run 3 in 2002. For the Run 4 in 2003, we increased the raw disk logging capacity further to about 400MB/s. Another major improvement was the implementation of compressed data logging. The PHENIX raw data, after...
    Go to contribution page
  84. M. Guijarro (CERN)
    28/09/2004, 10:00
    Track 6 - Computer Fabrics
    poster
    There are two cluster architecture approaches used at CERN to provide central CVS services. The first one (http://cern.ch/cvs) depends on AFS for central storage of repositories and offers automatic load-balancing and fail-over mechanisms. The second one (http://cern.ch/lcgcvs) is an N + 1 cluster based on local file systems, using data replication and not relying on AFS. It does not...
    Go to contribution page
  85. Martin Purschke
    28/09/2004, 10:00
    Track 1 - Online Computing
    poster
    The PHENIX DAQ system is managed by a control system responsible for the configuration and monitoring of the PHENIX detector hardware and readout software. At its core, the control system, called Runcontrol, is a set of processes that, by way of a distributed architecture using CORBA, manages virtually...
    Go to contribution page
  86. J. Schmidt (Fermilab)
    28/09/2004, 10:00
    Track 6 - Computer Fabrics
    poster
    Email is an essential part of daily work. The FNAL gateways process in excess of 700,000 messages per week. Among those messages are many containing viruses and unwanted spam. This paper outlines the FNAL email system configuration. We will discuss how we have defined our systems to provide optimum uptime as well as protection against viruses, spam and unauthorized users.
    Go to contribution page
  87. L. Lisa Giacchetti (FERMILAB)
    28/09/2004, 10:00
    Track 6 - Computer Fabrics
    poster
    The scalable serving of shared filesystems across large clusters of computing resources continues to be a difficult problem in high energy physics computing. The US CMS group at Fermilab has performed a detailed evaluation of hardware and software solutions to allow filesystem access to data from computing systems. The goal of the evaluation was to arrive at a solution that was able...
    Go to contribution page
  88. S. Kolos (CERN)
    28/09/2004, 10:00
    Track 1 - Online Computing
    poster
    As modern High Energy Physics (HEP) experiments require more distributed computing power to fulfill their demands, the need for efficient distributed online services for control, configuration and monitoring in such experiments becomes increasingly important. This paper describes the experience of using standard Common Object Request Broker Architecture (CORBA) middleware for...
    Go to contribution page
  89. J. Fromm (Fermilab)
    28/09/2004, 10:00
    Track 6 - Computer Fabrics
    poster
    The NGOP Monitoring Project at FNAL has developed a package which has demonstrated the capability to efficiently monitor tens of thousands of entities on thousands of hosts, and has been in operation for over 4 years. The project has met the majority of its initial requirements, and also the majority of the requirements discovered along the way. This paper will describe what worked, and...
    Go to contribution page
  90. S. Jarp (CERN)
    28/09/2004, 10:00
    Track 6 - Computer Fabrics
    poster
    In 1995 I predicted that the dual-processor PC would start invading HEP computing, and a couple of years later the x86-based PC was omnipresent in our computing facilities. Today, we cannot imagine HEP computing without thousands of PCs at its heart. This talk will look at some of the reasons why we may one day be forced to leave this sweet spot. This would not be because we (the HEP...
    Go to contribution page
  91. F.M. Taurino (INFM - INFN)
    28/09/2004, 10:00
    Track 6 - Computer Fabrics
    poster
    The "gridification" of a computing farm is usually a complex and time consuming task. Operating system installation, grid specific software, configuration files customization can turn into a large problem for site managers. This poster introduces InGRID, a solution used to install and maintain grid software on small/medium size computing farms. Grid elements installation with InGRID...
    Go to contribution page
  92. G. Sun (INSTITUE OF HIGH ENERGY PHYSICS)
    28/09/2004, 10:00
    Track 6 - Computer Fabrics
    poster
    There are several on-going experiments at IHEP, such as BES, YBJ, and the CMS collaboration with CERN. Each experiment has its own computing system, and these computing systems run separately. This leads to very low CPU utilization due to the different usage periods of each experiment. Grid technology is a very good candidate for integrating these separate computing systems into a "single...
    Go to contribution page
  93. H. Schwarthoff (CORNELL UNIVERSITY)
    28/09/2004, 10:00
    Track 1 - Online Computing
    poster
    The CLEO collaboration at the Cornell electron positron storage ring CESR has completed its transition to the CLEO-c experiment. This new program contains a wide array of Physics studies of $e^+e^-$ collisions at center of mass energies between 3 GeV and 5 GeV. New challenges await the CLEO-c Online computing system, as the trigger rates are expected to rise from < 100 Hz to around...
    Go to contribution page
  94. N. Hoeimyr (CERN IT)
    28/09/2004, 10:00
    Track 6 - Computer Fabrics
    poster
    The Product Support (PS) group of the IT department at CERN distributes and supports more than one hundred different software packages, ranging from tools for computer aided design, field calculations, mathematical and structural analysis to software development. Most of these tools, which are used on a variety of Unix and Windows platforms by different user populations, are...
    Go to contribution page
  95. A. Bobyshev (FERMILAB)
    28/09/2004, 10:00
    Track 7 - Wide Area Networking
    poster
    Network flow data gathered on border routers and core network switch/routers is used at Fermilab for statistical analysis of traffic patterns, passive network monitoring, and estimation of network performance characteristics. Flow data is also a critical tool in the investigation of computer security incidents. Development and enhancement of flow-based tools is an on-going effort. The...
    Go to contribution page
  96. I. Sfiligoi (INFN Frascati)
    28/09/2004, 10:00
    Track 6 - Computer Fabrics
    poster
    CDF is deploying a version of its analysis facility (CAF) at several globally distributed sites. On top of the hardware at each of these sites is either an FBSNG or Condor batch manager and a SAM data handling system which in some cases also makes use of dCache. The jobs which run at these sites also make use of a central database located at Fermilab. Each of these systems has its own...
    Go to contribution page
  97. N. Katayama (KEK)
    28/09/2004, 10:00
    Track 6 - Computer Fabrics
    poster
    The Belle experiment has accumulated an integrated luminosity of more than 240 fb-1 so far, and a daily logged luminosity now exceeds 800 pb-1. These numbers correspond to more than 1 PB of raw and processed data stored on tape and an accumulation of the raw data at the rate of 1 TB/day. To meet these storage demands, a new cost effective, compact hierarchical mass storage system has...
    Go to contribution page
  98. Martin Purschke
    28/09/2004, 10:00
    Track 1 - Online Computing
    poster
    The PHENIX experiment consists of many different detectors and detector types, each one with its own needs concerning the monitoring of the data quality and the calibration. To ease the task for the shift crew to monitor the performance and status of each subsystem in PHENIX we developed a general client server based framework which delivers events at a rate in excess of 100Hz....
    Go to contribution page
  99. S. Nemnyugin (ASSOCIATE PROFESSOR)
    28/09/2004, 10:00
    Track 6 - Computer Fabrics
    poster
    We report the results of parallelization and tests of the Parton String Model event generator at the parallel cluster of the St. Petersburg State University Telecommunication center. Two schemes of parallelization were studied. In the first approach a master process coordinates the work of slave processes, and gathers and analyzes data. The results of MC calculations are saved in local files. Local...
    Go to contribution page
  100. J. Schmidt (Fermilab)
    28/09/2004, 10:00
    Track 6 - Computer Fabrics
    poster
    FNAL has over 5000 PCs running either Linux or Windows software. Protecting these systems efficiently against the latest vulnerabilities that arise has prompted FNAL to take a more central approach to patching systems. We outline the lab support structure for each OS and how we have provided a central solution that works within existing support boundaries. The paper will cover how we...
    Go to contribution page
  101. P. Conde MUINO (CERN)
    28/09/2004, 10:00
    Track 1 - Online Computing
    poster
    During the runtime of any experiment, a central monitoring system that detects problems as soon as they appear has an essential role. In a large experiment, like Atlas, the online data acquisition system is distributed across the nodes of large farms, each of them running several processes that analyse a fraction of the events. In this architecture, it is necessary to have a central...
    Go to contribution page
  102. A. Eleuteri (DIPARTIMENTO DI SCIENZE FISICHE - UNIVERSITÀ DI NAPOLI FEDERICO II)
    28/09/2004, 10:00
    Track 1 - Online Computing
    poster
    In this paper we examine the performance of the raw Ethernet protocol in deterministic, low-cost, real-time communication. Very few applications have been reported until now, and they focus on the use of the TCP and UDP protocols, which however add a considerable overhead to the communication and reduce the useful bandwidth. We show how low-level Ethernet access can be used for...
    Go to contribution page
  103. 28/09/2004, 10:00
    Track 7 - Wide Area Networking
    poster
    The CLEO III data acquisition system was designed from the beginning, in the late 90's, to allow remote operations and monitoring of the experiment. Changes in the coordination and operation of the CLEO experiment two years ago enabled us to separate the tasks of the shift crew into an operational task and a physics task, and existing remote capabilities have been revisited. In 2002/03 CLEO started to...
    Go to contribution page
  104. A. Garcia (KARLSRUHE RESEARCH CENTER (FZK))
    28/09/2004, 10:00
    Track 6 - Computer Fabrics
    poster
    The clusters using DataGrid middleware are usually installed and managed by means of an "LCFG" server. Originally developed by the Univ. of Edinburgh and extended by DataGrid, this is a complex piece of software. It allows for the automated installation and configuration of a complete grid site. However, installation of the "LCFG" server takes most of the time, thus hindering widespread...
    Go to contribution page
  105. V. GAUTARD (CEA-SACLAY)
    28/09/2004, 10:00
    Track 1 - Online Computing
    poster
    ATLAS is a particle detector which is being built at CERN in Geneva. The muon detection system is made up, among other things, of 600 chambers measuring 2 to 6 m2 and 30 cm thick. The chambers' positions must be known with an accuracy of +/-30 μm for translations and +/-100 μrad for rotations, over a range of +/-5 mm and +/-5 mrad. In order to fulfill these requirements, we have...
    Go to contribution page
  106. G. Unel (UNIVERSITY OF CALIFORNIA AT IRVINE AND CERN)
    28/09/2004, 10:00
    Track 1 - Online Computing
    poster
    The 40 MHz collision rate at the LHC produces ~25 interactions per bunch crossing within the ATLAS detector, resulting in terabytes of data per second to be handled by the detector electronics and the trigger and DAQ system. A Level 1 trigger system based on custom designed and built electronics will reduce the event rate to 100 kHz. The DAQ system is responsible for the readout of the...
    Go to contribution page
  107. Ian FISK (FNAL)
    28/09/2004, 10:00
    Track 6 - Computer Fabrics
    poster
    US-CMS is building up expertise at regional centers in preparation for analysis of LHC data. The User Analysis Farm (UAF) is part of the Tier 1 facility at Fermilab. The UAF is being developed to support the efforts of the Fermilab LHC Physics Center (LPC) and to enable efficient analysis of CMS data in the US. The support, infrastructure, and services to enable a local analysis...
    Go to contribution page
  108. 28/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    The CDF Analysis Facility (CAF) has been in use since April 2002 and has successfully served 100s of users on 1000s of CPUs. The original CAF used FBSNG as a batch manager. In the current trend toward multisite deployment, FBSNG was found to be a limiting factor, so the CAF has been reimplemented to use Condor instead. Condor is a more widely used batch system and is well integrated...
    Go to contribution page
  109. I. Soloviev (CERN/PNPI)
    28/09/2004, 10:00
    Track 1 - Online Computing
    poster
    The ATLAS data acquisition system uses the database to describe configurations for different types of data taking runs and different sub-detectors. Such configurations are composed of complex data objects with many inter-relations. During the DAQ system initialisation phase the configurations database is simultaneously accessed by a large number of processes. It is also required that such...
    Go to contribution page
  110. A. Martin (QUEEN MARY, UNIVERSITY OF LONDON)
    28/09/2004, 10:00
    Track 6 - Computer Fabrics
    poster
    We describe our experience in building a cost efficient High Throughput Cluster (HTC) using commodity hardware and free software within a university environment. Our HTC has a modular system architecture and is designed to be upgradable. The current, second phase configuration, consists of 344 processors and 20 Tbyte of RAID storage. In order to rapidly install and upgrade software,...
    Go to contribution page
  111. O. Schneider (FZK)
    28/09/2004, 10:00
    Track 6 - Computer Fabrics
    poster
    A central idea of Grid Computing is the virtualization of heterogeneous resources. To meet this challenge the Institute for Scientific Computing, IWR, has started the project CampusGrid. Its medium term goal is to provide a seamless IT environment supporting the on-site research activities in physics, bioinformatics, nanotechnology and meteorology. The environment will include all...
    Go to contribution page
  112. Alan Tackett
    28/09/2004, 10:00
    Track 6 - Computer Fabrics
    poster
    Protein analysis, imaging, and DNA sequencing are some of the branches of biology where growth has been enabled by the availability of computational resources. With this growth, biologists face an associated need for reliable, flexible storage systems. For decades the HEP community has been driving the development of such storage systems to meet their own needs. Two of these systems -...
    Go to contribution page
  113. A. Bobyshev (FERMILAB)
    28/09/2004, 10:00
    Track 7 - Wide Area Networking
    poster
    The Compact Muon Solenoid (CMS) experiment at CERN's Large Hadron Collider (LHC) is scheduled to come on-line in 2007. Fermilab will act as the CMS Tier-1 center for the US and make experiment data available to more than 400 researchers in the US participating in the CMS experiment. The US CMS Users Facility group, based at Fermilab, has initiated a project to develop a model for...
    Go to contribution page
  114. M. Ellisman (National Center for Microscopy and Imaging Research of the Center for Research in Biological Systems - The Department of Neurosciences, University of California San Diego School of Medicine - La Jolla, California - USA)
    28/09/2004, 11:00
    Plenary Sessions
    oral presentation
    The grand goal in neuroscience research is to understand how the interplay of structural, chemical and electrical signals in nervous tissue gives rise to behavior. Experimental advances of the past decades have given the individual neuroscientist an increasingly powerful arsenal for obtaining data, from the level of molecules to nervous systems. Scientists have begun the arduous and...
    Go to contribution page
  115. David Kelsey (RAL)
    28/09/2004, 11:30
    Plenary Sessions
    oral presentation
    The aim of Grid computing is to enable the easy and open sharing of resources between large and highly distributed communities of scientists and institutes across many independent administrative domains. Convincing site security officers and computer centre managers to allow this to happen in view of today's ever-increasing Internet security problems is a major challenge. Convincing...
    Go to contribution page
  116. Ken Peach (RAL)
    28/09/2004, 12:00
    Plenary Sessions
    oral presentation
    Just as the development of the World Wide Web has had its greatest impact outside particle physics, so it will be with the development of the Grid. E-science, of which the Grid is just a small part, is already making a big impact upon many scientific disciplines, and facilitating new scientific discoveries that would be difficult to achieve in any other way. Key to this is the...
    Go to contribution page
  117. Max Lemke
    28/09/2004, 12:30
    Plenary Sessions
    oral presentation
    The European Grid Research vision as set out in the Information Society Technologies Work Programmes of the EU's Sixth Research Framework Programme is to advance, consolidate and mature Grid technologies for widespread e-science, industrial, business and societal use. A batch of Grid research projects with 52 Million EUR EU support was launched during the European Grid Technology Days 15...
    Go to contribution page
  118. Miron Livny (Wisconsin)
    29/09/2004, 08:30
    Plenary Sessions
    oral presentation
    In the 18 months since the CHEP03 meeting in San Diego, the HEP community deployed the current generation of grid technologies in a variety of settings. Legacy software as well as recently developed applications were interfaced with middleware tools to deliver end-to-end capabilities to HEP experiments in different stages of their life cycles. In a series of data challenges,...
    Go to contribution page
  119. Andrew Sutherland (ORACLE)
    29/09/2004, 09:00
    Plenary Sessions
    oral presentation
    Dr Sutherland will review the evolution of computing over the past decade, focusing particularly on the development of the database and middleware from client server to Internet computing. But what are the next steps from the perspective of a software company? Dr Sutherland will discuss the development of Grid as well as the future applications revolving around collaborative...
    Go to contribution page
  120. Jai Menon (IBM)
    29/09/2004, 09:30
    Plenary Sessions
    oral presentation
    In this talk, we will discuss the future of storage systems. In particular, we will focus on several big challenges which we are facing in storage, such as being able to build, manage and backup really massive storage systems, being able to find information of interest, being able to do long-term archival of data, and so on. We also present ideas and research being done to address...
    Go to contribution page
  121. T. Coviello (INFN Via E. Orabona 4 I - 70126 Bari Italy)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    A grid system is a set of heterogeneous computational and storage resources, distributed on a large geographic scale, which belong to different administrative domains and serve several different scientific communities named Virtual Organizations (VOs). A virtual organization is a group of people or institutions which collaborate to achieve common objectives. Therefore such system has...
    Go to contribution page
  122. G. Rubini (INFN-CNAF)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    Analyzing Grid monitoring data requires the capability of dealing with multidimensional concepts intrinsic to Grid systems. The meaningful dimensions identified in recent works are the physical dimension referring to geographical location of resources, the Virtual Organization (VO) dimension, the time dimension and the monitoring metrics dimension. In this paper, we discuss the...
    Go to contribution page
  123. M. Jones (Manchester University)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    The BaBar experiment has accumulated many terabytes of data on particle physics reactions, accessed by a community of hundreds of users. Typical analysis tasks are C++ programs, individually written by the user, using shared templates and libraries. The resources have outgrown a single platform and a distributed computing model is needed. The grid provides the natural toolset....
    Go to contribution page
  124. T. Coviello (DEE – POLITECNICO DI BARI, V. ORABONA, 4, 70125 – BARI, ITALY)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    Grid computing is a large scale geographically distributed and heterogeneous system that provides a common platform for running different grid enabled applications. As each application has different characteristics and requirements, it is a difficult task to develop a scheduling strategy able to achieve optimal performance because application-specific and dynamic system status have...
    Go to contribution page
  125. The ARDA Team
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    The ARDA project was started in April 2004 to support the four LHC experiments (ALICE, ATLAS, CMS and LHCb) in the implementation of individual production and analysis environments based on the EGEE middleware. The main goal of the project is to allow a fast feedback between the experiment and the middleware development teams via the construction and the usage of end-to-end...
    Go to contribution page
  126. D. Malon (ANL)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    As ATLAS begins validation of its computing model in 2004, requirements imposed upon ATLAS data management software move well beyond simple persistence, and beyond the "read a file, write a file" operational model that has sufficed for most simulation production. New functionality is required to support the ATLAS Tier 0 model, and to support deployment in a globally distributed...
    Go to contribution page
  127. L. Poncet (LAL-IN2p3)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    In the last few years grid software (middleware) has become available from various sources. However, there are no standards yet which allow for an easy integration of different services. Moreover, middleware was produced by different projects with the main goal of developing new functionalities rather than production quality software. In the context of the LHC Computing Grid...
    Go to contribution page
  128. T. Wlodek (Brookhaven National Lab)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    A description of a Condor-based, Grid-aware batch software system configured to function asynchronously with a mass storage system is presented. The software is currently used in a large Linux Farm (2700+ processors) at the RHIC and ATLAS Tier 1 Computing Facility at Brookhaven Lab. Design, scalability, reliability, features and support issues with a complex Condor-based batch...
    Go to contribution page
  129. A. Wagner (CERN)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    CERN has about 5500 desktop PCs. These computers offer a large pool of resources that can be used for physics calculations outside office hours. The paper describes a project to make use of the spare CPU cycles of these PCs for LHC tracking studies. The client-server application is implemented as a lightweight, modular screensaver and a web application containing the physics job...
    Go to contribution page
  130. P. Love (Lancaster University)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    Building on several years of success with the MCRunjob projects at DZero and CMS, the Fermilab-sponsored joint Runjob project aims to provide a workflow description language common to three experiments: DZero, CMS and CDF. This project will encapsulate the remote processing experiences of the three experiments in an extensible software architecture using web services as...
    Go to contribution page
  131. T. Harenberg (UNIVERSITY OF WUPPERTAL)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    The D0 experiment at the Tevatron is collecting some 100 Terabytes of data each year and has a great need for computing resources for the various parts of its physics program. D0 meets these demands with a world-wide distributed computing infrastructure, increasingly based on GRID technologies. Distributed resources are used for D0 MC production and for data reprocessing of 1 billion events, requiring 250 TB to be...
    Go to contribution page
  132. O. Smirnova (Lund University, Sweden)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    In common grid installations, the services responsible for storing big data chunks, replicating those data and indexing their availability are usually completely decoupled. The task of synchronizing data is passed either to user-level tools or to separate services (like spiders), which are themselves subject to failure and usually cannot perform properly if one of the underlying services fails too. The...
    Go to contribution page
  133. D. Wicke (Fermilab)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    The D0 experiment faces many challenges in enabling access to large datasets for physicists on four continents. The strategy of solving these problems on worldwide distributed computing clusters is followed. Since the beginning of Tevatron Run II (March 2001), all Monte Carlo simulations have been produced outside of Fermilab at remote systems. For analyses, a system of regional...
    Go to contribution page
  134. L. Lueking (FERMILAB)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    The Run II experiments at Fermilab, CDF and D0, have extensive database needs covering many areas of their online and offline operations. Delivery of the data to users and processing farms based around the world has represented major challenges to both experiments. The range of applications employing databases includes data management, calibration (conditions), trigger information, run...
    Go to contribution page
  135. S. Stonjek (Fermi National Accelerator Laboratory / University of Oxford)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    CDF is an experiment at the Tevatron at Fermilab. One dominating factor of the experiment's computing model is the high volume of raw, reconstructed and generated data. The distributed data handling services within SAM move these data to physics analysis applications. The SAM system was already in use at the D-Zero experiment. Due to differences in the computing model of the...
    Go to contribution page
  136. I. Stokes-Rees (UNIVERSITY OF OXFORD PARTICLE PHYSICS)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    The DIRAC system developed for the CERN LHCb experiment is a grid infrastructure for managing generic simulation and analysis jobs. It enables jobs to be distributed across a variety of computing resources, such as PBS, LSF, BQS, Condor, Globus, LCG, and individual workstations. A key challenge of distributed service architectures is that there is no single point of control over...
    Go to contribution page
  137. V. garonne (CPPM-IN2P3 MARSEILLE)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    The Workload Management System (WMS) is the core component of the DIRAC distributed MC production and analysis grid of the LHCb experiment. It uses a central Task database which is accessed via a set of central Services with Agents running on each of the LHCb sites. DIRAC uses a 'pull' paradigm where Agents request tasks whenever they detect their local resources are available. The...
    Go to contribution page
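    The DIRAC abstract above describes a 'pull' paradigm: agents request tasks from a central queue whenever they detect free local resources, instead of a central broker pushing work out. A minimal sketch of that pattern, with all names (TaskQueue, SiteAgent) being illustrative rather than DIRAC's actual API:

    ```python
    import queue

    class TaskQueue:
        """Central task database, reduced to a FIFO queue for illustration."""
        def __init__(self):
            self._q = queue.Queue()

        def add(self, task):
            self._q.put(task)

        def request(self):
            # Agents call this; None signals "no work available right now".
            try:
                return self._q.get_nowait()
            except queue.Empty:
                return None

    class SiteAgent:
        """Runs at a site and pulls work only while local slots are free."""
        def __init__(self, name, free_slots):
            self.name = name
            self.free_slots = free_slots
            self.running = []

        def poll(self, task_queue):
            while self.free_slots > 0:
                task = task_queue.request()
                if task is None:
                    break              # queue drained; try again later
                self.running.append(task)
                self.free_slots -= 1

    tq = TaskQueue()
    for i in range(5):
        tq.add(f"job-{i}")
    agent = SiteAgent("site-A", free_slots=3)
    agent.poll(tq)                     # pulls exactly 3 of the 5 queued jobs
    ```

    The key property of the pull model is visible here: the central queue never needs to know site capacities, since each agent self-limits by its own free slots.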
  138. M.G. Pia (INFN GENOVA)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    We show how it is nowadays possible to achieve the goals of accuracy and fast computational response in radiotherapy dosimetry using Monte Carlo methods together with a distributed computing model. Monte Carlo methods have never been used in clinical practice because, even if they are more accurate than available commercial software, the calculation time needed to accumulate sufficient...
    Go to contribution page
  139. L. Guy (CERN)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    Extensive and thorough testing of the EGEE middleware is essential to ensure that a production-quality Grid can be deployed on a large scale as well as across the broad range of heterogeneous resources that make up the hundreds of Grid computing centres both in Europe and worldwide. Testing of the EGEE middleware encompasses the tasks of both verification and validation. In addition...
    Go to contribution page
  140. L. Matyska (CESNET, CZECH REPUBLIC)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    The Logging and Bookkeeping service tracks jobs passing through the Grid. It collects important events generated by both the grid middleware components and the applications, and processes them at a chosen L&B server to provide the job state. The events are transported through secure, reliable channels. Job tracking is fully distributed and does not depend on a single information source, the...
    Go to contribution page
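    The L&B abstract describes deriving a job state from events collected from middleware components. A hedged sketch of that derivation as a transition table; the event names and transitions here are simplified assumptions, not the real L&B schema:

    ```python
    # Map (current_state, event) -> next_state. Unknown or out-of-order
    # events leave the state unchanged, so a missing or duplicated event
    # from one component does not corrupt the derived job state.
    TRANSITIONS = {
        ("submitted", "match"): "scheduled",   # broker matched job to a site
        ("scheduled", "run"):   "running",
        ("running",   "done"):  "finished",
        ("running",   "abort"): "aborted",
    }

    def job_state(events, initial="submitted"):
        """Fold a stream of events into the job's current state."""
        state = initial
        for event in events:
            state = TRANSITIONS.get((state, event), state)
        return state
    ```

    For example, the event stream `["match", "run", "done"]` yields the state `finished`, while a stray `"run"` arriving before any `"match"` is simply ignored.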
  141. P. Mendez Lorenzo (CERN IT/GD)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    In a Grid environment, access to information on system resources is a necessity in order to perform common tasks such as matching job requirements with available resources, accessing files or presenting monitoring information. Thus both middleware services, such as workload and data management, and applications, such as monitoring tools, require an interface to the Grid information...
    Go to contribution page
  142. X. Zhao (Brookhaven National Laboratory)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    This paper describes the deployment and configuration of the production system for ATLAS Data Challenge 2 starting in May 2004, at Brookhaven National Laboratory, which is the Tier1 center in the United States for the International ATLAS experiment. We will discuss the installation of Windmill (supervisor) and Capone (executor) software packages on the submission host and the relevant...
    Go to contribution page
  143. R. santinelli (CERN/IT/GD)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    The management of Application and Experiment Software represents a very common issue in emerging grid-aware computing infrastructures. While the middleware is often installed by system administrators at a site via customized tools that serve also for the centralized management of the entire computing facility, the problem of installing, configuring and validating Gigabytes of Virtual...
    Go to contribution page
  144. R. Walker (Simon Fraser University)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    A large number of Grids have been developed, motivated by geo-political or application requirements. Despite being mostly based on the same underlying middleware, the Globus Toolkit, they are generally not inter-operable for a variety of reasons. We present a method of federating those disparate grids which are based on the Globus Toolkit, together with a concrete example of interfacing...
    Go to contribution page
  145. V. Fine (BROOKHAVEN NATIONAL LABORATORY)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    Most HENP experiment software includes a logging or tracing API allowing for displaying in a particular format important feedback coming from the core application. However, inserting log statements into the code is a low-tech method for tracing the program execution flow and often leads to a flood of messages in which the relevant ones are occluded. In a distributed computing...
    Go to contribution page
  146. R. Barbera (Univ. Catania and INFN Catania)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    Computational and data grids are now entering a more mature phase where experimental test-beds are turned into production-quality infrastructures operating around the clock. All this is becoming true both at the national level, where an example is the Italian INFN production grid (http://grid-it.cnaf.infn.it), and at the continental level, where the most striking example is the European Union...
    Go to contribution page
  147. T. ANTONI (GGUS)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    For very large projects like the LHC Computing Grid Project (LCG), involving 8,000 scientists from all around the world, well-organized user support is an indispensable requirement. The Institute for Scientific Computing at the Forschungszentrum Karlsruhe started implementing a Global Grid User Support (GGUS) after official assignment by the Grid Deployment Board in March...
    Go to contribution page
  148. A. Retico (CERN)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    The installation and configuration of LCG middleware, as it is currently being done, is complex and delicate. An "accurate" configuration of all the services of LCG middleware requires a deep knowledge of the inside dynamics and hundreds of parameters to be dealt with. On the other hand, the number of parameters and flags that are strictly needed in order to run a working "default"...
    Go to contribution page
  149. L. Field (CERN)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    This paper reports on the deployment experience of the defacto grid information system, Globus MDS, in a large scale production grid. The results of this experience led to the development of an information caching system based on a standard openLDAP database. The paper then describes how this caching system was developed further into a production quality information system including a...
    Go to contribution page
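    The abstract above describes putting a caching layer between clients and a slow information provider. A minimal sketch of that idea, assuming a simple TTL policy (the class and parameter names are illustrative, not the actual LCG information-system code):

    ```python
    import time

    class CachingInfoSystem:
        """TTL cache in front of a slow/unreliable information provider."""
        def __init__(self, provider, ttl=30.0, now=time.monotonic):
            self.provider = provider   # callable: key -> value (e.g. an MDS query)
            self.ttl = ttl             # seconds a cached entry stays fresh
            self.now = now             # injectable clock, eases testing
            self._cache = {}           # key -> (timestamp, value)

        def query(self, key):
            hit = self._cache.get(key)
            if hit is not None and self.now() - hit[0] < self.ttl:
                return hit[1]          # fresh hit: provider not contacted
            value = self.provider(key) # miss or stale: refresh from provider
            self._cache[key] = (self.now(), value)
            return value

    # Demo with a counting provider and a fake clock.
    calls = []
    clock = [0.0]
    info = CachingInfoSystem(lambda k: (calls.append(k), k.upper())[1],
                             ttl=10.0, now=lambda: clock[0])
    info.query("site-ce")
    info.query("site-ce")       # served from cache, provider not called
    clock[0] = 20.0             # entry now older than the TTL
    info.query("site-ce")       # stale: provider queried again
    ```

    Decoupling clients from provider latency this way also means a briefly failing provider can be papered over by serving the last cached answer, which is the robustness gain the paper points at.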
  150. H. Tallini (IMPERIAL COLLEGE LONDON)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    GROSS (GRidified Orca Submission System) has been developed to provide CMS end users with a single interface for running batch analysis tasks over the LCG-2 Grid. The main purpose of the tool is to carry out job splitting, preparation, submission, monitoring and archiving in a transparent way which is simple to use for the end user. Central to its design has been the requirement for...
    Go to contribution page
  151. A. Gellrich (DESY)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    DESY is one of the world-wide leading centers for research with particle accelerators and a center for research with synchrotron light. The hadron-electron collider HERA houses four experiments which are taking data and will be operated until at least 2006. The computer center manages a data volume of the order of 1 PB and is the home of around 1000 CPUs. In 2003 DESY started to set up a...
    Go to contribution page
  152. M. Burgon-Lyon (UNIVERSITY OF GLASGOW)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    JIM (Job and Information Management) is a grid extension to the mature data handling system called SAM (Sequential Access via Metadata) used by the CDF, DZero and Minos Experiments based at Fermilab. JIM uses a thin client to allow job submissions from any computer with Internet access, provided the user has a valid certificate or kerberos ticket. On completion the job output can be...
    Go to contribution page
  153. A. Anjum (NIIT)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    In the context of the Interactive Grid-Enabled Analysis Environment (GAE), physicists desire bi-directional interaction with the jobs they have submitted. In one direction, monitoring information about the job, and hence a "progress bar", should be provided to them. In the other direction, physicists should be able to control their jobs. Before submission, they may direct the job to some specified...
    Go to contribution page
  154. A. Anjum (NIIT)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    The Grid is emerging as a great computational resource, but its dynamic behaviour makes the Grid environment unpredictable. System or network failures can occur, or system performance can degrade. So once a job has been submitted, monitoring becomes essential for the user to ensure that the job is completed in an efficient way. In current environments, once a user submits a job he...
    Go to contribution page
  155. G. Donvito (UNIVERSITÀ DEGLI STUDI DI BARI), G. Tortone (INFN Napoli)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    In a wide-area distributed and heterogeneous grid environment, monitoring represents an important and crucial task. It includes system status checking, performance tuning, bottleneck detection, troubleshooting and fault notification. In particular, a good monitoring infrastructure must provide the information to track down the current status of a job in order to locate any problems....
    Go to contribution page
  156. E.M.V. Fasanelli (I.N.F.N.)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    The infn.it AFS cell has been providing a useful single file-space and authentication mechanism for the whole of INFN, but the lack of a distributed management system has led several INFN sections and labs to set up local AFS cells. The hierarchical transitive cross-realm authentication introduced in the Kerberos 5 protocol and the new versions of the OpenAFS and MIT implementations of...
    Go to contribution page
  157. D. Rebatto (INFN - MILANO)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    In this paper we present an overview of the implementation of the LCG interface for the ATLAS production system. In order to take advantage of the features provided by the DataGRID software, on which LCG is based, we implemented a Python module, seamlessly integrated into the Workload Management System, which can be used as an object-oriented API to the submission services. On top of it we...
    Go to contribution page
  158. L. Tuura (NORTHEASTERN UNIVERSITY, BOSTON, MA, USA)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    Experiments frequently produce many small data files for reasons beyond their control, such as output splitting into physics data streams, parallel processing on large farms, database technology incapable of concurrent writes into a single file, and constraints from running farms reliably. Resulting data file size is often far from ideal for network transfer and mass storage performance....
    Go to contribution page
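    The abstract above motivates merging many small files into objects better suited to network transfer and mass storage. A toy sketch of the core idea (the container format is an assumption for illustration, not the authors' actual scheme): concatenate the files into one blob and keep an index of (offset, length) so each logical file remains individually retrievable.

    ```python
    def pack(files):
        """files: dict of name -> bytes. Returns (blob, index)."""
        blob = bytearray()
        index = {}
        for name, data in files.items():
            index[name] = (len(blob), len(data))  # start offset and length
            blob.extend(data)
        return bytes(blob), index

    def unpack(blob, index, name):
        """Recover one logical file from the packed blob."""
        offset, length = index[name]
        return blob[offset:offset + length]

    blob, index = pack({"run1.dat": b"AAAA", "run2.dat": b"BBBBBB"})
    ```

    Tape drives and wide-area transfers then see one large, streaming-friendly object, while analysis code can still address the original small files through the index.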
  159. S. Thorn
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    The University of Edinburgh has a significant interest in mass storage systems as it is one of the core groups tasked with the roll-out of storage software for the UK's particle physics grid, GridPP. We present the results of a development project to provide software interfaces between the SDSC Storage Resource Broker, the EU DataGrid and the Storage Resource Manager. This project was...
    Go to contribution page
  160. I. Legrand (CALTECH)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    The design and optimization of the Computing Models for the future LHC experiments, based on the Grid technologies, requires a realistic and effective modeling and simulation of the data access patterns, the data flow across the local and wide area networks, and the scheduling and workflow created by many concurrent, data intensive jobs on large scale distributed systems. This paper...
    Go to contribution page
  161. E. Berman (FERMILAB)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    Fermilab operates a petabyte scale storage system, Enstore, which is the primary data store for experiments' large data sets. The Enstore system regularly transfers greater than 15 Terabytes of data each day. It is designed using a client-server architecture providing sufficient modularity to allow easy addition and replacement of hardware and software components. Monitoring of this...
    Go to contribution page
  162. G. Zito (INFN BARI)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    The complexity of the CMS Tracker (more than 50 million channels to monitor), now under construction in ten laboratories worldwide with hundreds of people involved, will require new tools for monitoring both the hardware and the software. In our approach we use both visualization tools and Grid services to make this monitoring possible. The use of visualization enables us to represent...
    Go to contribution page
  163. D. Sanders (UNIVERSITY OF MISSISSIPPI)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent...
    Go to contribution page
  164. I. Adachi (KEK)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    The Belle experiment has accumulated an integrated luminosity of more than 240 fb-1 so far, and the daily logged luminosity has exceeded 800 pb-1. This requires a more efficient and reliable way of event processing. To meet this requirement, a new offline processing scheme has been constructed, based upon the technique employed for the Belle online reconstruction farm. Event processing is...
    Go to contribution page
  165. E. Berdnikov (INSTITUTE FOR HIGH ENERGY PHYSICS, PROTVINO, RUSSIA)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    The scope of this work is the study of the scalability limits of a Certification Authority (CA) running for large-scale GRID environments. The operation of the Certification Authority is analyzed from the point of view of the rate of incoming requests, the complexity of authentication procedures, LCG security restrictions and other limiting factors. It is shown that standard CA operational...
    Go to contribution page
  166. C. Nicholson (UNIVERSITY OF GLASGOW)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    In large-scale Grids, the replication of files to different sites is an important data management mechanism which can reduce access latencies and give improved usage of resources such as network bandwidth, storage and computing power. In the search for an optimal data replication strategy, the Grid simulator OptorSim was developed as part of the European DataGrid project. Simulations of...
    Go to contribution page
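    The OptorSim abstract concerns choosing when replicating a file to a site pays off. A toy cost comparison of the kind such a simulator can evaluate (the cost terms and threshold are illustrative assumptions, not OptorSim's actual strategy):

    ```python
    def should_replicate(expected_accesses, transfer_cost, storage_cost):
        """Replicate when cumulative remote-access cost exceeds the
        one-off transfer plus the cost of holding a local copy."""
        remote_cost = expected_accesses * transfer_cost
        local_cost = transfer_cost + storage_cost  # fetch once, then store
        return remote_cost > local_cost
    ```

    With ten expected accesses and storage cheaper than nine transfers, replication wins; for a file read once, it never does.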
  167. G. Shabratova (Joint Institute for Nuclear Research (JINR))
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    The report presents an analysis of the Alice Data Challenge 2004. This Data Challenge has been performed on two different distributed computing environments. The first one is the Alice Environment for distributed computing (AliEn) used standalone. Presently this environment allows ALICE physicists to obtain results on simulation, reconstruction and analysis of data in ESD format for...
    Go to contribution page
  168. S. Mrenna (FERMILAB)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    PATRIOT is a project that aims to provide better predictions of physics events for the high-Pt physics program of Run2 at the Tevatron collider. Central to PATRIOT is an Enstore mass storage repository for files describing the high-Pt physics predictions. These are typically stored as StdHep files, which can be handled by CDF and D0 and run through detector and triggering...
    Go to contribution page
  169. B. Quinn (The University of Mississippi)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    The D0 experiment at Fermilab's Tevatron will record several petabytes of data over the next five years in pursuing the goals of understanding nature and searching for the origin of mass. Computing resources required to analyze these data far exceed the capabilities of any one institution. Moreover, the widely scattered geographical distribution of collaborators poses further serious...
    Go to contribution page
  170. A. Anjum (NIIT)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    Grid computing provides key infrastructure for distributed problem solving in dynamic virtual organizations. However, Grids are still the domain of a few highly trained programmers with expertise in networking, high-performance computing, and operating systems. One of the big issues in the full-scale usage of a grid is the matching of the resource requirements of a job submission to...
    Go to contribution page
  171. 29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    For the BaBar Computing Group. BaBar has recently moved away from using Objectivity/DB for its event store towards a ROOT-based event store. Data in the new format is produced at about 20 institutions worldwide as well as at SLAC. Among the new challenges are the organization of data export from remote institutions, archival at SLAC, and making the data visible to users for analysis and...
    Go to contribution page
  172. A. Hasan (SLAC)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    We describe the production experience gained from implementing and exclusively using the San Diego Supercomputer Center's Storage Resource Broker (SRB) to distribute the BaBar experiment's production event data, stored in ROOT files, from the experiment center at SLAC, California, USA to a Tier A computing center at CC-IN2P3, Lyon, France. In addition we outline how the system can...
    Go to contribution page
  173. D. Andreotti (INFN Sezione di Ferrara)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    The BaBar experiment has been taking data since 1999. In 2001 the computing group started to evaluate the possibility of evolving toward a distributed computing model in a Grid environment. In 2003 a new computing model, described in other talks, was implemented, and ROOT I/O is now being used as the Event Store. We implemented a system, based on the LHC Computing Grid (LCG) tools, to submit...
    Go to contribution page
  174. I. Terekhov (FERMI NATIONAL ACCELERATOR LABORATORY)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    SAMGrid is a globally distributed system for data handling and job management, developed at Fermilab for the D0 and CDF experiments in Run II. The Condor system is being developed at the University of Wisconsin for management of distributed resources, computational and otherwise. We briefly review the SAMGrid architecture and its interaction with Condor, which was presented earlier. We...
    Go to contribution page
  175. A. Lyon (FERMI NATIONAL ACCELERATOR LABORATORY)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    The SAMGrid team is in the process of implementing a monitoring and information service, which fulfills several important roles in the operation of the SAMGrid system, and will replace the first generation of monitoring tools in the current deployments. The first generation tools are in general based on text logfiles and represent solutions which are not scalable or maintainable. The...
    Go to contribution page
  176. E. Slabospitskaya (Institute for High Energy Physics,Protvino,Russia)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    Storage Resource Manager (SRM) and Grid File Access Library (GFAL) are GRID middleware components used for transparent access to Storage Elements. SRM provides a common interface (a web service) to backend systems, giving dynamic space allocation and file management. GFAL provides a mechanism whereby application software can access a file at a site without having to know which transport...
    Go to contribution page
  177. V. Bartsch (OXFORD UNIVERSITY)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    To distribute computing for CDF (Collider Detector at Fermilab), a system managing local compute and storage resources is needed. For this purpose CDF will use the DCAF (Decentralized CDF Analysis Farms) system which is already in use at Fermilab. DCAF has to work with the data handling system SAM (Sequential Access to data via Metadata). However, both DCAF and SAM are mature systems which...
    Go to contribution page
  178. R. JONES (LANCAS)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    The ATLAS Computing Model is under continuous active development. Previous exercises focussed on the Tier-0/Tier-1 interactions, with an emphasis on the resource implications and only a high-level view of the data and workflow. The work presented here considerably revises the resource implications, and attempts to describe in some detail the data and control flow from the High Level...
    Go to contribution page
  179. Douglas Smith (Stanford Linear Accelerator Center)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    The new BaBar bookkeeping system comes with tools to directly support data analysis tasks. This Task Manager system acts as an interface between datasets defined in the bookkeeping system, which are used as input to analyses, and the offline analysis framework. The Task Manager organizes the processing of the data by creating specific jobs to be either submitted to a batch system or...
    Go to contribution page
  180. A. Boehnlein (FERMI NATIONAL ACCELERATOR LABORATORY)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    The D0 experiment relies on large-scale computing systems to achieve its physics goals. As the experiment's lifetime spans multiple generations of computing hardware, it is fundamental to make projective models of how to use the available resources to meet the anticipated needs. In addition, computing resources can be supplied as in-kind contributions by collaborating institutions and...
    Go to contribution page
  181. C. ARNAULT (CNRS)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    One of the most important problems in software management of a very large and complex project such as Atlas is how to deploy the software on the running sites. By running sites we include computer sites ranging from computing centers in the usual sense down to individual laptops but also the computer elements of a computing grid organization. The deployment activity consists in...
    Go to contribution page
  182. S. Bagnasco (INFN Torino)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    AliEn (ALICE Environment) is a GRID middleware developed and used in the context of ALICE, the CERN LHC heavy-ion experiment. In order to run Data Challenges exploiting both AliEn "native" resources and any infrastructure based on EDG-derived middleware (such as the LCG and the Italian GRID.IT), an interface system was designed and implemented; some details of a prototype were already...
    Go to contribution page
  183. J. Kennedy (LMU Munich)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    This paper presents an overview of the legacy interface provided for the ATLAS DC2 production system. The term legacy refers to any non-grid system which may be deployed for use within DC2. The reasoning behind providing such a service for DC2 is twofold in nature. Firstly, the legacy interface provides a backup solution should unforeseen problems occur while developing the grid...
    Go to contribution page
  184. A. Kreymer (FERMILAB)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    The Fermilab CDF Run-II experiment is now providing official support for remote computing, expanding this to about 1/4 of the total CDF computing during the Summer of 2004. I will discuss in detail the extensions to CDF software distribution and configuration tools and procedures, in support of CDF GRID/DCAF computing for Summer 2004. We face the challenge of unreliable networks, time...
    Go to contribution page
  185. 29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    In the High Energy Physics (HEP) community, Grid technologies have been accepted as solutions to the distributed computing problem. Several Grid projects have provided software in recent years. Among them, the LCG - especially aimed at HEP applications - provides a set of services and respective client interfaces, both in the form of command line tools as well as programming...
    Go to contribution page
  186. P. Cerello (INFN Torino)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    Breast cancer screening programs require managing and accessing a huge amount of data, intrinsically distributed, as they are collected in different Hospitals. The development of an application based on Computer Assisted Detection algorithms for the analysis of digitised mammograms in a distributed environment is a typical GRID use case. In particular, AliEn (ALICE Environment)...
    Go to contribution page
  187. O. SMIRNOVA (Lund University, Sweden)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    The Nordic Grid facility (NorduGrid) came into production operation during the summer of 2002 when the Scandinavian Atlas HEP group started to use the Grid for the Atlas Data Challenges and was thus the first Grid ever contributing to an Atlas production. Since then, the Grid facility has been in continuous 24/7 operation offering an increasing number of resources to a growing set of...
    Go to contribution page
  188. E. Perez-Calle (CIEMAT)
    29/09/2004, 10:00
    Track 4 - Distributed Computing Services
    poster
    Expansion of large computing fabrics/clusters throughout the world creates a need for stricter security. Otherwise any system could suffer damage such as data loss, data falsification or misuse. Perimeter security and intrusion detection systems (IDS) are the two main aspects that must be taken into account in order to achieve system security. The main target of an intrusion...
    Go to contribution page
  189. F. Furano (INFN Padova)
    29/09/2004, 10:00
    Track 5 - Distributed Computing Systems and Experiences
    poster
    This paper describes XTNetFile, the client side of a project conceived to address the high demand data access needs of modern physics experiments such as BaBar using the ROOT framework. In this context, a highly scalable and fault tolerant client/server architecture for data access has been designed and deployed which allows thousands of batch jobs and interactive sessions to...
    Go to contribution page
  190. Stan Williams (HP)
    29/09/2004, 11:00
    Plenary Sessions
    oral presentation
    Today's computers are roughly a factor of one billion less efficient at doing their job than the laws of fundamental physics state that they could be. How much of this efficiency gain will we actually be able to harvest? What are the biggest obstacles to achieving many orders of magnitude improvement in our computing hardware, rather than the roughly factor of two we are used to...
    Go to contribution page
  191. J. ROESE
    29/09/2004, 11:30
    Plenary Sessions
    oral presentation
    Today and in the future businesses need an intelligent network. And Enterasys has the smarter solution. Our active network uses a combination of context-based and embedded security technologies - as well as the industry's first automated response capability - so it can manage who is using your network. Our solution also protects the entire enterprise - from the edge, through the...
    Go to contribution page
  192. Dave McQueeney (IBM)
    29/09/2004, 12:00
    Plenary Sessions
    oral presentation
    The Global Technology Outlook (GTO) is IBM Research's projection of the future for information technology (IT). The GTO identifies progress and trends in key indicators such as raw computing speed, bandwidth, storage, software technology, and business modeling. These new technologies have the potential to radically transform the performance and utility of tomorrow's information processing...
    Go to contribution page
  193. D. Smith (STANFORD LINEAR ACCELERATOR CENTER)
    29/09/2004, 14:00
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    For the BaBar Computing Group. The analysis of the BaBar experiment requires simulated data amounting to many times the measured data. This requirement has resulted in one of the largest distributed computing projects ever completed. The latest round of simulation for BaBar started in early 2003 and completed in early 2004, encompassing over 1 million jobs and over 2.2...
    Go to contribution page
  194. S. NAQVI (TELECOM PARIS)
    29/09/2004, 14:00
    Track 4 - Distributed Computing Services
    oral presentation
    In the evolution of computational grids, security threats were overlooked in the desire to implement a high performance distributed computational system. But now the growing size and profile of the grid require comprehensive security solutions as they are critical to the success of the endeavour. A comprehensive security system, capable of responding to any attack on grid resources, is...
    Go to contribution page
  195. Maria Girone
    29/09/2004, 14:00
    Track 4 - Distributed Computing Services
    oral presentation
    This presentation will summarise the deployment experience gained with POOL during the first large LHC experiment data challenges. In particular we discuss storage access performance and optimisations, integration issues with grid middleware services such as the LCG Replica Location Service (RLS) and the LCG Replica Manager, and experience with the POOL proposed...
    Go to contribution page
  196. R. Itoh (KEK)
    29/09/2004, 14:00
    Track 1 - Online Computing
    oral presentation
    A sizeable increase in the machine luminosity of the KEKB accelerator is expected in coming years. This may result in a shortage of data storage resources for the Belle experiment in the near future, so it is desirable to reduce the data flow as much as possible before writing the data to the storage device. For this purpose, a realtime event reconstruction farm has been installed in...
    Go to contribution page
  197. F. Gaede (DESY IT)
    29/09/2004, 14:00
    Track 3 - Core Software
    oral presentation
    LCIO is a persistency framework and data model for the next linear collider. Its original implementation, as presented at CHEP 2003, was focused on simulation studies. Since then the data model has been extended to also incorporate prototype test beam data, reconstruction and analysis. The design of the interface has also been simplified. LCIO defines a common abstract user...
    Go to contribution page
  198. T. Smith (CERN)
    29/09/2004, 14:00
    Track 6 - Computer Fabrics
    oral presentation
    This paper discusses the challenges in maintaining a stable Managed Storage Service for users built upon dynamic underlying disk and tape layers. Early in 2004 the tools and techniques used to manage disk, tape, and stage servers were refreshed in adopting the QUATTOR tool set. This has markedly increased the coherency and efficiency of the configuration of data servers. The LEMON...
    Go to contribution page
  199. Prof. A. Rimoldi (PAVIA UNIVERSITY & INFN)
    29/09/2004, 14:00
    Track 2 - Event processing
    oral presentation
    The simulation for the ATLAS experiment is presently operational in a full OO environment, and it is presented here in terms of successful solutions to problems dealing with its application in a wide community using a common framework. The ATLAS experiment is the perfect scenario in which to test all applications able to satisfy the different needs of a big community. Following a well stated...
    Go to contribution page
  200. M. Stavrianakou (FNAL)
    29/09/2004, 14:20
    Track 2 - Event processing
    oral presentation
    The CMS detector simulation package, OSCAR, is based on the Geant4 simulation toolkit and the CMS object-oriented framework for simulation and reconstruction. Geant4 provides a rich set of physics processes describing in detail electromagnetic and hadronic interactions. It also provides the tools for the implementation of the full CMS detector geometry and the interfaces required for...
    Go to contribution page
  201. E. Laure (CERN)
    29/09/2004, 14:20
    Track 4 - Distributed Computing Services
    oral presentation
    The aim of EGEE (Enabling Grids for E-Science in Europe) is to create a reliable and dependable European Grid infrastructure for e-Science. The objective of the Middleware Re-engineering and Integration Research Activity is to provide robust middleware components, deployable on several platforms and operating systems, corresponding to the core Grid services for resource access, data...
    Go to contribution page
  202. D. Skow (FERMILAB)
    29/09/2004, 14:20
    Track 4 - Distributed Computing Services
    oral presentation
    There have been a number of efforts to develop use cases for the Grid to guide development and usability testing. This talk examines the value of "mis-use cases" for guiding the development of operational controls and error handling. A couple of the more common current network attack patterns will be extrapolated to a global Grid environment. The talk will walk through the various...
    Go to contribution page
  203. D. Duellmann (CERN IT/DB & LCG POOL PROJECT)
    29/09/2004, 14:20
    Track 3 - Core Software
    oral presentation
    The LCG POOL project is now entering its third year of active development. The basic functionality of the project is provided, but some functional extensions will move into the POOL system this year. This presentation will give a summary of the main functionality provided by POOL, which is used in physics productions today. We will then present the design and implementation of the main new...
    Go to contribution page
  204. 29/09/2004, 14:20
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    The CMS 2004 Data Challenge (DC04) was devised to test several key aspects of the CMS Computing Model in three ways: by trying to sustain a 25 Hz reconstruction rate at the Tier-0; by distributing the reconstructed data to six Tier-1 Regional Centers (FNAL in US, FZK in Germany, Lyon in France, CNAF in Italy, PIC in Spain, RAL in UK) and handling catalogue issues; by redistributing...
    Go to contribution page
  205. M. Richter (Department of Physics and Technology, University of Bergen, Norway)
    29/09/2004, 14:20
    Track 1 - Online Computing
    oral presentation
    The ALICE experiment at LHC will implement a High Level Trigger System, where the information from all major detectors is combined, including the TPC, TRD, DIMUON, ITS etc. The largest computing challenge is imposed by the TPC, requiring realtime pattern recognition. The main task is to reconstruct the tracks in the TPC, and in a final stage combine the tracking information from all...
    Go to contribution page
  206. A. Moibenko (FERMI NATIONAL ACCELERATOR LABORATORY, USA)
    29/09/2004, 14:20
    Track 6 - Computer Fabrics
    oral presentation
    Fermilab has developed and successfully operates the Enstore Data Storage System. It is a primary data store for the Run II Collider Experiments, as well as for others. It provides data storage in robotic tape libraries according to the requirements of the experiments. High fault tolerance and availability, as well as multilevel priority-based request processing, allow experiments to effectively...
    Go to contribution page
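The multilevel priority-based request processing described in this abstract can be sketched as a simple priority queue. This is an illustrative sketch only; the class and request names are invented and do not reflect Enstore's actual implementation:

```python
import heapq
import itertools

class PriorityRequestQueue:
    """Illustrative multilevel priority queue: requests with a lower
    priority number are served first; FIFO order among equal priorities."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker preserving FIFO order

    def submit(self, priority, request):
        heapq.heappush(self._heap, (priority, next(self._counter), request))

    def next_request(self):
        return heapq.heappop(self._heap)[2]

q = PriorityRequestQueue()
q.submit(2, "stage file A")      # ordinary user request
q.submit(0, "restore raw data")  # high-priority production request
q.submit(2, "stage file B")
# the high-priority request jumps the queue; equal priorities stay FIFO
order = [q.next_request() for _ in range(3)]
```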
  207. A. Klimentov (A)
    29/09/2004, 14:40
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    AMS-02 Computing and Ground Data Handling. V.Choutko (MIT, Cambridge), A.Klimentov (MIT, Cambridge) and M.Pohl (Geneva University) AMS (Alpha Magnetic Spectrometer) is an experiment to search in space for dark matter and antimatter on the International Space Station (ISS). The AMS detector had a precursor flight in 1998 (STS-91, June 2-12, 1998)....
    Go to contribution page
  208. H. Meinhard (CERN-IT)
    29/09/2004, 14:40
    Track 6 - Computer Fabrics
    oral presentation
    By 2008, the T0/T1 centre for the LHC at CERN is estimated to use about 5000 TB of disk storage. This is a very significant increase over the roughly 250 TB in use now. In order to be affordable, the chosen technology must provide the required performance and at the same time be cost-effective and easy to operate and use. We will present an analysis of the cost (both in terms of...
    Go to contribution page
  209. P. Sheldon (VANDERBILT UNIVERSITY)
    29/09/2004, 14:40
    Track 1 - Online Computing
    oral presentation
    The BTeV experiment, a proton/antiproton collider experiment at the Fermi National Accelerator Laboratory, will have a trigger that will perform complex computations (to reconstruct vertices, for example) on every collision (as opposed to the more traditional approach of employing a first level hardware based trigger). This trigger requires large-scale fault adaptive embedded software: ...
    Go to contribution page
  210. Giacomo Govi
    29/09/2004, 14:40
    Track 3 - Core Software
    oral presentation
    The POOL software package has been successfully integrated with the three large experiment software frameworks of ATLAS, CMS and LHCb. This presentation will summarise the experience gained during these integration efforts and will try to highlight the commonalities and the main differences between the integration approaches. In particular we'll discuss the role of the POOL object cache,...
    Go to contribution page
  211. C. Steenberg (California Institute of Technology)
    29/09/2004, 14:40
    Track 4 - Distributed Computing Services
    oral presentation
    Clarens enables distributed, secure and high-performance access to the worldwide data storage, compute, and information Grids being constructed in anticipation of the needs of the Large Hadron Collider at CERN. We report on the rapid progress in the development of a second server implementation in the Java language, the evolution of a peer-to-peer network of Clarens servers, and general...
    Go to contribution page
  212. A. Gheata (CERN)
    29/09/2004, 14:40
    Track 2 - Event processing
    oral presentation
    The current major detector simulation programs, i.e. GEANT3, GEANT4 and FLUKA have largely incompatible environments. This forces physicists wishing to make comparisons between the different transport Monte Carlos to develop entirely different programs. Moreover, migration from one program to the other is usually very expensive, in manpower and time, for an experiment offline...
    Go to contribution page
  213. M. Cardenas Montes (CIEMAT)
    29/09/2004, 14:40
    Track 4 - Distributed Computing Services
    oral presentation
    Implementing strategies for secured access to widely accessible clusters is a basic requirement of these services, in particular if GRID integration is sought. This issue has two complementary lines to be considered: security perimeter and intrusion detection systems. In this paper we address aspects of the second one. Compared to classical intrusion detection mechanisms, close...
    Go to contribution page
  214. S. Wiesand (DESY)
    29/09/2004, 15:00
    Track 6 - Computer Fabrics
    oral presentation
    64-bit commodity clusters and farms based on AMD technology have meanwhile been proven to achieve high computing power in many scientific applications. This report first gives a short introduction to the specialties of the amd64 architecture and the characteristics of two-way Opteron systems. Then results from measuring the performance and the behavior of such systems in various...
    Go to contribution page
  215. R. Panse (KIRCHHOFF INSTITUTE FOR PHYSICS - UNIVERSITY OF HEIDELBERG)
    29/09/2004, 15:00
    Track 1 - Online Computing
    oral presentation
    Super-computers will be replaced more and more by PC cluster systems. Future LHC experiments will also use large PC clusters. These clusters will consist of off-the-shelf PCs, which in general are not built to run in a PC farm. Configuring, monitoring and controlling such clusters requires a serious amount of time-consuming administrative effort. We propose a cheap and easy...
    Go to contribution page
  216. A. Fanfani (INFN-BOLOGNA (ITALY))
    29/09/2004, 15:00
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    In March-April 2004 the CMS experiment undertook a Data Challenge (DC04). During the previous 8 months CMS undertook a large simulated event production. The goal of the challenge was to run CMS reconstruction for a sustained period at 25 Hz input rate, distribute the data to the CMS Tier-1 centers and analyze them at remote sites. Grid environments developed in Europe by the LHC...
    Go to contribution page
  217. Birger KOBLITZ (CERN)
    29/09/2004, 15:00
    Track 4 - Distributed Computing Services
    oral presentation
    The ARDA project was started in April 2004 to support the four LHC experiments (ALICE, ATLAS, CMS and LHCb) in the implementation of individual production and analysis environments based on the EGEE middleware. The main goal of the project is to allow a fast feedback between the experiment and the middleware development teams via the construction and the usage of end-to-end...
    Go to contribution page
  218. M. POTEKHIN (BROOKHAVEN NATIONAL LABORATORY)
    29/09/2004, 15:00
    Track 2 - Event processing
    oral presentation
    The STAR Collaboration is currently using simulation software based on Geant 3. The emergence of the new Monte Carlo simulation packages, coupled with the evolution of both the STAR detector and its software, requires a drastic change of the simulation framework. We see the Virtual Monte Carlo (VMC) approach as providing a layer of abstraction that facilitates such a transition. The VMC...
    Go to contribution page
  219. P. Canal (FERMILAB)
    29/09/2004, 15:00
    Track 3 - Core Software
    oral presentation
    Since version 3.05/02, the ROOT I/O System has gone through significant enhancements. In particular, the STL container I/O has been upgraded to support splitting, reading without existing libraries and using directly from TTreeFormula (TTree queries). This upgrade to the I/O system is such that it can be easily extended (even by the users) to support the splitting and querying of...
    Go to contribution page
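The splitting this abstract refers to means storing each data member of a container's objects as its own column, so a query touches only the members it needs. The following is a minimal, ROOT-independent sketch of the idea; the names and data are invented for illustration and this is not ROOT's TTree mechanism:

```python
# A container of objects, represented here as plain dictionaries.
tracks = [{"pt": 12.5, "eta": -0.8}, {"pt": 4.2, "eta": 1.1}, {"pt": 7.9, "eta": 0.3}]

def split(objects):
    """Transpose an object container into one list ("branch") per member."""
    columns = {}
    for obj in objects:
        for member, value in obj.items():
            columns.setdefault(member, []).append(value)
    return columns

cols = split(tracks)
# a query on pt now reads only the pt column, not the whole objects
high_pt = [pt for pt in cols["pt"] if pt > 5.0]
```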
  220. M. Branco (CERN)
    29/09/2004, 15:00
    Track 4 - Distributed Computing Services
    oral presentation
    In a resource-sharing environment on the grid, both grid users and grid production managers call for security and data protection from unauthorized access. To secure data management, several novel grid technologies were introduced in ATLAS data management. Our presentation will review new grid technologies introduced in the HEP production environment for database access through the Grid...
    Go to contribution page
  221. S. Jarp (CERN)
    29/09/2004, 15:20
    Track 6 - Computer Fabrics
    oral presentation
    For the last 18 months CERN has collaborated closely with several industrial partners to evaluate, through the opencluster project, technology that may (and hopefully will) play a strong role in future computing solutions, primarily for LHC but possibly also for other HEP computing environments. Unlike conventional field testing where solutions from industry are evaluated rather...
    Go to contribution page
  222. Rob KENNEDY (FNAL)
    29/09/2004, 15:20
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    Most of the simulated events for the DZero experiment at Fermilab have historically been produced by the "remote" collaborating institutions. One of the principal challenges reported concerns the maintenance of the local software infrastructure, which is generally different from site to site. As the understanding of the community on distributed computing over distributively owned and...
    Go to contribution page
  223. F. Rademakers (CERN)
    29/09/2004, 15:20
    Track 4 - Distributed Computing Services
    oral presentation
    The ALICE experiment and the ROOT team have developed a Grid-enabled version of PROOF that allows efficient parallel processing of large and distributed data samples. This system has been integrated with the ALICE-developed AliEn middleware. Parallelism is implemented at the level of each local cluster for efficient processing and at the Grid level, for optimal workload management of...
    Go to contribution page
  224. A. McNab (UNIVERSITY OF MANCHESTER)
    29/09/2004, 15:20
    Track 4 - Distributed Computing Services
    oral presentation
    We describe the GridSite authorization system, developed by GridPP and the EU DataGrid project for access control in High Energy Physics grid environments with distributed virtual organizations. This system provides a general toolkit of common functions, including the evaluation of access policies (in GACL or XACML), the manipulation of digital credentials (X.509, GSI Proxies or VOMS...
    Go to contribution page
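The evaluation of access policies against presented credentials, as GridSite does for GACL or XACML policies, can be sketched in a few lines. The policy structure and names below are invented for illustration and do not reflect GACL's or XACML's actual syntax or GridSite's API:

```python
# Each policy entry grants a set of permissions to one credential,
# here either an X.509 subject DN or a VO group (hypothetical examples).
policy = [
    {"credential": "/O=Grid/CN=Alice",  "allow": {"read", "write"}},
    {"credential": "group:/atlas/prod", "allow": {"read"}},
]

def permissions_for(credentials):
    """Union of permissions granted to any of the presented credentials."""
    granted = set()
    for entry in policy:
        if entry["credential"] in credentials:
            granted |= entry["allow"]
    return granted

# a user presenting both a personal DN and a VO group membership
perms = permissions_for({"/O=Grid/CN=Alice", "group:/atlas/prod"})
```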
  225. A. Campbell (DESY)
    29/09/2004, 15:20
    Track 1 - Online Computing
    oral presentation
    We present the scheme in use for online high level filtering, event reconstruction and classification in the H1 experiment at HERA since 2001. The Data Flow framework (presented at CHEP2001) will be reviewed. This is based on CORBA for all data transfer, multi-threaded C++ code to handle the data flow and synchronisation, and Fortran code for reconstruction and event selection. A...
    Go to contribution page
  226. O. van der Aa (INSTITUT DE PHYSIQUE NUCLEAIRE, UNIVERSITE CATHOLIQUE DE LOUVAIN)
    29/09/2004, 15:20
    Track 2 - Event processing
    oral presentation
    The observation of Higgs bosons predicted in supersymmetric theories will be a challenging task for the CMS experiment at the LHC, in particular for its High Level Trigger (HLT). A prototype of the High Level Trigger software to be used in the filter farm of the CMS experiment and for the filtering of Monte Carlo samples will be presented. The implemented prototype heavily uses...
    Go to contribution page
  227. S. Linev (GSI)
    29/09/2004, 15:20
    Track 3 - Core Software
    oral presentation
    Until now, ROOT objects could be stored only in a binary ROOT-specific file format. Without the ROOT environment the data stored in such files are not directly accessible. Storing objects in XML format makes it easy to view and edit (with some restrictions) the object data directly. It is also plausible to use XML as an exchange format with other applications. Therefore XML streaming has been...
    Go to contribution page
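The principle of streaming an object's data members into a human-readable XML representation can be shown without ROOT. This is an illustrative analogue of the idea, not ROOT's streamer implementation; the `Track` class and member names are invented:

```python
import xml.etree.ElementTree as ET

class Track:
    """Hypothetical data object with a few members."""
    def __init__(self, pt, eta, charge):
        self.pt, self.eta, self.charge = pt, eta, charge

def stream_to_xml(obj):
    """Write each data member as a child element, so the stored object
    is directly viewable and editable without the originating framework."""
    root = ET.Element(type(obj).__name__)
    for name, value in vars(obj).items():
        child = ET.SubElement(root, name)
        child.text = repr(value)
    return ET.tostring(root, encoding="unicode")

xml_text = stream_to_xml(Track(12.5, -0.8, 1))
```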
  228. Dr N. Konstantinidis (UNIVERSITY COLLEGE LONDON)
    29/09/2004, 15:40
    Track 2 - Event processing
    oral presentation
    We present a set of algorithms for fast pattern recognition and track reconstruction using 3D space points aimed for the High Level Triggers (HLT) of multi-collision hadron collider environments. At the LHC there are several interactions per bunch crossing separated along the beam direction, z. The strategy we follow is to (a) identify the z-position of the interesting interaction...
    Go to contribution page
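Step (a), identifying the z-position of the interesting interaction among several pile-up interactions, is commonly done by histogramming per-track z estimates along the beam line and picking the most populated bin. The following sketch illustrates that general technique only and is not the algorithm presented in this contribution:

```python
from collections import Counter

def find_z_vertex(z_estimates, bin_width=1.0):
    """Histogram per-track z estimates (e.g. in mm) along the beam line
    and return the centre of the most populated bin."""
    bins = Counter(int(z // bin_width) for z in z_estimates)
    best_bin, _ = bins.most_common(1)[0]
    return (best_bin + 0.5) * bin_width

# hypothetical data: tracks from the interesting interaction cluster
# near z = 3 mm, while pile-up tracks are spread along z
z_vals = [2.7, 3.1, 3.4, 2.9, -8.2, 15.6, 3.2, -1.4]
vertex_z = find_z_vertex(z_vals)
```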
  229. A. Heiss (FORSCHUNGSZENTRUM KARLSRUHE)
    29/09/2004, 15:40
    Track 6 - Computer Fabrics
    oral presentation
    Distributed physics analysis techniques as provided by the rootd and proofd concepts require a fast and efficient interconnect between the nodes. Apart from the required bandwidth, the latency of message transfers is important, in particular in environments with many nodes. Ethernet is known to have large latencies, between 30 and 60 microseconds for common Gigabit Ethernet. The...
    Go to contribution page
  230. T. Shears (University of Liverpool)
    29/09/2004, 15:40
    Track 1 - Online Computing
    oral presentation
    The Level 1 and High Level triggers for the LHCb experiment are software triggers which will be implemented on a farm of about 1800 CPUs, connected to the detector read-out system by a large Gigabit Ethernet LAN with a capacity of 8 Gigabyte/s and some 500 Gigabit Ethernet links. The architecture of the readout network must be designed to maximise data throughput, control data flow,...
    Go to contribution page
  231. T. Barrass (CMS, UNIVERSITY OF BRISTOL)
    29/09/2004, 15:40
    Track 4 - Distributed Computing Services
    oral presentation
    CMS currently uses a number of tools to transfer data which, taken together, form the basis of a heterogeneous datagrid. The range of tools used, and the directed rather than optimised nature of CMS's recent large-scale data challenge, required the creation of a simple infrastructure that allowed a range of tools to operate in a complementary way. The system created comprises a...
    Go to contribution page
  232. A. Peters (ce)
    29/09/2004, 15:40
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    During the first half of 2004 the ALICE experiment performed a large distributed computing exercise with two major objectives: to test the ALICE computing model, including distributed analysis, and to provide a data sample for a refinement of the ALICE Jet physics Monte-Carlo studies. Simulation, reconstruction and analysis of several hundred thousand events were performed, using the...
    Go to contribution page
  233. T. Johnson (SLAC)
    29/09/2004, 15:40
    Track 3 - Core Software
    oral presentation
    The FreeHEP Java library contains a complete implementation of Root IO for Java. The library uses the "Streamer Info" embedded in files created by Root 3.x to dynamically create high performance Java proxies for Root objects, making it possible to read any Root file, including files with user defined objects. In this presentation we will discuss the status of this code, explain its...
    Go to contribution page
  234. M. Crawford (FERMILAB)
    29/09/2004, 16:30
    Track 4 - Distributed Computing Services
    oral presentation
    As an underpinning of AFS and Windows 2000, and as a formally proven security protocol in its own right, Kerberos is ubiquitous among HEP sites. Fermilab and users from other sites have taken advantage of this and built a diversity of distributed applications over Kerberos v5. We present several projects in which this security infrastructure has been leveraged to meet the requirements of...
    Go to contribution page
  235. J-D. Durand (CERN)
    29/09/2004, 16:30
    Track 6 - Computer Fabrics
    oral presentation
    The CERN Advanced STORage (CASTOR) system is a scalable high-throughput hierarchical storage system developed at CERN. CASTOR was first deployed for full production use in 2001 and has since expanded to manage around two petabytes and almost 20 million files. CASTOR is a modular system, providing a distributed disk cache, a stager, and a back-end tape archive, accessible via a global...
    Go to contribution page
  236. C. Jones (CORNELL UNIVERSITY)
    29/09/2004, 16:30
    Track 3 - Core Software
    oral presentation
    HEP analysis is an iterative process. It is critical that in each iteration the physicist's analysis job accesses the same information as previous iterations (unless explicitly told to do otherwise). This becomes problematic after the data has been reconstructed several times. In addition, when starting a new analysis, physicists normally want to use the most recent version of...
    Go to contribution page
  237. 29/09/2004, 16:30
    Track 4 - Distributed Computing Services
    oral presentation
    SAM was developed as a data handling system for Run II at Fermilab. SAM is a collection of services, each described by metadata. The metadata are modeled relationally and implemented in Oracle. SAM, originally deployed in production for the D0 Run II experiment, has now also been deployed at CDF and is being commissioned at MINOS. This illustrates that the metadata...
    Go to contribution page
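The idea of modelling file metadata relationally, so that a dataset becomes a declarative query over the catalogue, can be sketched with an in-memory database. The schema, table names and values below are invented for illustration and are not SAM's actual schema:

```python
import sqlite3

# Hypothetical two-table catalogue: files, plus key/value metadata per file.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE files (file_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE metadata (file_id INTEGER REFERENCES files,
                           key TEXT, value TEXT);
""")
con.execute("INSERT INTO files VALUES (1, 'run1234_raw.dat')")
con.execute("INSERT INTO files VALUES (2, 'run1235_raw.dat')")
con.executemany("INSERT INTO metadata VALUES (?, ?, ?)",
                [(1, "run_type", "physics"), (1, "trigger", "dimuon"),
                 (2, "run_type", "calibration")])

# A "dataset" is just a query over the metadata, not a static file list.
rows = con.execute("""
    SELECT f.name FROM files f JOIN metadata m ON f.file_id = m.file_id
    WHERE m.key = 'run_type' AND m.value = 'physics'
""").fetchall()
```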
  238. Manuel Dias-Gomez (University of Geneva, Switzerland)
    29/09/2004, 16:30
    Track 2 - Event processing
    oral presentation
    The ATLAS experiment at the Large Hadron Collider (LHC) will face the challenge of efficiently selecting interesting candidate events in pp collisions at 14 TeV center-of-mass energy, whilst rejecting the enormous number of background events, stemming from an interaction rate of about 10^9 Hz. The Level-1 trigger will reduce the incoming rate to around O(100 kHz). Subsequently, the...
    Go to contribution page
  239. V. Gyurjyan (Jefferson Lab)
    29/09/2004, 16:30
    Track 1 - Online Computing
    oral presentation
    A general overview of the Jefferson Lab data acquisition run control system is presented. This run control system is designed to operate the configuration, control, and monitoring of all Jefferson Lab experiments. It controls data-taking activities by coordinating the operation of DAQ sub-systems, online software components and third-party software such as external slow control...
    Go to contribution page
  240. J. Closier (CERN)
    29/09/2004, 16:30
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    The LHCb experiment performed its latest Data Challenge (DC) in May-July 2004. The main goal was to demonstrate the ability of the LHCb grid system to carry out massive production and efficient distributed analysis of the simulation data. The LHCb production system called DIRAC provided all the necessary services for the DC: Production and Bookkeeping Databases, File catalogs, Workload...
    Go to contribution page
  241. M. Mambelli (UNIVERSITY OF CHICAGO)
    29/09/2004, 16:50
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    We describe the design and operational experience of the ATLAS production system as implemented for execution on Grid3 resources. The execution environment consisted of a number of grid-based tools: Pacman for installation of VDT-based Grid3 services and ATLAS software releases, the Capone execution service built from the Chimera/Pegasus virtual data system for directed acyclic graph...
    Go to contribution page
  242. S. Albrand (LPSC)
    29/09/2004, 16:50
    Track 3 - Core Software
    oral presentation
    The ATLAS Metadata Interface (AMI) project provides a set of generic tools for managing database applications. AMI has a three-tier architecture with a core that supports a connection to any RDBMS using JDBC and SQL. The middle layer assumes that the databases have an AMI compliant self-describing structure. It provides a generic web interface and a generic command line interface. The...
    Go to contribution page
  243. G. GANIS (CERN)
    29/09/2004, 16:50
    Track 4 - Distributed Computing Services
    oral presentation
    The new authentication and security services available in the ROOT framework for client/server applications will be described. The authentication scheme has been designed to make the system complete and flexible, fitting the needs of upcoming clusters and facilities. Three authentication methods have been made available: Globus/GSI, for GRID-awareness; SSH, to allow...
    Go to contribution page
  244. P. Fuhrmann (DESY)
    29/09/2004, 16:50
    Track 6 - Computer Fabrics
    oral presentation
    The dCache software system has been designed to manage a huge number of individual disk storage nodes and let them appear under a single file system root. Besides a variety of other features, it supports the GridFtp dialect, implements the Storage Resource Manager interface (SRM V1) and can be linked against the CERN GFAL software layer. These abilities make dCache a perfect Storage...
    Go to contribution page
  245. Edward Moyse
    29/09/2004, 16:50
    Track 2 - Event processing
    oral presentation
    The event data model (EDM) of the ATLAS experiment is presented. For large collaborations like the ATLAS experiment, common interfaces and data objects are a necessity to ensure easy maintenance and coherence of the experiment's software platform over a long period of time. The ATLAS EDM improves commonality across the detector subsystems and subgroups such as trigger, test beam...
    Go to contribution page
  246. E. Neilsen (FERMI NATIONAL ACCELERATOR LABORATORY)
    29/09/2004, 16:50
    Track 4 - Distributed Computing Services
    oral presentation
    The lattice gauge theory community produces large volumes of data. Because the data produced by completed computations form the basis for future work, the maintenance of archives of existing data and metadata describing the provenance, generation parameters, and derived characteristics of that data is essential not only as a reference, but also as a basis for future work. Development of...
    Go to contribution page
  247. F. Carena (CERN)
    29/09/2004, 16:50
    Track 1 - Online Computing
    oral presentation
    The Experiment Control System (ECS) is the top level of control of the ALICE experiment. Running an experiment implies performing a set of activities on the online systems that control the operation of the detectors. In ALICE, online systems are the Trigger, the Detector Control Systems (DCS), the Data-Acquisition System (DAQ) and the High-Level Trigger (HLT). The ECS provides a...
    Go to contribution page
  248. G. Carcassi (BROOKHAVEN NATIONAL LABORATORY)
    29/09/2004, 17:10
    Track 4 - Distributed Computing Services
    oral presentation
    We present a work-in-progress system, called GUMS, which automates the processes of Grid user registration and management and supports policy-aware authorization as well. GUMS builds on existing VO management tools (LDAP VO, VOMS and VOMRS) with a local grid user management system and a site database which stores user credentials, accounting history and policies in XML format. We use...
    Go to contribution page
  249. M. Case (UNIVERSITY OF CALIFORNIA, DAVIS)
    29/09/2004, 17:10
    Track 3 - Core Software
    oral presentation
    The CMS Detector Description Database (DDD) consists of a C++ API and an XML based detector description language. DDD is used by the CMS simulation (OSCAR), reconstruction (ORCA), and visualization (IGUANA) as well as by test beam software that relies on those systems. The DDD is a sub-system within the COBRA framework of the CMS Core Software. Management of the XML is currently done using a...
    Go to contribution page
  250. D. Liko (CERN)
    29/09/2004, 17:10
    Track 1 - Online Computing
    oral presentation
    The unprecedented size and complexity of the ATLAS TDAQ system requires a comprehensive and flexible control system. Its role ranges from the so-called run-control, e.g. starting and stopping the data taking, to error handling and fault tolerance. It also includes initialisation and verification of the overall system. Following the traditional approach, a hierarchical system of...
    Go to contribution page
  251. Dr M. Steinke (Ruhr Universitaet Bochum)
    29/09/2004, 17:10
    Track 2 - Event processing
    oral presentation
    In the past year, BaBar has shifted from using Objectivity to using ROOT I/O as the basis for our primary event store. This shift required a total reworking of Kanga, our ROOT-based data storage format. We took advantage of this opportunity to ease the use of the data by supporting multiple access modes that make use of many of the analysis tools available in ROOT. Specifically, our...
    Go to contribution page
  252. Richard Mount (SLAC)
    29/09/2004, 17:10
    Track 4 - Distributed Computing Services
    oral presentation
  253. 29/09/2004, 17:10
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    This talk describes the various stages of ATLAS Data Challenge 2 (DC2) in what concerns usage of resources deployed via NorduGrid's Advanced Resource Connector (ARC). It also describes the integration of these resources with the ATLAS production system using the Dulcinea executor. ATLAS Data Challenge 2 (DC2), run in 2004, was designed to be a step forward in the distributed data...
    Go to contribution page
  254. T. Perelmutov (FERMI NATIONAL ACCELERATOR LABORATORY)
    29/09/2004, 17:10
    Track 6 - Computer Fabrics
    oral presentation
    Storage Resource Managers (SRMs) are middleware components whose function is to provide dynamic space allocation and file management on shared storage components on the Grid. SRMs support protocol negotiation and a reliable replication mechanism. The SRM standard allows independent institutions to implement their own SRMs, thus allowing for uniform access to heterogeneous storage...
    Go to contribution page
  255. A. Amorim (FACULTY OF SCIENCES OF THE UNIVERSITY OF LISBON)
    29/09/2004, 17:30
    Track 3 - Core Software
    oral presentation
    The size and complexity of the present HEP experiments represent an enormous effort in data persistency. These efforts imply a tremendous investment in the database field, not only for the event data but also for the data needed to qualify it: the Conditions Data. In the present document we'll describe the strategy for addressing the Conditions data problem in the...
    Go to contribution page
  256. G. Watts (UNIVERSITY OF WASHINGTON)
    29/09/2004, 17:30
    Track 1 - Online Computing
    oral presentation
    The DZERO Collider Experiment logs much of its Data Acquisition Monitoring Information in long term storage. This information is most frequently used to understand shift history and efficiency. Approximately two kilobytes of information are stored every 15 seconds. We describe this system and the web interface provided. The current system is distributed, running on Linux for the back end...
    Go to contribution page
  257. Y. Iida (HIGH ENERGY ACCELERATOR RESEARCH ORGANIZATION)
    29/09/2004, 17:30
    Track 6 - Computer Fabrics
    oral presentation
    The Belle experiment has accumulated an integrated luminosity of more than 240 fb-1 so far, and the daily logged luminosity now exceeds 800 pb-1. These numbers correspond to more than 1 PB of raw and processed data stored on tape and an accumulation of the raw data at the rate of 1 TB/day. The processed, compactified data, together with Monte Carlo simulation data for the final physics...
    Go to contribution page
  258. Ian FISK (FNAL)
    29/09/2004, 17:30
    Track 4 - Distributed Computing Services
    oral presentation
    Current grid development projects are being designed such that they require end users to be authenticated under the auspices of a "recognized" organization, called a Virtual Organization (VO). A VO must establish resource-usage agreements with grid resource providers. The VO is responsible for authorizing its members for grid computing privileges. The individual sites and resources...
    Go to contribution page
  259. Dr S. Wynhoff (PRINCETON UNIVERSITY)
    29/09/2004, 17:30
    Track 2 - Event processing
    oral presentation
    We report on the software for Object-oriented Reconstruction for CMS Analysis, ORCA. It is based on the Coherent Object-oriented Base for Reconstruction, Analysis and simulation (COBRA) and used for digitization and reconstruction of simulated Monte-Carlo events as well as testbeam data. For the 2004 data challenge the functionality of the software has been extended to store...
    Go to contribution page
  260. I. Gaponenko (LAWRENCE BERKELEY NATIONAL LABORATORY)
    29/09/2004, 17:50
    Track 3 - Core Software
    oral presentation
    A new, completely redesigned Condition/DB was deployed in BaBar in October 2002. It replaced the old database software used through the first three and a half years of data taking. The new software addresses the performance and scalability limitations of the original database. However, this major redesign brought in a new model of the metadata, brand new technology- and implementation-...
    Go to contribution page
  261. 29/09/2004, 17:50
    Track 4 - Distributed Computing Services
    oral presentation
    A key feature of Grid systems is the sharing of resources among multiple Virtual Organizations (VOs). The sharing process needs a policy framework to manage resource access and usage. Policy frameworks generally exist only for farms or local systems; for Grid environments a general, distributed policy system is needed. Generally VOs and local systems have...
    Go to contribution page
  262. Dr J. Katzy (DESY, HAMBURG)
    29/09/2004, 17:50
    Track 2 - Event processing
    oral presentation
    During the years 2000 and 2001 the HERA machine and the H1 experiment performed substantial luminosity upgrades. To cope with the increased demands on data handling, an effort was made to redesign and modernize the analysis software. The main goals were to lower the turn-around time for physics analysis by providing a single framework for data storage, event selection, physics analysis and...
    Go to contribution page
  263. L. Magnoni (INFN-CNAF)
    29/09/2004, 17:50
    Track 6 - Computer Fabrics
    oral presentation
    Within a Grid the possibility of managing storage space is fundamental, in particular, before and during application execution. On the other hand, the increasing availability of highly performant computing resources raises the need for fast and efficient I/O operations and drives the development of parallel distributed file systems able to satisfy these needs granting access to distributed...
    Go to contribution page
  264. L. Abadie (CERN)
    29/09/2004, 17:50
    Track 1 - Online Computing
    oral presentation
    The aim of the LHCb configuration database is to store all the controllable devices of the detector. The experiment's control system (that uses PVSS) will configure, start up and monitor the detector from the information in the configuration database. The database will contain devices with their properties, connectivity and hierarchy. The ability to rapidly store and retrieve huge amounts...
    Go to contribution page
  265. T.M. Steinbeck (KIRCHHOFF INSTITUTE OF PHYSICS, RUPRECHT-KARLS-UNIVERSITY HEIDELBERG, for the Alice Collaboration)
    29/09/2004, 18:10
    Track 1 - Online Computing
    oral presentation
    The Alice High Level Trigger (HLT) cluster is foreseen to consist of 400 to 500 dual SMP PCs at the start-up of the experiment. The software running on these PCs will consist of components communicating via a defined interface, allowing flexible software configurations. During Alice's operation the HLT has to be continuously active to avoid detector dead time. To ensure that the...
    Go to contribution page
  266. C. Pruneau (WAYNE STATE UNIVERSITY)
    29/09/2004, 18:10
    Track 2 - Event processing
    oral presentation
    We present the design and performance analysis of a new event reconstruction chain deployed for analysis of STAR data acquired during the 2004 run and beyond. The creation of this new chain involved the elimination of obsolete FORTRAN components, and the development of equivalent or superior modules written in C++. The new reconstruction chain features a new and fast TPC cluster finder,...
    Go to contribution page
  267. A. Valassi (CERN)
    29/09/2004, 18:10
    Track 3 - Core Software
    oral presentation
    The Conditions Database project has been launched to implement a common persistency solution for experiment conditions data in the context of the LHC Computing Grid (LCG) Persistency Framework. Conditions data, such as calibration, alignment or slow control data, are non-event experiment data characterized by the fact that they vary in time and may have different versions. The LCG...
    Go to contribution page
  268. S. Veseli (Fermilab)
    29/09/2004, 18:10
    Track 6 - Computer Fabrics
    oral presentation
    The SAMGrid Database Server encapsulates several important services, such as accessing file metadata and replica catalog, keeping track of the processing information, as well as providing the runtime support for SAMGrid station services. Recent deployment of the SAMGrid system for CDF has resulted in unification of the database schema used by CDF and D0, and the complexity of changes...
    Go to contribution page
  269. M. Paterno (FERMILAB)
    30/09/2004, 08:30
    Plenary Sessions
    oral presentation
    As Fermilab's representatives to the C++ standardization effort, we have been promoting directions of special interest to the physics community. We here report on selected recent developments toward the next revision of the C++ Standard. Topics will include standardization of random number and special function libraries, as well as core language issues promoting improved run-time...
    Go to contribution page
  270. Fabiola Gianotti (CERN)
    30/09/2004, 09:00
    Plenary Sessions
    oral presentation
    The LHC Software will be confronted with unprecedented challenges as soon as the LHC turns on. We summarize the main Software requirements coming from the LHC detectors, triggers and physics, and we discuss several examples of Software components developed by the experiments and the LCG project (simulation, reconstruction, etc.), their validation, and their adequacy for LHC physics.
    Go to contribution page
  271. David Stickland (CERN)
    30/09/2004, 09:30
    Plenary Sessions
    oral presentation
    The LHC experiments are undertaking various data challenges in the run-up to the completion of their computing models and the submission of the experiments' and the LHC Computing Grid (LCG) Technical Design Reports (TDRs) in 2005. In this talk we summarize the current working LHC Computing Models, identifying their similarities and differences. We summarize the results and...
    Go to contribution page
  272. A. CERVERA VILLANUEVA (University of Geneva)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    We have developed a C++ software package, called "RecPack", which allows the reconstruction of dynamic trajectories in any experimental setup. The basic utility of the package is the fitting of trajectories in the presence of random and systematic perturbations to the system (multiple scattering, energy loss, inhomogeneous magnetic fields, etc.) via a Kalman Filter fit. It also...
    Go to contribution page
  273. 30/09/2004, 10:00
    Track 3 - Core Software
    poster
    Building a state of the art high energy physics detector like CMS requires strict interoperability and coherency in the design and construction of all sub-systems comprising the detector. This issue is especially critical for the many database components that are planned for storage of the various categories of data related to the construction, operation, and maintenance of the...
    Go to contribution page
  274. F. Gray (UNIVERSITY OF CALIFORNIA, BERKELEY)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    The muCap experiment at the Paul Scherrer Institut (PSI) will measure the rate of muon capture on the proton to a precision of 1% by comparing the apparent lifetimes of positive and negative muons in hydrogen. This rate may be related to the induced pseudoscalar weak form factor of the proton. Superficially, the muCap apparatus looks something like a miniature model of a collider...
    Go to contribution page
  275. A. Di Meglio (CERN)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    Software Configuration Management (SCM) Patterns and the Continuous Integration method are recent and powerful techniques to enforce a common software engineering process across large, heterogeneous, rapidly changing development projects where a rapid release lifecycle is required. In particular the Continuous Integration method allows tracking and addressing problems in the...
    Go to contribution page
  276. W. Waltenberger (HEPHY VIENNA)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    State of the art in the field of fitting particle tracks to one vertex is the Kalman technique. This least-squares (LS) estimator is known to be ideal in the case of perfect assignment of tracks to vertices and perfectly known Gaussian errors. Experimental data and detailed simulations always depart from this perfect model. The imperfections can be expected to be larger in high...
    Go to contribution page
  277. 30/09/2004, 10:00
    Track 3 - Core Software
    poster
    In addition to the well-known challenges of computing and data handling at LHC scales, LHC experiments have also approached the scalability limit of manual management and control of the steering parameters ("primary numbers") provided to their software systems. The laborious task of detector description benefits from the implementation of a scalable relational database approach. We...
    Go to contribution page
  278. A. Undrus (BROOKHAVEN NATIONAL LABORATORY, USA)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    Software testing is a difficult, time-consuming process that requires technical sophistication and proper planning. This is especially true for the large-scale software projects of High Energy Physics, where constant modifications and enhancements are typical. Automated nightly testing is an important component of NICOS, the NIghtly COntrol System, that manages the multi-platform nightly...
    Go to contribution page
  279. M. Stoufer (LAWRENCE BERKELEY NATIONAL LAB)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    As any software project grows in both its collaborative and mixed codebase nature, current tools like CVS and Maven start to sag under the pressure of complex sub-project dependencies and versioning. A developer-wide failure in mastery of these tools will inevitably lead to an unrecoverable instability of a project. Even keeping a single software project stable in a large collaborative...
    Go to contribution page
  280. Mr V. Onuchin (CERN, IHEP)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    Carrot is a scripting module for the Apache webserver. Based on the ROOT framework, it has a number of powerful features, including the ability to embed C++ code into HTML pages, run interpreted and compiled C++ macros, send and execute C++ code on remote web servers, browse and analyse the remote data located in ROOT files with the web browser, access and manipulate databases, and...
    Go to contribution page
  281. A. Zaytsev (BUDKER INSTITUTE OF NUCLEAR PHYSICS)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    CMD-3 is the general purpose cryogenic magnetic detector for the VEPP-2000 electron-positron collider, which is being commissioned at the Budker Institute of Nuclear Physics (BINP, Novosibirsk, Russia). The main aspects of the physics program of the experiment are the study of known and the search for new vector mesons, the study of the ppbar and nnbar production cross sections in the vicinity of the threshold and...
    Go to contribution page
  282. K. Rabbertz (UNIVERSITY OF KARLSRUHE)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    For data analysis in an international collaboration it is important to have an efficient procedure to distribute, install and update the centrally maintained software. This is even more true when not only locally but also grid accessible resources are to be exploited. A practical solution will be presented that has been successfully employed for CMS software installations on systems...
    Go to contribution page
  283. M.S. Mennea (UNIVERSITY & INFN BARI)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    This document will review the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this subdetector (more than 50 million channels organized in 17000 modules, each of these being a complete detector), the standard CMS visualisation tools (IGUANA and IGUANACMS) that provide basic 3D capabilities and...
    Go to contribution page
  284. M.G. Pia (INFN GENOVA)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    A Toolkit for Statistical Data Analysis has been recently released. Thanks to this novel software system, for the first time an ample set of sophisticated algorithms for the comparison of data distributions (goodness of fit tests) is made available to the High Energy Physics community in an open source product. The statistical algorithms implemented belong to two sets, for the...
    Go to contribution page
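    To give a flavour of the simplest kind of comparison such a toolkit provides, here is a self-contained Pearson chi-square comparison of two binned distributions. This is an illustrative sketch in plain Python, not code from the toolkit itself, and the histogram values are made up for the example.

    ```python
    # Pearson chi-square statistic for comparing two histograms with the
    # same binning and equal total counts (the simplest goodness-of-fit
    # comparison among the many a statistical toolkit would offer).
    def chi2_two_histograms(h1, h2):
        chi2, nbins = 0.0, 0
        for a, b in zip(h1, h2):
            if a + b > 0:                    # skip empty bin pairs
                chi2 += (a - b) ** 2 / (a + b)
                nbins += 1
        return chi2, nbins - 1               # ndf reduced by normalization

    data      = [10, 22, 35, 21, 12]         # illustrative counts
    reference = [12, 20, 33, 24, 11]
    chi2, ndf = chi2_two_histograms(data, reference)
    print(chi2, ndf)                         # small chi2: compatible shapes
    ```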
  285. D. KLOSE (Universidade de Lisboa, Portugal)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    Conditions Databases are beginning to be widely used in the ATLAS experiment. Conditions data are time-varying data describing the state of the detector used to reconstruct the event data. This includes all sorts of slowly evolving data like detector alignment, calibration, monitoring and data from Detector Control System (DCS). In this paper we'll present the interfaces between the...
    Go to contribution page
  286. Mr W. Waltenberger (Austrian Academy of Sciences // Institute of High Energy Physics)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    A proposal is made for the design and implementation of a detector-independent vertex reconstruction toolkit and interface to generic objects (VERTIGO). The first stage aims at re-using existing state-of-the-art algorithms for geometric vertex finding and fitting by both linear (Kalman filter) and robust estimation methods. Prototype candidates for the latter are a wide range of...
    Go to contribution page
  287. Dr E. Chabanat (IN2P3)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    CMS and other LHC experiments present a new challenge for vertex reconstruction: the elaboration of efficient algorithms at high-luminosity beam collisions. We present here a new algorithm in the vertex finding field: Deterministic Annealing (DA). This algorithm comes from information theory by analogy to statistical physics and has already been used in clustering and classification...
    Go to contribution page
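    The core idea of Deterministic Annealing, soft assignments that harden as a temperature parameter is lowered, can be shown on a toy 1D clustering problem. This sketch is illustrative only (toy data, two prototypes, a simple geometric cooling schedule) and is not the authors' vertex finder.

    ```python
    import math

    # Toy 1D deterministic annealing: points are softly assigned to two
    # prototypes with Boltzmann weights exp(-d^2/T); lowering T hardens
    # the assignments until each prototype settles on one cluster.
    points = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]   # two clusters near 0 and 5
    protos = [1.0, 3.0]                        # initial prototype positions
    T = 8.0
    while T > 0.01:
        sums = [0.0, 0.0]
        wsum = [0.0, 0.0]
        for x in points:
            w = [math.exp(-(x - p) ** 2 / T) for p in protos]
            z = sum(w)
            for k in range(2):                 # soft assignment at temperature T
                sums[k] += (w[k] / z) * x
                wsum[k] += w[k] / z
        protos = [sums[k] / wsum[k] for k in range(2)]
        T *= 0.9                               # cooling schedule
    print(sorted(protos))                      # prototypes near 0.0 and 5.0
    ```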
  288. 30/09/2004, 10:00
    Track 2 - Event processing
    poster
    A simultaneous track finding/fitting procedure based on the Kalman filtering approach has been developed for the forward muon spectrometer of the ALICE experiment. In order to improve the performance of the method in the high-background conditions of heavy ion collisions, the "canonical" Kalman filter has been modified and supplemented by a "smoother" part. It is shown that the resulting...
    Go to contribution page
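    The filter-plus-smoother combination described above can be sketched in one dimension: a forward Kalman filter pass over noisy measurements followed by a Rauch-Tung-Striebel backward pass that improves every intermediate estimate. This is a minimal illustrative sketch (scalar state, identity propagation, made-up noise values), not the ALICE code.

    ```python
    # 1D Kalman filter + RTS smoother over a list of noisy measurements.
    def kalman_smoother(measurements, meas_var=1.0, process_var=0.01):
        # forward filter: the state is a single scalar (e.g. a coordinate)
        x, P = measurements[0], meas_var
        xs, Ps, preds = [x], [P], []
        for z in measurements[1:]:
            x_pred, P_pred = x, P + process_var   # predict (identity model)
            preds.append((x_pred, P_pred))
            K = P_pred / (P_pred + meas_var)      # Kalman gain
            x = x_pred + K * (z - x_pred)         # update with measurement
            P = (1 - K) * P_pred
            xs.append(x); Ps.append(P)
        # backward Rauch-Tung-Striebel smoother pass
        sm = xs[:]
        for k in range(len(xs) - 2, -1, -1):
            x_pred, P_pred = preds[k]
            C = Ps[k] / P_pred                    # smoother gain
            sm[k] = xs[k] + C * (sm[k + 1] - x_pred)
        return sm

    print(kalman_smoother([1.0, 1.2, 0.9, 1.1, 1.0]))
    ```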
  289. V M. Moreira do Amaral (UNIVERSITY OF MANNHEIM)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    There is a permanent quest for user friendliness in HEP Analysis. This growing need is directly proportional to the analysis frameworks' interface complexity. In fact, the user is provided with an analysis framework that makes use of a General Purpose Language to program the query algorithms. Usually the user finds this overwhelming, since he or she is presented with the complexity of...
    Go to contribution page
  290. Dr S. Pardi (DIPARTIMENTO DI MATEMATICA ED APPLICAZIONI "R.CACCIOPPOLI")
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    The algorithms for the detection of gravitational waves are usually very complex due to the low signal to noise ratio. In particular the search for signals coming from coalescing binary systems can be very demanding in terms of computing power, like in the case of the classical Standard Matched Filter Technique. To overcome this problem, we tested a Dynamic Matched Filter Technique,...
    Go to contribution page
  291. M.G. Pia (INFN GENOVA)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    The adoption of a rigorous software process is well known to represent a key factor for the quality of the software product and the most effective usage of the human resources available to a software project. The Unified Process, in particular its commercial packaging known as the RUP (Rational Unified Process) has been one of the most widely used software process models in the...
    Go to contribution page
  292. B. White (STANFORD LINEAR ACCELERATOR CENTER (SLAC))
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    The Electron Gamma Shower (EGS) Code System at SLAC is designed to simulate the flow of electrons, positrons and photons through matter at a wide range of energies. It has a large user base among the high-energy physics community and is often used as a teaching tool through a Web interface that allows program input and output. Our work aims to improve the user interaction and shower...
    Go to contribution page
  293. S. Guatelli (INFN Genova, Italy)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    The study of the effects of space radiation on astronauts is an important concern of space missions for the exploration of the Solar System. The radiation hazard to the crew is critical to the feasibility of interplanetary manned missions. To protect the crew, shielding must be designed, the environment must be anticipated and monitored, and a warning system must be put in place. A...
    Go to contribution page
  294. J. Hrivnac (LAL)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    GraXML is a framework for the manipulation and visualization of 3D geometrical objects in space. The full framework consists of the GraXML toolkit, libraries implementing the Generic and Geometric Models, and end-user interactive front-ends. The GraXML Toolkit provides a foundation for operations on 3D objects (both detector elements and events). Each external source of 3D data is...
    Go to contribution page
  295. A. Valassi (CERN)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    The migration of the Harp data and software from an Objectivity- based to an Oracle-based data storage solution is reviewed in this presentation. The project, which was successfully completed in January 2004, involved three distinct phases. In the first phase, which profited significantly from the previous COMPASS data migration project, 30 TB of Harp raw event data were migrated in...
    Go to contribution page
  296. T. Baron (CERN)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    The CHEP 2004 conference is using the Integrated Digital Conferencing product to manage part of its web site and the processes to run the conference. This software has been built in the framework of the InDiCo European Project. It is designed to be generic and extensible, with the goal of providing help for single seminars as well as the management of large conferences. Partly developed at CERN within...
    Go to contribution page
  297. E. Poinsignon (CERN)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    The External Software Service of the LCG SPI project provides open source and public domain packages required by the LCG projects and experiments. Presently, more than 50 libraries and tools are provided for a set of platforms decided by the architect forum. All packages are installed following a standard procedure and are documented on the web. A set of scripts has been developed...
    Go to contribution page
  298. M. Stavrianakou (FNAL)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    The CMS Geant4-based Simulation Framework, Mantis, is a specialization of the COBRA framework, which implements the CMS OO architecture. Mantis, which is the basis for the CMS-specific simulation program OSCAR, provides the infrastructure for the selection, configuration and tuning of all essential simulation elements: geometry construction, sensitive detector and magnetic field...
    Go to contribution page
  299. N. Graf (SLAC)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    We discuss techniques used to access legacy event generators from modern simulation environments. Examples will be given of our experience within the linear collider community accessing various FORTRAN-based generators from within a Java environment. Coding to a standard interface and use of shared object libraries enables runtime selection of generators, and allows for extension of...
    Go to contribution page
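    The pattern described above, coding every generator to one standard interface so the concrete generator can be selected at run time, can be sketched in a few lines. All names here (EventGenerator, "pythia", "herwig") are hypothetical stand-ins, not the actual linear-collider interface or real generator bindings.

    ```python
    # Registry pattern: subclasses register themselves under a name, and a
    # factory selects the concrete generator at run time by that name.
    class EventGenerator:
        registry = {}

        def __init_subclass__(cls, name=None, **kw):
            super().__init_subclass__(**kw)
            if name:
                EventGenerator.registry[name] = cls   # self-registration

        def generate(self):
            raise NotImplementedError                 # the common interface

    class PythiaLike(EventGenerator, name="pythia"):  # hypothetical backend
        def generate(self):
            return {"generator": "pythia", "particles": []}

    class HerwigLike(EventGenerator, name="herwig"):  # hypothetical backend
        def generate(self):
            return {"generator": "herwig", "particles": []}

    def make_generator(name):
        return EventGenerator.registry[name]()        # runtime selection

    event = make_generator("pythia").generate()
    print(event["generator"])                         # prints "pythia"
    ```

    In the FORTRAN/Java setting the abstract describes, the same role is played by shared object libraries loaded behind a fixed interface; the registry above is the pure-Python analogue of that runtime dispatch.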
  300. Dr M. Biglietti (UNIVERSITY OF MICHIGAN)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    At the LHC the 40 MHz bunch crossing rate dictates a high selectivity of the ATLAS Trigger system, which has to keep the full physics potential of the experiment in spite of a limited storage capability. The level-1 trigger, implemented in custom hardware, will reduce the initial rate to 75 kHz and is followed by the software-based level-2 and Event Filter, usually referred to as the High Level...
    Go to contribution page
  301. A. Schmidt (Institut fuer Experimentelle Kernphysik, Karlsruhe University, Germany)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    At CHEP03 we introduced "Physics Analysis eXpert" (PAX), a C++ toolkit for advanced physics analyses in High Energy Physics (HEP) experiments. PAX introduces a new level of abstraction beyond detector reconstruction and provides a general, persistent container model for HEP events. Physics objects like four-vectors, vertices and collisions can easily be stored, accessed and manipulated....
    Go to contribution page
  302. O. Link (CERN, PH/SFT)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    Twisted trapezoids are important components in the LAr end cap calorimeter of the Atlas detector. A similar solid, the so-called twisted tubs, consists of two end planes, inner and outer hyperboloidal surfaces, and twisted surfaces, and is an indispensable component for cylindrical drift chambers (see K. Hoshina et al, Computer Physics Communications 153 (2003) 373-391). In Geant3...
    Go to contribution page
  303. G B. Barrand (CNRS / IN2P3 / LAL)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    OpenPAW is for people who definitely do not want to give up the PAW command prompt, but seek an implementation based on more modern technologies. We shall present the OpenScientist/Lab/opaw program that offers a PAW command prompt by using the OpenScientist tools (i.e. C++, Inventor for graphics, Rio for the IO, OnX for the GUI, etc.). The OpenScientist/Lab...
    Go to contribution page
  304. S. Schmid (ETH Zurich)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    LHC experiments have large amounts of software to build. CMS has studied ways to shorten project build times using parallel and distributed builds as well as improved ways to decide what to rebuild. We have experimented with making idle desktop and server machines easily available as a virtual build cluster using distcc and zeroconf. We have also tested variations of ccache and more...
    Go to contribution page
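The rebuild-avoidance idea behind tools like ccache can be sketched independently of any build system: key each compilation by a hash of its inputs and reuse the cached result on a hit. This is a toy illustration under that assumption, not CMS's actual setup:

```python
import hashlib

def content_key(source_text, compiler_flags):
    """Key a compilation by a hash of its inputs, as ccache-style tools do."""
    h = hashlib.sha256()
    h.update(source_text.encode())
    h.update(compiler_flags.encode())
    return h.hexdigest()

cache = {}

def compile_cached(source_text, flags, compile_fn):
    key = content_key(source_text, flags)
    if key not in cache:
        cache[key] = compile_fn(source_text, flags)  # cache miss: really compile
    return cache[key]

calls = []
def fake_compile(src, flags):
    calls.append(src)                # record how often we actually "compile"
    return "objectcode(%s)" % src

obj1 = compile_cached("int main(){}", "-O2", fake_compile)
obj2 = compile_cached("int main(){}", "-O2", fake_compile)  # served from cache
```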
  305. C. Jones (CORNELL UNIVERSITY)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    A common task for a reconstruction/analysis system is to be able to output different sets of events to different permanent data stores (e.g. files). This allows multiple related logical jobs to be grouped into one process and run using the same input data (read from a permanent data store and/or created from an algorithm). In our system, physicists can specify multiple output 'paths',...
    Go to contribution page
  306. J. Hrivnac (LAL)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    There are two kinds of analysis objects with respect to their persistency requirements: * Objects which need direct access to the persistency service only for their IO operations (read/write/update/...): histograms, clouds, profiles, ... All persistency requirements for those objects can be implemented by standard Transient-Persistent Separation techniques like JDO, Serialization,...
    Go to contribution page
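The Transient-Persistent Separation idea for the first kind of object can be illustrated with plain serialization: the analysis object knows nothing about storage, and persistency is applied from the outside. A minimal sketch (the `Histogram` class and byte-stream store are this example's own assumptions):

```python
import io
import pickle

class Histogram:
    """Transient analysis object; persistency is handled entirely by serialization."""
    def __init__(self, nbins, lo, hi):
        self.nbins, self.lo, self.hi = nbins, lo, hi
        self.bins = [0] * nbins

    def fill(self, x):
        if self.lo <= x < self.hi:
            i = int((x - self.lo) / (self.hi - self.lo) * self.nbins)
            self.bins[i] += 1

h = Histogram(10, 0.0, 1.0)
for x in (0.05, 0.05, 0.95):
    h.fill(x)

# Write and read back through a byte stream, standing in for a file store
store = io.BytesIO()
pickle.dump(h, store)
store.seek(0)
h2 = pickle.load(store)
```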
  307. Dr S. Cucciarelli (CERN)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    The Pixel Detector is the innermost component of the tracking system of the Compact Muon Solenoid (CMS) experiment. It provides the most precise measurements, not only supporting full track reconstruction but also allowing standalone reconstruction, which is especially useful for online event selection at the High-Level Trigger (HLT). The performance of the Pixel Detector is presented. The HLT...
    Go to contribution page
  308. V. Kuznetsov (CORNELL UNIVERSITY)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    The Linux operating system has become the platform of choice in the HEP community. However, the migration process from another operating system to Linux can be a tremendous effort for developers and system administrators. The ultimate goal of such a transition is to maximize agreement between the final results of identical calculations on the different platforms. Apart from the fine tuning of...
    Go to contribution page
  309. I. Reguero (CERN, IT DEPARTMENT), J A. Lopez-Perez (CERN, IT DEPARTMENT)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    Our goal is twofold. On one hand we wanted to address the interest of CMS users in having the LCG physics analysis environment on Solaris. On the other hand we wanted to assess the difficulty of porting code written on Linux, without particular attention to portability, to other Unix implementations. Our initial assumption was that the difficulty would be manageable even for a very small team....
    Go to contribution page
  310. M.G. Pia (INFN GENOVA)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    The Geant4 Toolkit provides an ample set of alternative and complementary physics models to handle the electromagnetic interactions of leptons, photons, charged hadrons and ions. Because of the critical role often played by simulation in the experimental design and physics analysis, an accurate validation of the physics models implemented in Geant4 is essential, down to the...
    Go to contribution page
  311. W. Lavrijsen (LBNL)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    A software bus, just like its hardware equivalent, allows for the discovery, installation, configuration, loading, unloading, and run-time replacement of software components, as well as channeling of inter-component communication. Python, a popular open-source programming language, encourages a modular design of software written in it, but it offers little or no component functionality....
    Go to contribution page
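The bus metaphor can be sketched in a few lines of Python: components are installed under a name, messages are channeled through the bus, and a component can be replaced at run time without its callers noticing. The class and method names here are illustrative assumptions, not the presented toolkit's API:

```python
class SoftwareBus:
    """Toy component bus: install, look up, and replace named components at run time."""
    def __init__(self):
        self._components = {}

    def install(self, name, component):
        self._components[name] = component

    def replace(self, name, component):
        # Run-time replacement: callers keep using the same name
        old = self._components.get(name)
        self._components[name] = component
        return old

    def send(self, name, message):
        # Inter-component communication is channeled through the bus
        return self._components[name].handle(message)

class Echo:
    def handle(self, msg):
        return msg

class Shout:
    def handle(self, msg):
        return msg.upper()

bus = SoftwareBus()
bus.install("printer", Echo())
r1 = bus.send("printer", "hello")
bus.replace("printer", Shout())   # swap the implementation behind the name
r2 = bus.send("printer", "hello")
```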
  312. V. Onuchin (CERN, IHEP)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    The RDBC (ROOT DataBase Connectivity) library is a C++ implementation of the Java Database Connectivity Application Programming Interface. It provides a DBMS-independent interface to relational databases from ROOT as well as a generic SQL database access framework. RDBC also extends the ROOT TSQL abstract interface. Currently it is used in two large experiments: - in Minos as...
    Go to contribution page
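The value of a DBMS-independent interface like JDBC (or RDBC) is that client code is written against a generic connection/cursor contract rather than one vendor's API. Python's DB-API plays the same role; a sketch using the standard sqlite3 driver (the table and query are invented for illustration):

```python
import sqlite3

# Only the generic DB-API connection/cursor interface is used below;
# swapping the driver would leave this code unchanged.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE runs (run INTEGER, events INTEGER)")
cur.executemany("INSERT INTO runs VALUES (?, ?)", [(1, 1000), (2, 2500)])
conn.commit()

# Parameterized query, DBMS-independent placeholder style aside
cur.execute("SELECT SUM(events) FROM runs WHERE run >= ?", (1,))
total = cur.fetchone()[0]
conn.close()
```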
  313. C. ARNAULT (CNRS)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    Since its introduction in 1999, CMT has become a production tool in many large software projects for physics research (ATLAS, LHCb, Virgo, Auger, Planck). Although its basic concepts have remained unchanged since the beginning, proving their viability, it is still improving and increasing its coverage of configuration management mechanisms. Two important evolutions have recently been...
    Go to contribution page
  314. G B. Barrand (CNRS / IN2P3 / LAL)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    Rio (for ROOT IO) is a rewriting of the file I/O system of ROOT. We shall present our strong motivations for undertaking this tedious work. We shall present the main choices made in the Rio implementation (in opposition to what we dislike in ROOT). For example, we shall say why we believe that an IO package is not a drawing package (no TClass::Draw); why someone should use...
    Go to contribution page
  315. 30/09/2004, 10:00
    Track 2 - Event processing
    poster
    The ROOT geometry package is a tool designed for building, browsing, tracking and visualizing a detector geometry. The code is independent of external Monte Carlo simulation packages, and therefore contains no constraints related to physics. However, the package defines a number of hooks for tracking, such as media, materials, magnetic field or track state flags, in order to allow...
    Go to contribution page
  316. I. Antcheva (CERN)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    The GUI is a very important component of the ROOT framework. Its main purpose is to improve usability and end-user perception. In this paper, we present two main projects in this direction: the ROOT graphics editor and the ROOT GUI builder. The ROOT graphics editor is a recent addition to the framework. It provides a state-of-the-art, intuitive way to create or edit objects...
    Go to contribution page
  317. P. Nevski (BROOKHAVEN NATIONAL LABORATORY)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    The ATLAS detector is a sophisticated multi-purpose detector with over 10 million electronics channels designed to study high-pT physics at the LHC. Due to their high multiplicity, reaching almost one hundred thousand particles per event, heavy ion collisions pose a formidable computational challenge. A set of tools has been created to realistically simulate and fully reconstruct the most...
    Go to contribution page
  318. E. Tcherniaev (CERN)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    This paper discusses some key points in the organization of the HARP software. In particular it describes the configuration of the packages, data and code management, and testing and release procedures. Development of the HARP software is based on incremental releases with strict adherence to the design structure. This poses serious challenges to the software management, which has gone...
    Go to contribution page
  319. C. Jones (CORNELL UNIVERSITY)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    Generic programming as exemplified by the C++ standard library makes use of functions or function objects (objects that accept function syntax) to specialize generic algorithms for particular uses. Such separation improves code reuse without sacrificing efficiency. We employed this same technique in our combinatoric engine: DChain. In DChain, physicists combine lists of child particles...
    Go to contribution page
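The generic-programming idea DChain applies, a combinatoric engine specialized by a user-supplied function object, can be sketched in Python with a selector passed into a generic pair builder. The child list and selection criterion here are invented for illustration:

```python
from itertools import combinations

def combine(children, select):
    """Build candidate pairs from a child list, keeping those the
    user-supplied function object accepts (DChain-style specialization)."""
    return [pair for pair in combinations(children, 2) if select(*pair)]

# Hypothetical children: (charge, energy) tuples; keep only neutral pairs
tracks = [(+1, 1.2), (-1, 0.8), (+1, 2.0), (-1, 1.5)]
neutral = lambda a, b: a[0] + b[0] == 0

candidates = combine(tracks, neutral)
```

The generic algorithm (`combine`) never changes; only the selection function object does, which is the reuse-without-efficiency-loss pattern the abstract describes.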
  320. A. Pfeiffer (CERN, PH/SFT)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    In the context of the LHC Computing Grid (LCG) project, the Applications Area develops and maintains that part of the physics applications software and associated infrastructure that is shared among the LHC experiments. The Physicist Interface (PI) project of the LCG Application Area encompasses the interfaces and tools by which physicists will directly use the software. In...
    Go to contribution page
  321. Mei Ye
    30/09/2004, 10:00
    Track 1 - Online Computing
    poster
    This article describes the simulation of the read-out subsystem of the BESIII data acquisition system. For the purposes of BESIII, the event rate will be about 4000 Hz, with a data rate of up to 50 Mbytes/sec after the Level 1 trigger. The read-out subsystem consists of several read-out crates and a read-out computer whose principal function is to collect event...
    Go to contribution page
  322. Dr G. Folger (CERN)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    Geant4 is a toolkit for the simulation of the passage of particles through matter. Amongst its applications are hadronic calorimeters of LHC detectors and simulation of radiation environments. For these types of simulation, a good description of secondaries generated by inelastic interactions of primary nucleons and pions is particularly important. The Geant4 Binary Cascade is a...
    Go to contribution page
  323. Dr M. Whalley (IPPP, UNIVERSITY OF DURHAM)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    We will describe the plans and objectives of the recently funded PPARC (UK) e-science project, the Combined E-Science Data Analysis Resource for High Energy Physics (CEDAR), which will combine the strengths of the well-established and widely used HEPDATA library of HEP data with the innovative JETWEB data/Monte Carlo comparison facility, built on the HZTOOL package, and which exploits...
    Go to contribution page
  324. Vakhtang tsulaia
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    The ATLAS Detector consists of several major subsystems: an inner detector composed of pixels, microstrip detectors and a transition radiation tracker; electromagnetic and hadronic calorimetry; and a muon spectrometer. Over the last year, these systems have been described in terms of a set of geometrical primitives known as GeoModel. Software components for detector description interpret...
    Go to contribution page
  325. Dr V. Tioukov (INFN NAPOLI)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    OPERA is a massive lead/emulsion target for a long-baseline neutrino oscillation search. More than 90% of the useful experimental data in OPERA will be produced by the scanning of emulsion plates with automatic microscopes. The main goal of the data processing in OPERA will be the search, analysis and identification of primary and secondary vertexes produced by neutrinos in...
    Go to contribution page
  326. L. Nellen (I. DE CIENCIAS NUCLEARES, UNAM)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    The Pierre Auger Observatory consists of two sites with several semi-autonomous detection systems. Each component, and in some cases each event, provides a preferred coordinate system for simulation and analysis. To avoid a proliferation of coordinate systems in the offline software of the Pierre Auger Observatory, we have developed a geometry package that allows the treatment of...
    Go to contribution page
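The core of such a geometry package is transforming a point expressed in one preferred frame into another via a shared global frame. A minimal 2-D sketch under that assumption (the Auger offline package's actual interfaces are not shown in the abstract):

```python
import math

class CoordinateSystem:
    """A frame defined by an origin and a rotation angle (2-D sketch)."""
    def __init__(self, origin, angle):
        self.origin = origin
        self.angle = angle

    def to_global(self, p):
        # Rotate into the global orientation, then translate by the origin
        c, s = math.cos(self.angle), math.sin(self.angle)
        x, y = p
        return (self.origin[0] + c * x - s * y,
                self.origin[1] + s * x + c * y)

    def from_global(self, p):
        # Inverse: translate back, then rotate by the opposite angle
        c, s = math.cos(-self.angle), math.sin(-self.angle)
        x, y = p[0] - self.origin[0], p[1] - self.origin[1]
        return (c * x - s * y, s * x + c * y)

def transform(p, frame_from, frame_to):
    """Move a point between two frames via the shared global frame."""
    return frame_to.from_global(frame_from.to_global(p))

a = CoordinateSystem((1.0, 0.0), 0.0)            # translated frame
b = CoordinateSystem((0.0, 0.0), math.pi / 2)    # rotated frame
q = transform((1.0, 0.0), a, b)
```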
  327. Y. Perrin (CERN)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    A web portal has been developed, in the context of the LCG/SPI project, in order to coordinate workflow and manage information in large software projects. It is a development of the GNU Savannah package and offers a range of services to every hosted project: Bug / support / patch trackers, a simple task planning system, news threads, and a download area for software releases. Features...
    Go to contribution page
  328. R. Brun (CERN)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    The ROOT linear algebra package has been invigorated. The hierarchical structure has been improved, allowing different flavors of matrices, like dense and symmetric. A fairly complete set of matrix decompositions has been added to support matrix inversions and solving linear equations. The package has been extensively compared to other algorithms for its accuracy and...
    Go to contribution page
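The decomposition-backed equation solving mentioned above can be illustrated with a small self-contained solver; this is a textbook Gaussian elimination with partial pivoting, not ROOT's implementation:

```python
def solve(a, b):
    """Solve A x = b by Gaussian elimination with partial pivoting,
    the kind of routine a decomposition-based package provides."""
    n = len(a)
    # Work on an augmented copy [A | b]
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    # Back substitution on the upper-triangular system
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

# 2x + y = 3 and x + 3y = 5 give x = 0.8, y = 1.4
x = solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```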
  329. S. Albrand (LPSC)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    The Tag Collector is a web interfaced database application for release management. The tool is tightly coupled to CVS, and also to CMT, the configuration management tool. Developers can interactively select the CVS tags to be included in a build, and the complete build commands are produced automatically. Other features are provided such as verification of package CMT requirements files,...
    Go to contribution page
  330. A. Salzburger (UNIVERSITY OF INNSBRUCK)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    The ATLAS reconstruction software requires extrapolation to arbitrarily oriented surfaces of different types inside a non-uniform magnetic field. In addition, multiple scattering and energy loss effects along the propagated trajectories have to be taken into account. Good performance in terms of computing time is crucial due to the hit and track multiplicity in high luminosity...
    Go to contribution page
  331. D. Klose (Universidade de Lisboa, Portugal)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    A common LCG architecture for the Conditions Database for time-evolving data makes it possible to separate the interval-of-validity (IOV) information from the conditions data payload. The two approaches can be beneficial in different cases, and separation presents challenges for efficient knowledge discovery, navigation and data visualization. In our paper we describe the...
    Go to contribution page
  332. Dimitri gladkov
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    The design, implementation and performance of the ZEUS Global Tracking Trigger (GTT) Forward Algorithm is described. The ZEUS GTT Forward Algorithm integrates track information from the ZEUS Micro Vertex Detector (MVD) and forward Straw Tube Tracker (STT) to provide a picture of the event topology in the forward direction ($1.5 < \eta < 3$) of the ZEUS detector. This region is...
    Go to contribution page
  333. Dr E. Gerchtein (CMU)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    Long-lived charged hyperons, $\Xi$ and $\Omega$, are capable of travelling significant distances, producing hits in the silicon detector before decaying into $\Lambda^0 \pi$ and $\Lambda^0 K$ pairs, respectively. This provides a unique opportunity to reconstruct hyperon tracks. We have developed a dedicated "outside-in" tracking algorithm that is seeded by 4-momentum and decay vertex...
    Go to contribution page
  334. C. Leggett (LAWRENCE BERKELEY NATIONAL LABORATORY)
    30/09/2004, 10:00
    Track 3 - Core Software
    poster
    It is essential to provide users transparent access to time varying data, such as detector misalignments, calibration parameters and the like. This data should be automatically updated, without user intervention, whenever it changes. Furthermore, the user should be able to be notified whenever a particular datum is updated, so as to perform actions such as re-caching of compound results,...
    Go to contribution page
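The access pattern described above, interval-of-validity lookup with notification on change, can be sketched as a small store that fires registered callbacks whenever a lookup crosses into a new interval. Class and method names are this sketch's assumptions:

```python
class ConditionsStore:
    """Interval-of-validity lookup with update-notification callbacks."""
    def __init__(self):
        self._intervals = []   # (start, end, payload), end exclusive
        self._callbacks = []
        self._current = None

    def add(self, start, end, payload):
        self._intervals.append((start, end, payload))

    def register(self, cb):
        self._callbacks.append(cb)

    def get(self, time):
        for start, end, payload in self._intervals:
            if start <= time < end:
                if payload is not self._current:
                    # Datum changed: notify so cached results can be redone
                    self._current = payload
                    for cb in self._callbacks:
                        cb(payload)
                return payload
        raise KeyError(time)

updates = []
store = ConditionsStore()
store.add(0, 100, "alignment-v1")
store.add(100, 200, "alignment-v2")
store.register(updates.append)

a = store.get(50)    # first access: notification fires
b = store.get(60)    # same interval: no new notification
c = store.get(150)   # new interval: notification fires again
```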
  335. 30/09/2004, 10:00
    Track 2 - Event processing
    poster
    Validation of hadronic physics processes of the Geant4 simulation toolkit is a very important task to ensure adequate physics results for the experiments being built at the Large Hadron Collider. We report on simulation results obtained using the Geant4 Bertini cascade double-differential production cross-sections for various target materials and incident hadron kinetic energies between...
    Go to contribution page
  336. Mr A. Kulikov (Joint Institute for Nuclear Research, Dubna, Russia.)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    Using modern 3D visualization software and hardware to represent the object models of HEP detectors can create impressive pictures of events and detailed views of the detectors, facilitating design, simulation, data analysis, and the representation of the huge amount of information flooding modern HEP experiments. In this paper we present the work made by members...
    Go to contribution page
  337. T. Todorov (CERN/IReS)
    30/09/2004, 10:00
    Track 2 - Event processing
    poster
    The simulation, reconstruction and analysis software access to the magnetic field has large impact both on CPU performance and on accuracy. An approach based on a volume geometry is described. The volumes are constructed in such a way that their boundaries correspond to field discontinuities, which are due to changes in magnetic permeability of the materials. The field in each...
    Go to contribution page
  338. Bo Anders Ynnerman (Linköping)
    30/09/2004, 11:00
    Plenary Sessions
    oral presentation
    This talk gives a brief overview of recent development of high performance computing and Grid initiatives in the Nordic region. Emphasis will be placed on the technology and policy demands posed by the integration of general purpose supercomputing centers into Grid environments. Some of the early experiences of bridging national eBorders in the Nordic region will also be...
    Go to contribution page
  339. Peter Clarke
    30/09/2004, 11:30
    Plenary Sessions
    oral presentation
    The global network is more than ever taking its role as the great "enabler" for many branches of science and research. Foremost amongst such science drivers is of course the LHC/LCG programme, although there are several other sectors with growing demands on the network. Common to all of these is the realisation that a straightforward over-provisioned best-efforts wide area IP...
    Go to contribution page
  340. F. Fluckiger (CERN)
    30/09/2004, 12:00
    Plenary Sessions
    oral presentation
    The Architectural Principles of the Internet have dominated the past decade. Orthogonal to the telecommunications industry principles, they dramatically changed the networking landscape because they relied on iconoclastic ideas. First, the Internet end-to-end principle, which stipulates that the network should intervene minimally on the end-to-end traffic, pushing the complexity to the...
    Go to contribution page
  341. B. White (SLAC)
    30/09/2004, 14:00
    During a recent visit to SLAC, Tim Berners-Lee challenged the High Energy Physics community to identify and implement HEP resources to which Semantic Web technologies could be applied. This challenge comes at a time when a number of other scientific disciplines (for example, bioinformatics and chemistry) have taken a strong initiative in making information resources compatible with...
    Go to contribution page
  342. I. Antcheva (CERN)
    30/09/2004, 14:00
    Track 3 - Core Software
    oral presentation
    Designing a usable, visually attractive GUI is somewhat more difficult than it appears at first glance. The users, the GUI designers and the programmers are three important parties involved in this process, and each has a comprehensive view of the application goals, as well as the steps that have to be taken to meet the application requirements successfully. The...
    Go to contribution page
  343. 30/09/2004, 14:00
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    Project SETI@HOME has proven to be one of the biggest successes of distributed computing in recent years. With a quite simple approach, SETI@HOME manages to process huge amounts of data using a vast amount of distributed computing power. To extend the generic usage of these kinds of distributed computing tools, BOINC (Berkeley Open Infrastructure for Network Computing) is being...
    Go to contribution page
  344. N. Neumeister (CERN / HEPHY VIENNA)
    30/09/2004, 14:00
    Track 2 - Event processing
    oral presentation
    The CMS detector has a sophisticated four-station muon system made up of tracking chambers (Drift Tubes, Cathode Strip Chambers) and dedicated trigger chambers. Muon reconstruction software based on Kalman filter techniques has been developed which reconstructs muons in the standalone muon system, using information from all three types of muon detectors, and links the resulting muon...
    Go to contribution page
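The Kalman filter technique at the heart of such reconstruction alternates prediction and measurement-update steps; the update step for a scalar state can be written in a few lines. This is a generic textbook sketch, not the CMS implementation:

```python
def kalman_update(x, p, measurement, r):
    """One Kalman filter measurement update for a scalar state:
    x = current estimate, p = its variance, r = measurement variance."""
    k = p / (p + r)                   # Kalman gain: weight of the new measurement
    x_new = x + k * (measurement - x) # pull the estimate toward the measurement
    p_new = (1.0 - k) * p             # updated estimate is less uncertain
    return x_new, p_new

# Combine a rough standalone estimate (variance 4) with a
# more precise measurement (variance 1)
x, p = 10.0, 4.0
x, p = kalman_update(x, p, 12.0, 1.0)
```

With these numbers the gain is 0.8, so the updated estimate sits much closer to the precise measurement, and the variance shrinks accordingly.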
  345. H. Newman (Caltech)
    30/09/2004, 14:00
    Track 7 - Wide Area Networking
    oral presentation
    Wide area networks of sufficient, and rapidly increasing end-to-end capability are vital for every phase of high energy physicists' work. Our bandwidth usage, and the typical capacity of the major national backbones and intercontinental links used by our field have progressed by a factor of more than 1000 over the past decade, and the outlook is for a similar increase over the next...
    Go to contribution page
  346. 30/09/2004, 14:00
    Track 4 - Distributed Computing Services
    oral presentation
    The ATLAS experiment uses a tiered data Grid architecture that enables possibly overlapping subsets, or replicas, of the original set to be located across the ATLAS collaboration. The full set of experiment data is located at a single Tier 0 site, and then subsets of the data are located at national Tier 1 sites, smaller subsets at smaller regional Tier 2 sites, and so on. In order to...
    Go to contribution page
  347. I. Hrivnacova (IPN, ORSAY, FRANCE)
    30/09/2004, 14:00
    Track 2 - Event processing
    oral presentation
    In order for physicists to easily benefit from the different existing geometry tools used within the community, the Virtual Geometry Model (VGM) has been designed. In the VGM we introduce abstract interfaces to geometry objects and an abstract factory for geometry construction, import and export. The interfaces to geometry objects were defined to be suitable to describe "geant-like"...
    Go to contribution page
  348. G. Lo Re (INFN & CNAF Bologna)
    30/09/2004, 14:20
    Track 7 - Wide Area Networking
    oral presentation
    Next generation high energy physics experiments planned at the CERN Large Hadron Collider are so demanding in terms of both computing power and mass storage that data and CPUs cannot be concentrated at a single site and will be distributed over a computational Grid according to a "multi-tier" model. LHC experiments comprise several thousand people from a few hundred institutes...
    Go to contribution page
  349. P E. Tissot-Daguette (CERN)
    30/09/2004, 14:20
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    The AliEn system, an implementation of the Grid paradigm developed by the ALICE Offline Project, is currently being used to produce and analyse Monte Carlo data at over 30 sites on four continents. The AliEn Web Portal is built around Open Source components with a backend based on Grid Services and compliant with the OGSA model. An easy and intuitive presentation layer gives the...
    Go to contribution page
  350. J. Hrivnac (LAL)
    30/09/2004, 14:20
    Track 3 - Core Software
    oral presentation
    Aspect-Oriented Programming (AOP) is a new paradigm promising to allow further modularization of large software frameworks, like those developed in HEP. Such frameworks often manifest several orthogonal axes of contracts (Crosscutting Concerns - CC) leading to complex multi-dependencies. Currently used programming languages and development methodologies do not make it easy to identify and...
    Go to contribution page
  351. I. Kisel (UNIVERSITY OF HEIDELBERG, KIRCHHOFF INSTITUTE OF PHYSICS)
    30/09/2004, 14:20
    Track 2 - Event processing
    oral presentation
    A typical central Au-Au collision in the CBM experiment (GSI, Germany) will produce up to 700 tracks in the inner tracker. The large track multiplicity, together with the presence of a non-homogeneous magnetic field, makes reconstruction of events complicated. A cellular automaton method is used to reconstruct tracks in the inner tracker. The cellular automaton algorithm creates short track segments...
    Go to contribution page
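The two stages the abstract names, building short segments between neighbouring hits and then evolving a cell state to find the longest chain, can be sketched as follows. Hit positions, the neighbour criterion and the state rule are simplified assumptions, not the CBM algorithm itself:

```python
def build_segments(hits, max_gap):
    """Link hits on adjacent detector layers into short track segments."""
    segments = []
    for layer in range(len(hits) - 1):
        for a in hits[layer]:
            for b in hits[layer + 1]:
                if abs(b - a) <= max_gap:       # simple neighbour criterion
                    segments.append((layer, a, b))
    return segments

def chain_length(segments):
    """CA evolution: each segment's state = 1 + best state of a
    compatible segment on the previous layer."""
    state = {}
    for layer, a, b in sorted(segments):        # process layer by layer
        prev = [state[s] for s in segments
                if s[0] == layer - 1 and s[2] == a]
        state[(layer, a, b)] = 1 + max(prev, default=0)
    return max(state.values(), default=0)

# Hypothetical hit positions per layer: one straight track plus noise hits
hits = [[0.0, 5.0], [0.1], [0.2, 7.0]]
segs = build_segments(hits, max_gap=0.5)
longest = chain_length(segs)
```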
  352. Jerome LAURET (BROOKHAVEN NATIONAL LABORATORY)
    30/09/2004, 14:20
    Track 4 - Distributed Computing Services
    oral presentation
    While many success stories can be told as a product of the Grid middleware developments, most of the existing systems relying on workflow and job execution are based on the integration of self-contained production systems interfacing with a given scheduling component or portal, or directly use the base components of the Grid middleware (globus-job-run, globus-job-submit). However, such systems...
    Go to contribution page
  353. M. Sutton (UNIVERSITY COLLEGE LONDON)
    30/09/2004, 14:20
    Track 2 - Event processing
    oral presentation
    The current design, implementation and performance of the ZEUS global tracking trigger barrel algorithm are described. The ZEUS global tracking trigger integrates track information from the ZEUS central tracking chamber (CTD) and micro vertex detector (MVD) to obtain a global picture of the track topology in the ZEUS detector at the second level trigger stage. Algorithm processing is...
    Go to contribution page
  354. C. Tull (LBNL/ATLAS)
    30/09/2004, 14:40
    Track 3 - Core Software
    oral presentation
    In this paper we will discuss how Aspect-Oriented Programming (AOP) can be used to implement and extend the functionality of HEP architectures in areas such as performance monitoring, constraint checking, debugging and memory management. AOP is the latest evolution in the line of technology for functional decomposition which includes Structured Programming (SP) and Object-Oriented...
    Go to contribution page
  355. Dr S. Ravot (Caltech)
    30/09/2004, 14:40
    Track 7 - Wide Area Networking
    oral presentation
    In this paper we describe the current state of the art in equipment, software and methods for transferring large scientific datasets at high speed around the globe. We first present a short introductory history of the use of networking in HEP, some details on the evolution, current status and plans for the Caltech/CERN/DataTAG transAtlantic link, and a description of the topology and...
    Go to contribution page
  356. N. Graf (SLAC)
    30/09/2004, 14:40
    Track 2 - Event processing
    oral presentation
    We describe a Java toolkit for full event reconstruction and analysis. The toolkit is currently being used for detector design and physics analysis for a future e+ e- linear collider. The components are fully modular and are available for tasks from digitization of tracking detector signals through to cluster finding, pattern recognition, fitting, jet finding, and analysis. We...
    Go to contribution page
  357. R. Cavanaugh (UNIVERSITY OF FLORIDA)
    30/09/2004, 14:40
    Track 4 - Distributed Computing Services
    oral presentation
    A grid consists of high-end computational, storage, and network resources that, while known a priori, are dynamic with respect to activity and availability. Efficient co-scheduling of requests to use grid resources must adapt to this dynamic environment while meeting administrative policies. We discuss the necessary requirements of such a scheduler and introduce a distributed...
    Go to contribution page
  358. Julia ANDREEVA (CERN)
    30/09/2004, 14:40
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    The ARDA project was started in April 2004 to support the four LHC experiments (ALICE, ATLAS, CMS and LHCb) in the implementation of individual production and analysis environments based on the EGEE middleware. The main goal of the project is to allow a fast feedback between the experiment and the middleware development teams via the construction and the usage of end-to-end...
    Go to contribution page
  359. V. Tsulaia (UNIVERSITY OF PITTSBURGH)
    30/09/2004, 14:40
    Track 2 - Event processing
    oral presentation
    The GeoModel toolkit is a library of geometrical primitives that can be used to describe detector geometries. The toolkit is designed as a data layer, and especially optimized in order to be able to describe large and complex detector systems with minimum memory consumption. Some of the techniques used to minimize the memory consumption are: shared instancing with reference counting,...
    Go to contribution page
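Shared instancing, one of the memory-saving techniques listed above, amounts to a flyweight pattern: many placements reference one immutable primitive instead of each owning a copy. A toy sketch (class names are this example's own; Python's garbage collector stands in for explicit reference counting):

```python
class Shape:
    """An immutable geometric primitive that can be safely shared."""
    def __init__(self, name):
        self.name = name

class SharedVolume:
    """Flyweight-style sharing: many placements reference a single Shape
    instance, keeping memory use low for large repeated structures."""
    _pool = {}

    @classmethod
    def get(cls, name):
        # Hand out the pooled instance, creating it only on first request
        if name not in cls._pool:
            cls._pool[name] = Shape(name)
        return cls._pool[name]

# A thousand placements of the same crystal share a single Shape object
placements = [SharedVolume.get("crystal") for _ in range(1000)]
unique = len({id(s) for s in placements})
```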
  360. S. MUZAFFAR (NorthEastern University, Boston, USA)
    30/09/2004, 15:00
    Track 2 - Event processing
    oral presentation
    This paper describes recent developments in the IGUANA (Interactive Graphics for User ANAlysis) project. IGUANA is a generic framework and toolkit, used by CMS and D0, to build a variety of interactive applications such as detector and event visualisation and interactive GEANT3 and GEANT4 browsers. IGUANA is a freely available toolkit based on open-source components including...
    Go to contribution page
  361. 30/09/2004, 15:00
    Track 4 - Distributed Computing Services
    oral presentation
    The R-GMA (Relational Grid Monitoring Architecture) was developed within the EU DataGrid project, to bring the power of SQL to an information and monitoring system for the grid. It provides producer and consumer services to both publish and retrieve information from anywhere within a grid environment. Users within a Virtual Organization may define their own tables dynamically into...
    Go to contribution page
  362. Vincenzo Innocente (CERN)
    30/09/2004, 15:00
    Track 3 - Core Software
    oral presentation
    Bitmap indices have gained wide acceptance in data warehouse applications handling large amounts of read-only data. High dimensional ad hoc queries can be efficiently performed by utilizing bitmap indices, especially if the queries cover only a subset of the attributes stored in the database. Such access patterns are common in HEP analysis. Bitmap indices have been implemented by...
    Go to contribution page
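The bitmap-index idea can be shown concretely: one bit vector per distinct attribute value, so a multi-attribute predicate reduces to bitwise operations. A small sketch using Python integers as bit vectors (the attributes and data are invented for illustration):

```python
def build_bitmap_index(values, distinct):
    """One bitmap (an int used as a bit vector) per distinct attribute value."""
    index = {v: 0 for v in distinct}
    for row, v in enumerate(values):
        index[v] |= 1 << row           # set the bit for this row
    return index

def query_and(index_a, key_a, index_b, key_b):
    """Two-attribute conjunction evaluated with a single bitwise AND."""
    hits = index_a[key_a] & index_b[key_b]
    return [row for row in range(hits.bit_length()) if hits >> row & 1]

trigger = ["mu", "e", "mu", "mu"]
quality = ["good", "good", "bad", "good"]
idx_t = build_bitmap_index(trigger, {"mu", "e"})
idx_q = build_bitmap_index(quality, {"good", "bad"})

rows = query_and(idx_t, "mu", idx_q, "good")   # rows that are muon AND good
```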
  363. M. Ballintijn (MIT)
    30/09/2004, 15:00
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    The Parallel ROOT Facility, PROOF, enables a physicist to analyze and understand very large data sets on an interactive time scale. It makes use of the inherent parallelism in event data and implements an architecture that optimizes I/O and CPU utilization in heterogeneous clusters with distributed storage. Scaling to many hundreds of servers is essential to process tens or hundreds of...
    Go to contribution page
  364. Dr J. Tanaka (ICEPP, UNIVERSITY OF TOKYO)
    30/09/2004, 15:00
    Track 7 - Wide Area Networking
    oral presentation
    We have measured the performance of data transfer between CERN and our laboratory, ICEPP, at the University of Tokyo in Japan. The ICEPP will be one of the so-called regional centers for handling the data from the ATLAS experiment, which will start data taking in 2007. Petabytes of data are expected to be generated by the experiment each year. It is therefore essential to...
    Go to contribution page
  365. W. Liebig (CERN)
    30/09/2004, 15:00
    Track 2 - Event processing
    oral presentation
    The Athena software framework for event reconstruction in ATLAS will be employed to analyse the data from the 2004 combined test beam. In this combined test beam, a slice of the ATLAS detector is operated and read out under conditions similar to future LHC running, thus providing a test-bed for the complete reconstruction chain. First results for the ATLAS Inner Detector will be...
    Go to contribution page
  366. D. Adams (BNL)
    30/09/2004, 15:20
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    The ATLAS distributed analysis (ADA) system is described. The ATLAS experiment has more than 2000 physicists from 150 institutions in 34 countries. Users, data and processing are distributed over these sites. ADA makes use of a collection of high-level web services whose interfaces are expressed in terms of AJDL (abstract job definition language) which includes descriptions of...
    Go to contribution page
  367. L. Moneta (CERN)
    30/09/2004, 15:20
    Track 3 - Core Software
    oral presentation
    The main objective of the MathLib project is to give expertise and support to the LHC experiments on mathematical and statistical computational methods. The aim is to provide a coherent set of mathematical libraries. Users of this set of libraries are developers of experiment reconstruction and simulation software, of analysis tools frameworks, such as ROOT, and physicists performing...
    Go to contribution page
  368. M. Sgaravatto (INFN Padova)
    30/09/2004, 15:20
    Track 4 - Distributed Computing Services
    oral presentation
    Resource management and scheduling of distributed, data-driven applications in a Grid environment are challenging problems. Although significant results have been achieved in the past few years, the development and proper deployment of generic, reliable, standard components still present issues that remain to be fully solved. The domains involved include workload management,...
    Go to contribution page
  369. Dr Y. Kodama (NATIONAL INSTITUTE OF ADVANCED INDUSTRIAL SCIENCE AND TECHNOLOGY (AIST))
    30/09/2004, 15:20
    Track 7 - Wide Area Networking
    oral presentation
    To achieve stable, high-performance network flows in high bandwidth-delay-product networks, it is important that the total bandwidth of multiple streams not exceed the network bandwidth. Software control of per-stream bandwidth sometimes exceeds the specified limit. We propose a hardware technique for controlling the total bandwidth of multiple streams with...
    Go to contribution page
  370. J. Drohan (University College London)
    30/09/2004, 15:20
    Track 2 - Event processing
    oral presentation
    We describe the philosophy and design of Atlantis, an event visualisation program for the ATLAS experiment at CERN. Written in Java, it employs the Swing API to provide an easily configurable Graphical User Interface. Atlantis implements a collection of intuitive, data-orientated 2D projections, which enable the user to quickly understand and visually investigate complete ATLAS events....
    Go to contribution page
  371. Mr M. Ivanov (CERN)
    30/09/2004, 15:20
    Track 2 - Event processing
    oral presentation
    Track finding and fitting algorithms for the ALICE Time Projection Chamber (TPC) and Inner Tracking System (ITS), based on Kalman filtering, are presented. The filtering algorithm is able to cope with non-Gaussian noise and ambiguous measurements in high-density environments. The tracking algorithm consists of two parts: one for the TPC and one for the prolongation into the ITS. The...
    Go to contribution page
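The core of the Kalman-filtering approach can be illustrated in one dimension. This is a minimal sketch under strong simplifying assumptions (scalar state, trivial propagation model), not the ALICE tracking code; `kalman_step` is a hypothetical name.

```python
def kalman_step(x, P, m, R, Q=0.0):
    """One predict-update cycle for a scalar state x with variance P.

    m is a new measurement with variance R; Q is process noise added
    during the (here trivial) propagation step.
    """
    # Prediction: state unchanged in this toy model, uncertainty grows by Q.
    x_pred, P_pred = x, P + Q
    # Update: the gain K weighs the prediction against the measurement.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (m - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Filter a noisy sequence of measurements of a constant quantity.
x, P = 0.0, 1e6                      # vague prior
for m in [1.1, 0.9, 1.05, 0.95]:
    x, P = kalman_step(x, P, m, R=0.01)
print(round(x, 3), P < 0.01)         # estimate converges, variance shrinks
```

A real track fit uses a five-parameter state vector and a propagation step through the detector material, but every new cluster is absorbed by exactly this gain-weighted update.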
  372. F. van Lingen (CALIFORNIA INSTITUTE OF TECHNOLOGY)
    30/09/2004, 15:40
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    In this paper we report on the implementation of an early prototype of distributed high-level services supporting grid-enabled data analysis within the LHC physics community as part of the ARDA project within the context of the GAE (Grid Analysis Environment) and begin to investigate the associated complex behaviour of such an end-to-end system. In particular, the prototype...
    Go to contribution page
  373. B K. Kim (UNIVERSITY OF FLORIDA), M. Mambelli (University of Chicago)
    30/09/2004, 15:40
    Track 4 - Distributed Computing Services
    oral presentation
    Grid computing involves the close coordination of many different sites which offer distinct computational and storage resources to the Grid user community. The resources at each site need to be monitored continuously. Static and dynamic site information needs to be presented to the user community in a simple and efficient manner. This paper will present both the design and...
    Go to contribution page
  374. Dr G B. Barrand (CNRS / IN2P3 / LAL)
    30/09/2004, 15:40
    Track 2 - Event processing
    oral presentation
    Panoramix is an event display for LHCb. LaJoconde is an interactive environment over DaVinci, the analysis software layer for LHCb. We shall present the global technological choices behind these two applications: GUI, graphics, scripting, plotting. We shall present the connection to the framework (Gaudi) and how we can integrate other tools like HippoDraw. We shall present the overall...
    Go to contribution page
  375. D. Brown (LAWRENCE BERKELEY NATIONAL LAB)
    30/09/2004, 15:40
    Track 2 - Event processing
    oral presentation
    This talk will describe the new analysis computing model deployed by BaBar over the past year. The new model was designed to better support the current and future needs of physicists analyzing data, and to improve BaBar's analysis computing efficiency. The use of RootIO in the new model is described in other talks. BaBar's new analysis data content format contains both high and low...
    Go to contribution page
  376. M. Fischler (FERMILAB)
    30/09/2004, 15:40
    Track 3 - Core Software
    oral presentation
    A new object-oriented Minimization package is available via the ZOOM cvs repository. This package, designed for use in HEP applications, has all the capabilities of Minuit, but is a re-write from scratch, adhering to modern C++ design principles. A primary goal of this package is extensibility in several directions, so that its capabilities can be kept fresh with as little...
    Go to contribution page
  377. Mr M. Grigoriev (FERMILAB, USA)
    30/09/2004, 15:40
    Track 7 - Wide Area Networking
    oral presentation
    Large, distributed HEP collaborations, such as D0, CDF and US-CMS, depend on stable and robust network paths between major world research centers. The evolving emphasis on data and compute Grids increases the reliance on network performance. Fermilab's experimental groups and network support personnel identified a critical need for WAN monitoring to ensure the quality and efficient...
    Go to contribution page
  378. G. Asova (DESY ZEUTHEN)
    30/09/2004, 16:30
    Track 3 - Core Software
    oral presentation
    The photo injector test facility at DESY Zeuthen (PITZ) was built to develop, operate and optimize photo injectors for future free electron lasers and linear colliders. In PITZ we use a DAQ system that stores data as a collection of ROOT files, forming our database for offline analysis. Consequently, the offline analysis will be performed by a ROOT application, written at least...
    Go to contribution page
  379. 30/09/2004, 16:30
    INTAS ( http://www.intas.be): International Association for the promotion of co-operation with scientists from the New Independent States of the former Soviet Union (NIS). INTAS encourages joint activities between its INTAS Members and the NIS in all exact and natural sciences, economics, human and social sciences. INTAS supports a number of NIS participants to attend the 2004...
    Go to contribution page
  380. I. Legrand (CALTECH)
    30/09/2004, 16:30
    Track 4 - Distributed Computing Services
    oral presentation
    The MonALISA (MONitoring Agents in A Large Integrated Services Architecture) system is a scalable Dynamic Distributed Services Architecture based on the mobile-code paradigm. An essential part of managing a global system, such as the Grid, is a monitoring system able to monitor and track the many site facilities, networks, and all the tasks in progress, in real time....
    Go to contribution page
  381. A. FARILLA (I.N.F.N. ROMA3)
    30/09/2004, 16:30
    Track 2 - Event processing
    oral presentation
    A full slice of the barrel detector of the ATLAS experiment at the LHC is being tested this year with beams of pions, muons, electrons and photons in the energy range 1-300 GeV in the H8 area of the CERN SPS. It is a challenging exercise since, for the first time, the complete software suite developed for the full ATLAS experiment has been extended for use with real detector data,...
    Go to contribution page
  382. E. Ronchieri (INFN CNAF)
    30/09/2004, 16:30
    Track 7 - Wide Area Networking
    oral presentation
    The problem of finding the best match between jobs and computing resources is critical for efficient workload distribution in Grids. Very often jobs are preferably run on the Computing Elements (CEs) that can retrieve a copy of the input files from a local Storage Element (SE). This requires that multiple file copies be generated and managed by a data replication system. We...
    Go to contribution page
  383. 30/09/2004, 16:30
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    Any physicist who will analyse data from the LHC experiments will have to deal with data and computing resources which are distributed across multiple locations and with different access methods. GANGA helps the end user by tying in specifically to the solutions for a given experiment ranging from specification of data to retrieval and post-processing of produced output. For LHCb and ATLAS...
    Go to contribution page
  384. M. Donszelmann (SLAC)
    30/09/2004, 16:30
    Track 2 - Event processing
    oral presentation
    WIRED 4 is an experiment-independent event display plugin module for the JAS 3 (Java Analysis Studio) generic analysis framework. Both WIRED and JAS are written in Java. WIRED, which uses HepRep (HEP Representables for Event Display) as its input format, supports viewing of events using conventional 3D projections as well as specialized projections such as a fish-eye or a rho-Z...
    Go to contribution page
  385. 30/09/2004, 16:50
    Track 2 - Event processing
    oral presentation
    A kinematic fit package was developed based on least-squares minimization with Lagrange multipliers and Kalman filter techniques, and implemented in the framework of the CMS reconstruction program. The package allows full decay chain reconstruction from the final state to the primary vertex according to a given decay model. The class framework allowing decay tree description on every...
    Go to contribution page
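The Lagrange-multiplier mechanics of such a fit can be shown in the simplest linear case: adjust measured values as little as possible (in units of their errors) while satisfying an exact constraint. The sketch below is illustrative only (diagonal covariance, a single linear sum constraint), not the CMS package; `constrained_fit` is a hypothetical helper.

```python
def constrained_fit(m, V, E):
    """Minimise sum((x_i - m_i)^2 / V_i) subject to sum(x_i) = E.

    For a constraint H.x = E with H = (1, 1, ...), the Lagrange-multiplier
    solution is lam = (E - sum(m)) / sum(V) and x_i = m_i + V_i * lam:
    each measurement is shifted in proportion to its variance.
    """
    lam = (E - sum(m)) / sum(V)
    return [mi + Vi * lam for mi, Vi in zip(m, V)]

m = [4.8, 5.4]          # measured energies
V = [0.04, 0.16]        # their variances
x = constrained_fit(m, V, E=10.0)
print([round(v, 2) for v in x], round(sum(x), 6))
```

Note that the less precise measurement (larger variance) absorbs most of the correction; a full kinematic fit iterates this idea over nonlinear constraints and a correlated covariance matrix.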
  386. Andreas PFEIFFER (CERN)
    30/09/2004, 16:50
    Track 3 - Core Software
    oral presentation
    CLHEP is a set of HEP-specific foundation and utility classes such as random number generators, physics vectors, and particle data tables. Although CLHEP has traditionally been distributed as one large library, the user community has long wanted to build and use CLHEP packages separately. With the release of CLHEP 1.9, CLHEP has been reorganized and enhanced to enable building and...
    Go to contribution page
  387. N. De Bortoli (INFN - NAPLES (ITALY))
    30/09/2004, 16:50
    Track 4 - Distributed Computing Services
    oral presentation
    GridICE is a monitoring service for the Grid: it measures significant Grid-related resource parameters in order to analyze the usage, behavior and performance of the Grid and/or to detect and notify fault situations, contract violations, and user-defined events. In its first implementation, the notification service relies on a simple model based on a pre-defined set of events. The growing...
    Go to contribution page
  388. P. DeMar (FERMILAB)
    30/09/2004, 16:50
    Track 7 - Wide Area Networking
    oral presentation
    Advanced optical-based networks have the capacity and capability to meet the extremely large data movement requirements of particle physics collaborations. To date, research efforts in the advanced network area have primarily been focused on provisioning, dynamically configuring, and monitoring the wide-area optical network infrastructure itself. Application use of these...
    Go to contribution page
  389. Dr J. LIST (University of Wuppertal)
    30/09/2004, 16:50
    Track 2 - Event processing
    oral presentation
    Analyses in high-energy physics often involve filling large numbers of histograms from n-tuple-like data structures, e.g. ROOT trees. Even when using an object-oriented framework like ROOT, the user code often follows a functional programming approach, where booking, application of cuts, calculation of weights and histogrammed quantities, and finally the filling of the...
    Go to contribution page
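One way to avoid such repetitive, function-style user code is to make booking declarative: each histogram carries its own quantity, cut and weight, and a single generic loop does all the filling. The sketch below illustrates the idea only and is not the package presented in the talk; all names are hypothetical.

```python
class Hist:
    """A 1D histogram that knows how to fill itself from an event."""
    def __init__(self, nbins, lo, hi, quantity, cut=lambda e: True,
                 weight=lambda e: 1.0):
        self.bins = [0.0] * nbins
        self.lo, self.width = lo, (hi - lo) / nbins
        self.quantity, self.cut, self.weight = quantity, cut, weight

    def fill(self, event):
        if not self.cut(event):
            return
        i = int((self.quantity(event) - self.lo) / self.width)
        if 0 <= i < len(self.bins):
            self.bins[i] += self.weight(event)

# Booking: cuts and quantities are declared once, as data, not repeated
# inline in the event loop.
hists = [
    Hist(10, 0.0, 100.0, quantity=lambda e: e["pt"]),
    Hist(10, 0.0, 100.0, quantity=lambda e: e["pt"],
         cut=lambda e: e["charge"] > 0),
]

events = [{"pt": 25.0, "charge": 1}, {"pt": 55.0, "charge": -1}]
for ev in events:            # one generic loop fills every booked histogram
    for h in hists:
        h.fill(ev)
print(hists[0].bins[2], hists[1].bins[2])  # -> 1.0 1.0
```

Adding a new histogram then means adding one booking line, never touching the event loop.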
  390. N. De Filippis (UNIVERSITA' DEGLI STUDI DI BARI AND INFN)
    30/09/2004, 16:50
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    During the CMS Data Challenge 2004 a real-time analysis was attempted at the INFN and PIC Tier-1 and Tier-2 sites in order to test the ability of the instrumented methods to quickly process the data. Several agents and automatic procedures were implemented to perform the analysis at the Tier-1/2 sites synchronously with the data transfer from the Tier-0 at CERN. The system was implemented in the Grid...
    Go to contribution page
  391. Dr T. Speer (UNIVERSITY OF ZURICH, SWITZERLAND)
    30/09/2004, 17:10
    Track 2 - Event processing
    oral presentation
    A vertex fit algorithm was developed based on the Gaussian-sum filter (GSF) and implemented in the framework of the CMS reconstruction program. While linear least-squares estimators are optimal in case all observation errors are Gaussian distributed, the GSF offers a better treatment of the non-Gaussian distribution of track parameter errors when these are modeled by Gaussian...
    Go to contribution page
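The measurement update of a Gaussian-sum filter can be sketched in one dimension: every mixture component is Kalman-updated, then reweighted by its predictive likelihood for the measurement, so components that describe the observation well gain weight. Illustrative only, not the CMS fitter; `gsf_update` is a hypothetical name.

```python
import math

def gsf_update(components, m, R):
    """components: list of (weight, mean, variance) Gaussians.

    Returns the updated, renormalised mixture after absorbing a
    measurement m with variance R.
    """
    updated = []
    for w, x, P in components:
        S = P + R                                   # predicted residual variance
        lik = math.exp(-0.5 * (m - x) ** 2 / S) / math.sqrt(2 * math.pi * S)
        K = P / S                                   # Kalman gain per component
        updated.append((w * lik, x + K * (m - x), (1 - K) * P))
    norm = sum(w for w, _, _ in updated)
    return [(w / norm, x, P) for w, x, P in updated]

# A non-Gaussian error modelled as a two-component mixture:
# a narrow core plus a wide tail.
mix = [(0.8, 0.0, 0.1), (0.2, 0.0, 5.0)]
out = gsf_update(mix, m=0.2, R=0.1)
print(round(out[0][0], 3))   # core component keeps most of the weight
```

The number of components grows multiplicatively with each update, so practical GSF implementations also merge or drop low-weight components; that step is omitted here.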
  392. D. Smith (STANFORD LINEAR ACCELERATOR CENTER)
    30/09/2004, 17:10
    Track 4 - Distributed Computing Services
    oral presentation
    The BaBar experiment has migrated its event store from an Objectivity-based system to one using ROOT files, and along with this has developed a new bookkeeping design. This bookkeeping now combines data production, quality control, event store inventory, distribution of BaBar data to sites, and user analysis in one central place, and is based on collections of data stored as...
    Go to contribution page
  393. R. Hughes-Jones (THE UNIVERSITY OF MANCHESTER)
    30/09/2004, 17:10
    Track 7 - Wide Area Networking
    oral presentation
    How do we get High Throughput data transport to real users? The MB-NG project is a major collaboration which brings together expertise from users, industry, equipment providers and leading edge e-science application developers. Major successes in the areas of Quality of Service (QoS) and managed bandwidth have provided a leading edge U.K. Diffserv enabled network running at 2.5 Gbit/s....
    Go to contribution page
  394. K. Wu (LAWRENCE BERKELEY NATIONAL LAB)
    30/09/2004, 17:10
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    Nuclear and high-energy physics experiments such as STAR at BNL are generating millions of files with petabytes of data each year. In most cases, analysis programs have to read all events in a file in order to find the interesting ones. Since most analyses are interested only in some subsets of events in a number of files, a significant portion of the computer time is wasted on...
    Go to contribution page
  395. Dr P. MATO (CERN)
    30/09/2004, 17:10
    Track 2 - Event processing
    oral presentation
    Bender, the Python-based physics analysis application for LHCb, combines the best features of the underlying Gaudi C++ software architecture with the flexibility of the Python scripting language, and provides end-users with a friendly, physics-analysis-oriented environment. It is based, on the one hand, on the generic Python bindings for the Gaudi framework, called GaudiPython, and on the other hand on an...
    Go to contribution page
  396. E. Ronchieri (INFN CNAF)
    30/09/2004, 17:10
    Track 3 - Core Software
    oral presentation
    We describe the process for handling software builds and releases for the Workload Management package of the DataGrid project. The software development in the project was shared among nine contractual partners in seven different countries, and was organized in work packages covering different areas. In this paper, we discuss how a combination of the Concurrent Versions System,...
    Go to contribution page
  397. M. Sanchez-Garcia (UNIVERSITY OF SANTIAGO DE COMPOSTELA)
    30/09/2004, 17:30
    Track 4 - Distributed Computing Services
    oral presentation
    The LHCb Data Challenge 04 includes the simulation of over 200 million events using distributed computing resources at N sites over 3 months. To achieve this goal a dedicated production grid (DIRAC) has been deployed. We will present the Job Monitoring and Accounting services developed to follow the status of the production and to evaluate the results at...
    Go to contribution page
  398. M.G. Pia (INFN GENOVA)
    30/09/2004, 17:30
    Track 2 - Event processing
    oral presentation
    Statistical methods play a significant role throughout the life-cycle of HEP experiments, being an essential component of physics analysis. We present a project in progress for the development of an object-oriented software toolkit for statistical data analysis. In particular, the Statistical Comparison component of the toolkit provides algorithms for the comparison of data...
    Go to contribution page
  399. Mr G. Roediger (CORPORATE COMPUTER SERVICES INC. - FERMILAB)
    30/09/2004, 17:30
    Track 7 - Wide Area Networking
    oral presentation
    A High Energy Physics experiment has between 200 and 1000 collaborating physicists from nations spanning the entire globe. Each collaborator brings a unique combination of interests, and each has to search through the same huge heap of messages, research results, and other communication to find what is useful. Too much scientific information is as useless as too little. It is time...
    Go to contribution page
  400. T. Johnson (SLAC)
    30/09/2004, 17:30
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    The aim of the service is to allow fully distributed analysis of large volumes of data while maintaining true (sub-second) interactivity. All the Grid-related components are based on OGSA-style Grid services, and existing Globus Toolkit 3.0 (GT3) services are used to the maximum extent. All transactions are authenticated and authorized using the GSI (Grid Security Infrastructure) mechanism -...
    Go to contribution page
  401. A. Pfeiffer (CERN, PH/SFT)
    30/09/2004, 17:30
    Track 3 - Core Software
    oral presentation
    In the context of the SPI project in the LCG Application Area, a centralized software management infrastructure has been deployed. It comprises a suite of scripts handling the building and validation of the releases of the various projects, as well as providing a customized packaging of the released software. Emphasis was put on the flexibility of the packaging and distribution solution as it...
    Go to contribution page
  402. A. Wildauer (UNIVERSITY OF INNSBRUCK)
    30/09/2004, 17:30
    Track 2 - Event processing
    oral presentation
    For physics analysis in ATLAS, reliable vertex finding and fitting algorithms are important. In the harsh environment of the LHC (~23 inelastic collisions every 25 ns) this task turns out to be particularly challenging. One of the guiding principles in developing the vertexing packages is a strong focus on modularity and well-defined interfaces, using the advantages of object-oriented C++....
    Go to contribution page
  403. G R. Moloney
    30/09/2004, 17:50
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    We have developed and deployed a data grid for the processing of data from the Belle experiment, and for the production of simulated Belle data. The Belle Analysis Data Grid brings together compute and storage resources across five separate partners in Australia, and the Computing Research Centre at the KEK laboratory in Tsukuba, Japan. The data processing resources are general...
    Go to contribution page
  404. Mr P. Galvez (CALTECH)
    30/09/2004, 17:50
    Track 7 - Wide Area Networking
    oral presentation
    VRVS (Virtual Room Videoconferencing System) is a unique, globally scalable next-generation system for real-time collaboration by small workgroups, medium and large teams engaged in research, education and outreach. VRVS operates over an ensemble of national and international networks. Since it went into production service in early 1997, VRVS has become a standard part of the toolset used...
    Go to contribution page
  405. I. Belikov (CERN)
    30/09/2004, 17:50
    Track 2 - Event processing
    oral presentation
    One of the main features of the ALICE detector at the LHC is the capability to identify particles in a very broad momentum range from 0.1 GeV/c up to 10 GeV/c. This can be achieved only by combining, within a common setup, several detecting systems that are efficient in narrower, complementary momentum sub-ranges. The situation is further complicated by the amount of data to be...
    Go to contribution page
  406. E. Efstathiadis (BROOKHAVEN NATIONAL LABORATORY)
    30/09/2004, 17:50
    Track 4 - Distributed Computing Services
    oral presentation
    As a PPDG cross-team joint project, we proposed to study, develop, implement and evaluate a set of tools that allow Meta-Schedulers to take advantage of consistent information (such as information needed for complex decision-making mechanisms) across local and/or Grid Resource Management Systems (RMS). We will present and define the requirements and schema by which one can...
    Go to contribution page
  407. V. Serbo (SLAC)
    30/09/2004, 17:50
    Track 2 - Event processing
    oral presentation
    JASSimApp is a joint project of SLAC, KEK, and Naruto University to create an integrated GUI for Geant4, based on the JAS3 framework, with the ability to interactively: - edit Geant4 geometry, materials, and physics processes - control Geant4 execution, locally and remotely: pass commands and receive output, control the event loop - access AIDA histograms defined in Geant4 - show generated...
    Go to contribution page
  408. M. GALLAS (CERN)
    30/09/2004, 17:50
    Track 3 - Core Software
    oral presentation
    Software Quality Assurance is an integral part of the software development process of the LCG Project and includes several activities such as automatic testing, test coverage reports, static software metrics reports, a bug tracker, usage statistics, and compliance with build, code and release policies. As part of the QA activity, all levels of software testing should be run as...
    Go to contribution page
  409. A. TSAREGORODTSEV (CNRS-IN2P3-CPPM, MARSEILLE)
    30/09/2004, 18:10
    Track 4 - Distributed Computing Services
    oral presentation
    DIRAC is the LHCb distributed computing grid infrastructure for MC production and analysis. Its architecture is based on a set of distributed collaborating services. The service decomposition broadly follows the ARDA project proposal, allowing for the possibility of interchanging the EGEE/ARDA and DIRAC components in the future. Some components developed outside the DIRAC project are...
    Go to contribution page
  410. E. Vaandering (VANDERBILT UNIVERSITY)
    30/09/2004, 18:10
    Track 2 - Event processing
    oral presentation
    Genetic programming is a machine learning technique, popularized by Koza in 1992, in which computer programs that solve user-posed problems are automatically discovered. Populations of programs are evaluated for their fitness in solving a particular problem. New populations of ever-increasing fitness are generated by mimicking the biological processes underlying evolution. These...
    Go to contribution page
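The evolutionary loop can be sketched compactly: random expression trees are scored against a target, the fittest survive intact (elitism), and new trees are bred by subtree crossover. This is a toy sketch of the general technique, not the analysis code from the talk; every name is hypothetical and the target function is arbitrary.

```python
import random

random.seed(42)
OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
MAX_DEPTH = 8

def rand_tree(depth):
    """A random expression tree over {+, *, x, constants}."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.uniform(-2, 2)])
    return [random.choice(list(OPS)), rand_tree(depth - 1), rand_tree(depth - 1)]

def evaluate(tree, x):
    if tree == "x":
        return x
    if not isinstance(tree, list):
        return tree                      # numeric constant leaf
    return OPS[tree[0]](evaluate(tree[1], x), evaluate(tree[2], x))

def fitness(tree):
    """Squared error against the target x*x + x (lower is fitter)."""
    xs = [i / 4.0 for i in range(-8, 9)]
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in xs)

def subtree(t):
    while isinstance(t, list) and random.random() < 0.5:
        t = t[random.choice([1, 2])]
    return t

def crossover(a, b):
    """Copy of `a` with one random subtree replaced by a subtree of `b`."""
    if not isinstance(a, list) or random.random() < 0.3:
        return subtree(b)
    child = list(a)
    i = random.choice([1, 2])
    child[i] = crossover(child[i], b)
    return child

def prune(t, depth=MAX_DEPTH):
    """Hard depth cap to control the bloat typical of GP runs."""
    if not isinstance(t, list):
        return t
    if depth == 0:
        return "x"
    return [t[0], prune(t[1], depth - 1), prune(t[2], depth - 1)]

pop = [rand_tree(3) for _ in range(60)]
best0 = min(fitness(t) for t in pop)
for gen in range(30):
    pop.sort(key=fitness)
    elite = pop[:10]                     # elitism: the fittest survive intact
    pop = elite + [prune(crossover(random.choice(elite), random.choice(pop)))
                   for _ in range(50)]
best = min(fitness(t) for t in pop)
print(best <= best0)                     # -> True: elitism never loses ground
```

Production GP systems add mutation, tournament selection and parsimony pressure, but the generate-evaluate-select loop above is the essence of the technique.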
  411. A. Sill (TEXAS TECH UNIVERSITY)
    30/09/2004, 18:10
    Track 5 - Distributed Computing Systems and Experiences
    oral presentation
    To maximize the physics potential of the data currently being taken, the CDF collaboration at Fermi National Accelerator Laboratory has started to deploy user analysis computing facilities at several locations throughout the world. Over 600 users are signed up and able to submit their physics analysis and simulation applications directly from their desktop or laptop computers to these...
    Go to contribution page
  412. G. Eulisse (NORTHEASTERN UNIVERSITY OF BOSTON (MA) U.S.A.)
    30/09/2004, 18:10
    Track 3 - Core Software
    oral presentation
    A fundamental part of software development is to detect and analyse the weak spots of programs in order to guide optimisation efforts. We present a brief overview and usage experience of some of the most valuable open-source tools, such as valgrind and oprofile. We describe their main strengths and weaknesses as experienced by the CMS experiment. As we have found that these tools do not satisfy...
    Go to contribution page
  413. Mrs L. Ma (INSTITUTE OF HIGH ENERGY PHYSICS)
    30/09/2004, 18:10
    Track 7 - Wide Area Networking
    oral presentation
    Network security at IHEP is becoming one of the most important issues of the computing environment. To protect its computing and network resources against attacks and viruses from outside the institute, security measures have been implemented. To enforce the security policy, the network infrastructure was re-configured into one intranet and two DMZ areas. New rules to control the...
    Go to contribution page
  414. Mark DONSZELMANN (Extensions to JAS)
    30/09/2004, 18:10
    Track 2 - Event processing
    oral presentation
    JAS3 is a general-purpose, experiment-independent, open-source data analysis tool. JAS3 includes a variety of features, including histogramming, plotting, fitting, data access, tuple analysis, spreadsheet and event display capabilities. More complex analysis can be performed using several scripting languages (pnuts, jython, etc.), or by writing Java analysis classes. All of these...
    Go to contribution page
  415. Dr Pierre Vande Vyvre (CERN)
    01/10/2004, 08:30
    Plenary Sessions
    oral presentation
  416. Stephen Gowdy (SLAC)
    01/10/2004, 08:55
    Plenary Sessions
    oral presentation
  417. Philippe Canal (FNAL)
    01/10/2004, 09:20
    Plenary Sessions
    oral presentation
  418. Massimo LAMANNA (CERN)
    01/10/2004, 09:45
    Plenary Sessions
    oral presentation
  419. Douglas OLSON
    01/10/2004, 10:40
    Plenary Sessions
    oral presentation
  420. Tim Smith (CERN)
    01/10/2004, 11:05
    Plenary Sessions
    oral presentation
  421. Peter CLARKE
    01/10/2004, 11:30
    Plenary Sessions
    oral presentation
  422. L. BAUERDICK (FNAL)
    01/10/2004, 11:55
    Plenary Sessions
    oral presentation
  423. Wolfgang von Rueden (CERN/ALE)
    01/10/2004, 12:25
    Plenary Sessions
    oral presentation