27 September 2004 to 1 October 2004
Interlaken, Switzerland
Europe/Zurich timezone

Session: Online Computing
27 Sept 2004, 14:00
Interlaken, Switzerland

  1. T.M. Steinbeck (KIRCHHOFF INSTITUTE OF PHYSICS, RUPRECHT-KARLS-UNIVERSITY HEIDELBERG, for the Alice Collaboration)
    27/09/2004, 14:00
    Track 1 - Online Computing
    oral presentation
    The Alice High Level Trigger (HLT) is foreseen to consist of a cluster of 400 to 500 dual SMP PCs at the start-up of the experiment. Its input data rate can be up to 25 GB/s. This has to be reduced to at most 1.2 GB/s, through event selection, filtering, and data compression, before the data is sent to DAQ. For these processing purposes, the data is passed through the cluster in...
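A rough check of the reduction the quoted rates imply (the 25 GB/s and 1.2 GB/s figures are from the abstract; the example split between compression and selection is an illustrative assumption):

```python
# Back-of-the-envelope check of the HLT data reduction quoted above.
input_rate = 25.0    # maximum HLT input rate in GB/s, from the abstract
output_rate = 1.2    # maximum rate to DAQ in GB/s, from the abstract

reduction = input_rate / output_rate
print(f"required overall reduction: ~{reduction:.0f}x")          # ~21x
# If, say, compression contributed a factor 2 (assumed, not from the
# abstract), event selection and filtering would still need the rest:
print(f"with 2x compression: ~{reduction / 2:.1f}x from selection")
```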
  2. M. Sutton (UNIVERSITY COLLEGE LONDON)
    27/09/2004, 14:20
    Track 1 - Online Computing
    oral presentation
    The architecture and performance of the ZEUS Global Track Trigger (GTT) are described. Data from the ZEUS silicon Micro Vertex Detector's HELIX readout chips, corresponding to 200k channels, are digitized by 3 crates of ADCs, and PowerPC VME board computers push cluster data for second-level trigger processing and strip data for event building via Fast and Gigabit Ethernet network...
  3. A. Di Mattia (INFN)
    27/09/2004, 14:40
    Track 1 - Online Computing
    oral presentation
    The Atlas Level-2 trigger provides a software-based event selection after the initial Level-1 hardware trigger. For muon events, the selection is decomposed into a number of broad steps: first, the Muon Spectrometer data are processed to give physics quantities associated with the muon track (standalone feature extraction); then, other detector data are used to refine the extracted...
  4. M. Ye (INSTITUTE OF HIGH ENERGY PHYSICS, ACADEMIA SINICA)
    27/09/2004, 15:00
    Track 1 - Online Computing
    oral presentation
    This article introduces an embedded Linux system based on VME-series PowerPC boards, as well as the basic method of establishing the system. The goal of the system is to build a test system for VMEbus devices. It can also be used to set up data acquisition and control systems. Two types of compiler are provided by the development system according to the features of the system and the...
  5. G. CHEN (COMPUTING CENTER, INSTITUTE OF HIGH ENERGY PHYSICS, CHINESE ACADEMY OF SCIENCES)
    27/09/2004, 15:20
    Track 1 - Online Computing
    oral presentation
    BES is an experiment at the Beijing Electron-Positron Collider (BEPC). The BES computing environment consists of a PC/Linux cluster and relies mainly on free software. OpenPBS and Ganglia are used as the job scheduling and monitoring systems. With help from the CERN IT Division, CASTOR was implemented as the storage management system. BEPC is being upgraded and its luminosity will increase one hundred times...
  6. H-J. Mathes (FORSCHUNGSZENTRUM KARLSRUHE, INSTITUT FÜR KERNPHYSIK)
    27/09/2004, 15:40
    Track 1 - Online Computing
    oral presentation
    S. Argirò (1), A. Kopmann (2), O. Martineau (2), H.-J. Mathes (2) for the Pierre Auger Collaboration; (1) INFN, Sezione Torino; (2) Forschungszentrum Karlsruhe. The Pierre Auger Observatory, currently under construction in Argentina, will investigate extensive air showers at energies above 10^18 eV. It consists of a ground array of 1600 water Cherenkov detectors and 24 fluorescence...
  7. I. Sourikova (BROOKHAVEN NATIONAL LABORATORY)
    27/09/2004, 16:30
    Track 1 - Online Computing
    oral presentation
    To benefit from substantial advancements in Open Source database technology and to ease deployment and development concerns with Objectivity/DB, the Phenix experiment at RHIC is migrating its principal databases from Objectivity to a relational database management system (RDBMS). The challenge of designing a relational DB schema to store a wide variety of calibration classes was...
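The abstract is truncated before the schema itself; purely as a generic illustration of the problem it names (many calibration classes in one relational schema), here is a minimal sqlite3 sketch. The table and column names are hypothetical, not the actual PHENIX design:

```python
import sqlite3

# Hypothetical sketch: a common header table keyed by class name and
# validity interval, plus a generic payload table - one common pattern
# for storing heterogeneous calibration data relationally.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE calib_header (
    id         INTEGER PRIMARY KEY,
    class_name TEXT NOT NULL,     -- which calibration class this row holds
    start_time INTEGER NOT NULL,  -- validity start (unix time)
    end_time   INTEGER NOT NULL   -- validity end
);
CREATE TABLE calib_payload (
    header_id INTEGER REFERENCES calib_header(id),
    channel   INTEGER,
    value     REAL
);
""")
# Fetch the calibration set valid at a given time for a given class:
rows = conn.execute(
    "SELECT p.channel, p.value FROM calib_payload p "
    "JOIN calib_header h ON p.header_id = h.id "
    "WHERE h.class_name = ? AND h.start_time <= ? AND h.end_time > ?",
    ("DriftVelocity", 1096300800, 1096300800),
).fetchall()
```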
  8. D. Winter (COLUMBIA UNIVERSITY)
    27/09/2004, 16:50
    Track 1 - Online Computing
    oral presentation
    The PHENIX detector consists of 14 detector subsystems. It is designed such that individual subsystems can be read out in parallel, either independently or as a single unit. The DAQ used to read the detector is a highly pipelined parallel system. Because PHENIX is interested in rare physics events, the DAQ is required to have a fast trigger, deep buffering, and very high bandwidth. The...
  9. D. Chapin (Brown University)
    27/09/2004, 17:10
    Track 1 - Online Computing
    oral presentation
    The DZERO Level 3 Trigger and Data Acquisition (L3DAQ) system has been running continuously since Spring 2002. DZERO is located at one of the two interaction points in the Fermilab Tevatron Collider. The L3DAQ moves front-end readout data from VME crates to a trigger processor farm. It is built upon a Cisco 6509 Ethernet switch, standard PCs, and commodity VME single-board computers. We...
  10. M. ZUREK (CERN, IFJ KRAKOW)
    27/09/2004, 17:30
    Track 1 - Online Computing
    oral presentation
    The talk presents the experience gathered during testbed administration (~100 PCs and 15+ switches) for the ATLAS Experiment at CERN. It covers the techniques used to resolve HW/SW conflicts and network-related problems, the automatic installation and configuration of the cluster nodes, as well as system/service monitoring in a heterogeneous, dynamically changing...
  11. M. Dobson (CERN)
    27/09/2004, 17:50
    Track 1 - Online Computing
    oral presentation
    The ATLAS collaboration ran a Combined Beam Test from May until October 2004. Collection and analysis of data required the integration of several software systems being developed as prototypes for the ATLAS experiment, due to start in 2007. Eleven different detector technologies were integrated with the Data Acquisition system and took data synchronously. The DAQ was integrated...
  12. G. Unel (UNIVERSITY OF CALIFORNIA AT IRVINE AND CERN)
    27/09/2004, 18:10
    Track 1 - Online Computing
    oral presentation
    The ATLAS Trigger and DAQ system is designed to use the Region of Interest (RoI) mechanism to reduce the initial Level 1 trigger rate of 100 kHz down to an Event Building rate of about 3.3 kHz. The DataFlow component of the ATLAS TDAQ system is responsible for reading the detector-specific electronics via 1600 point-to-point readout links, and for the collection and provision of RoI to the...
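Taking the two rates in the abstract at face value gives the required Level-2 rejection:

```python
level1_rate_khz = 100.0  # Level 1 accept rate, from the abstract
eb_rate_khz = 3.3        # Event Building rate, from the abstract

print(f"~{level1_rate_khz / eb_rate_khz:.0f}x rejection needed")  # ~30x
# i.e. roughly 1 in 30 Level-1 accepts survives to event building,
# which is what makes reading only RoI data (rather than whole events)
# at Level 2 attractive.
```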
  13. R. Itoh (KEK)
    29/09/2004, 14:00
    Track 1 - Online Computing
    oral presentation
    A sizeable increase in the machine luminosity of the KEKB accelerator is expected in the coming years. This may result in a shortage of data storage resources for the Belle experiment in the near future, so it is desirable to reduce the data flow as much as possible before writing the data to the storage device. For this purpose, a realtime event reconstruction farm has been installed in...
  14. M. Richter (Department of Physics and Technology, University of Bergen, Norway)
    29/09/2004, 14:20
    Track 1 - Online Computing
    oral presentation
    The ALICE experiment at the LHC will implement a High Level Trigger System in which the information from all major detectors is combined, including the TPC, TRD, DIMUON, ITS, etc. The largest computing challenge is imposed by the TPC, which requires realtime pattern recognition. The main task is to reconstruct the tracks in the TPC and, in a final stage, combine the tracking information from all...
  15. P. Sheldon (VANDERBILT UNIVERSITY)
    29/09/2004, 14:40
    Track 1 - Online Computing
    oral presentation
    The BTeV experiment, a proton/antiproton collider experiment at the Fermi National Accelerator Laboratory, will have a trigger that will perform complex computations (to reconstruct vertices, for example) on every collision (as opposed to the more traditional approach of employing a first level hardware based trigger). This trigger requires large-scale fault adaptive embedded software: ...
  16. R. Panse (KIRCHHOFF INSTITUTE FOR PHYSICS - UNIVERSITY OF HEIDELBERG)
    29/09/2004, 15:00
    Track 1 - Online Computing
    oral presentation
    Supercomputers are increasingly being replaced by PC cluster systems, and future LHC experiments will likewise use large PC clusters. These clusters will consist of off-the-shelf PCs, which in general are not built to run in a PC farm. Configuring, monitoring and controlling such clusters requires a serious amount of time-consuming administrative effort. We propose a cheap and easy...
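The proposal itself is truncated above; purely as a toy illustration of the kind of per-node check such cluster monitoring automates (not the approach from the talk), a crude liveness probe could look like:

```python
import socket

# Toy illustration only - hypothetical host names, not the talk's method.
# Treat "answers on its SSH port" as a crude sign that the node is alive.
NODES = ["node001", "node002", "node003"]

def alive(host: str, port: int = 22, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for node in NODES:
    print(node, "up" if alive(node) else "DOWN")
```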
  17. A. Campbell (DESY)
    29/09/2004, 15:20
    Track 1 - Online Computing
    oral presentation
    We present the scheme in use for online high-level filtering, event reconstruction and classification in the H1 experiment at HERA since 2001. The Data Flow framework (presented at CHEP2001) will be reviewed. It is based on CORBA for all data transfer, multi-threaded C++ code to handle the data flow and synchronisation, and Fortran code for reconstruction and event selection. A...
  18. T. Shears (University of Liverpool)
    29/09/2004, 15:40
    Track 1 - Online Computing
    oral presentation
    The Level 1 and High Level triggers for the LHCb experiment are software triggers which will be implemented on a farm of about 1800 CPUs, connected to the detector read-out system by a large Gigabit Ethernet LAN with a capacity of 8 Gigabyte/s and some 500 Gigabit Ethernet links. The architecture of the readout network must be designed to maximise data throughput, control data flow,...
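A quick sanity check on the quoted network figures (8 GB/s aggregate, ~500 Gigabit links, ~1800 CPUs, all from the abstract; the even-spread assumption is mine):

```python
aggregate_mb_s = 8000.0  # 8 Gigabyte/s total capacity, from the abstract
n_links = 500            # Gigabit Ethernet links, from the abstract
n_cpus = 1800            # farm CPUs, from the abstract

# Assuming (an idealisation) that the load spreads evenly:
per_link = aggregate_mb_s / n_links  # ~16 MB/s, i.e. ~13% of a 1 Gbit/s link
per_cpu = aggregate_mb_s / n_cpus    # ~4.4 MB/s delivered per farm CPU
print(f"~{per_link:.0f} MB/s per link, ~{per_cpu:.1f} MB/s per CPU")
```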
  19. V. Gyurjyan (Jefferson Lab)
    29/09/2004, 16:30
    Track 1 - Online Computing
    oral presentation
    A general overview of the Jefferson Lab data acquisition run control system is presented. This run control system is designed to operate the configuration, control, and monitoring of all Jefferson Lab experiments. It controls data-taking activities by coordinating the operation of DAQ sub-systems, online software components and third-party software such as external slow control...
  20. F. Carena (CERN)
    29/09/2004, 16:50
    Track 1 - Online Computing
    oral presentation
    The Experiment Control System (ECS) is the top level of control of the ALICE experiment. Running an experiment implies performing a set of activities on the online systems that control the operation of the detectors. In ALICE, online systems are the Trigger, the Detector Control Systems (DCS), the Data-Acquisition System (DAQ) and the High-Level Trigger (HLT). The ECS provides a...
  21. D. Liko (CERN)
    29/09/2004, 17:10
    Track 1 - Online Computing
    oral presentation
    The unprecedented size and complexity of the ATLAS TDAQ system require a comprehensive and flexible control system. Its role ranges from the so-called run control, e.g. starting and stopping the data taking, to error handling and fault tolerance. It also includes initialisation and verification of the overall system. Following the traditional approach, a hierarchical system of...
  22. G. Watts (UNIVERSITY OF WASHINGTON)
    29/09/2004, 17:30
    Track 1 - Online Computing
    oral presentation
    The DZERO collider experiment logs much of its data acquisition monitoring information in long-term storage. This information is most frequently used to understand shift history and efficiency. Approximately two kilobytes of information are stored every 15 seconds. We describe this system and the web interface provided. The current system is distributed, running on Linux for the back end...
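The quoted logging rate is tiny per sample but steady; projecting it forward (2 kB per 15 s from the abstract, continuous running assumed):

```python
bytes_per_sample = 2048  # ~2 kB per store, from the abstract
interval_s = 15          # one store every 15 seconds, from the abstract

per_day_mb = bytes_per_sample * (86400 / interval_s) / 1e6
print(f"~{per_day_mb:.0f} MB/day, ~{per_day_mb * 365 / 1000:.1f} GB/year")
# ~12 MB/day, ~4.3 GB/year of monitoring data accumulates
```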
  23. L. Abadie (CERN)
    29/09/2004, 17:50
    Track 1 - Online Computing
    oral presentation
    The aim of the LHCb configuration database is to store all the controllable devices of the detector. The experiment's control system (that uses PVSS) will configure, start up and monitor the detector from the information in the configuration database. The database will contain devices with their properties, connectivity and hierarchy. The ability to rapidly store and retrieve huge amounts...
  24. T.M. Steinbeck (KIRCHHOFF INSTITUTE OF PHYSICS, RUPRECHT-KARLS-UNIVERSITY HEIDELBERG, for the Alice Collaboration)
    29/09/2004, 18:10
    Track 1 - Online Computing
    oral presentation
    The Alice High Level Trigger (HLT) cluster is foreseen to consist of 400 to 500 dual SMP PCs at the start-up of the experiment. The software running on these PCs will consist of components communicating via a defined interface, allowing flexible software configurations. During Alice's operation the HLT has to be continuously active to avoid detector dead time. To ensure that the...