21–27 Mar 2009
Prague
Europe/Prague timezone

Session

Online Computing

23 Mar 2009, 14:00

Prague Congress Centre, 5. května 65, 140 00 Prague 4, Czech Republic

Conveners

Online Computing: Monday

  • Wainer Vandelli (INFN)

Online Computing: Monday

  • Pierre Vande Vyvre (CERN)

Online Computing: Tuesday

  • Gordon Watts (University of Washington)

Online Computing: Tuesday

  • Rainer Mankel (CERN)

Online Computing: Thursday

  • Clara Gaspar (CERN)

Description

Sponsored by ACEOLE

  1. Dr Johannes Gutleber (CERN)
    23/03/2009, 14:00
    Online Computing
    oral
    The CMS data acquisition system is made of two major subsystems: event building and event filter. The presented paper describes the architecture and design of the software that processes the data flow in the currently operating experiment. The central DAQ system relies heavily on industry standard networks and processing equipment. Adopting a single software infrastructure in all...
  2. Werner Wiedenmann (University of Wisconsin)
    23/03/2009, 14:20
    Online Computing
    oral
    Event selection in the ATLAS High Level Trigger is accomplished to a large extent by reusing software components and event selection algorithms developed and tested in an offline environment. Many of these offline software modules are not specifically designed to run in a heavily multi-threaded online data-flow environment. The ATLAS High Level Trigger (HLT) framework, based on the GAUDI and...
  3. Mr SooHyung Lee (Korea Univ.)
    23/03/2009, 14:40
    Online Computing
    oral
    The real-time data analysis at next-generation experiments is a challenge because of their enormous data rate and size. The SuperKEKB experiment, the upgraded Belle experiment, will have to process 100 times more data than the current experiment, taken at 10 kHz. Offline-level data analysis in the HLT farm is necessary for efficient data reduction. The real-time processing of huge data volumes is also...
  4. Dr Clara Gaspar (CERN)
    23/03/2009, 15:00
    Online Computing
    oral
    LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and the Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services...
  5. Ms Chiara Zampolli (CERN)
    23/03/2009, 15:20
    Online Computing
    oral
    The ALICE experiment is the dedicated heavy-ion experiment at the CERN LHC and will take data with a bandwidth of up to 1.25 GB/s. It consists of 18 subdetectors that interact with five online systems (DAQ, DCS, ECS, HLT and Trigger). Data recorded are read out by DAQ in a raw data stream produced by the subdetectors. In addition the subdetectors produce conditions data derived from the raw...
  6. Prof. Gordon Watts (University of Washington)
    23/03/2009, 15:40
    Online Computing
    oral
    The DZERO Level 3 Trigger and data acquisition system has been running successfully since March 2001, taking data for the DZERO experiment located at the Tevatron at the Fermi National Accelerator Laboratory. Based on commodity parts, it reads out 65 VME front-end crates and delivers 250 MB of data to one of 1200 processing cores for a high-level trigger decision at a rate of 1 kHz. Accepted...
  7. Daniel Sonnick (University of Applied Sciences Kaiserslautern)
    23/03/2009, 16:30
    Online Computing
    oral
    In LHCb raw data files are created on a high-performance storage system using a custom, speed-optimized file-writing software. The file-writing is orchestrated by a database, which represents the life-cycle of a file and is the entry point for all operations related to files, such as run-start, run-stop, file-migration, file-pinning and ultimately file-deletion. File copying to the...
  8. Mr Matteo Marone (Universita degli Studi di Torino - Universita & INFN, Torino)
    23/03/2009, 16:50
    Online Computing
    oral
    The CMS detector at LHC is equipped with a high precision lead tungstate crystal electromagnetic calorimeter (ECAL). The front-end boards and the photodetectors are monitored using a network of DCU (Detector Control Unit) chips located on the detector electronics. The DCU data are accessible through token rings controlled by an XDAQ based software component. Relevant parameters are...
  9. Mr Barthélémy von Haller (CERN)
    23/03/2009, 17:10
    Online Computing
    oral
    ALICE is one of the four experiments installed at the CERN Large Hadron Collider (LHC), especially designed for the study of heavy-ion collisions. The online Data Quality Monitoring (DQM) is an important part of the data acquisition (DAQ) software. It involves the online gathering, the analysis by user-defined algorithms and the visualization of monitored data. This paper presents the final...
  10. Dr Hannes Sakulin (European Organization for Nuclear Research (CERN))
    23/03/2009, 17:30
    Online Computing
    oral
    The CMS Data Acquisition cluster, which runs around 10000 applications, is configured dynamically at run time. XML configuration documents determine what applications are executed on each node and over what networks these applications communicate. Through this mechanism the DAQ System may be adapted to the required performance, partitioned in order to perform (test-) runs in parallel, or...
  11. Giovanni Polese (Lappeenranta Univ. of Technology)
    23/03/2009, 17:50
    Online Computing
    oral
    The Resistive Plate Chamber system is composed of 912 double-gap chambers equipped with about 10^4 front-end boards. The correct and safe operation of the RPC system requires a sophisticated and complex online Detector Control System, able to monitor and control 10^4 hardware devices distributed over an area of about 5000 m^2. The RPC DCS acquires, monitors and stores about 10^5 parameters...
  12. Alina Corso-Radu (University of California, Irvine)
    23/03/2009, 18:10
    Online Computing
    oral
    ATLAS is one of the four experiments at the Large Hadron Collider (LHC) at CERN, which was put into operation this year. The challenging experimental environment and the extreme detector complexity required the development of a highly scalable distributed monitoring framework, which is currently being used to monitor the quality of the data being taken as well as the operational conditions of the...
  13. Albert Puig Navarro (Universidad de Barcelona), Markus Frank (CERN)
    24/03/2009, 14:40
    Online Computing
    oral
    The LHCb experiment at the LHC accelerator at CERN will collide particle bunches at 40 MHz. After a first level of hardware trigger with output at 1 MHz, the physically interesting collisions will be selected by running dedicated trigger algorithms in the High Level Trigger (HLT) computing farm. It consists of up to roughly 16000 CPU cores and 44TB of storage space. Although limited by...
  14. Dr Alessandro Di Mattia (MSU)
    24/03/2009, 15:00
    Online Computing
    oral
    ATLAS is one of the two general-purpose detectors at the Large Hadron Collider (LHC). The trigger system is responsible for the online selection of interesting collision events. At the LHC design luminosity of 10^34 cm^-2 s^-1 it will need to achieve a rejection factor of the order of 10^7 against random proton-proton interactions, while selecting with high efficiency events that are...
  15. Dr Silvia Amerio (University of Padova & INFN Padova)
    24/03/2009, 15:20
    Online Computing
    oral
    The Silicon Vertex Trigger (SVT) is a processor developed at the CDF experiment to perform fast and precise online track reconstruction. SVT is made of two pipelined processors: the Associative Memory, which finds low-precision tracks, and the Track Fitter, which refines the track quality with high-precision fits. We will describe the architecture and the performance of a next-generation track fitter,...
  16. Vasco Chibante Barroso (CERN)
    24/03/2009, 15:40
    Online Computing
    oral
    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). Some specific calibration tasks are performed regularly for each of the 18 ALICE sub-detectors in order to achieve the most accurate physics measurements. These procedures involve event analysis in a wide...
  17. Dr Hans G. Essel (GSI)
    24/03/2009, 16:30
    Online Computing
    oral
    For the new experiments at FAIR, new data acquisition concepts have to be developed, such as the distribution of self-triggered, time-stamped data streams over high-performance networks for event building. The Data Acquisition Backbone Core (DABC) is a general-purpose software framework designed for the implementation of such data acquisition systems. It is based on C++ and...
  18. Dr Mark Sutton (University of Sheffield)
    24/03/2009, 16:50
    Online Computing
    oral
    The ATLAS experiment is one of two general-purpose experiments at the Large Hadron Collider (LHC). It has a three-level trigger, designed to reduce the 40 MHz bunch-crossing rate to about 200 Hz for recording. Online track reconstruction, an essential ingredient to achieve this design goal, is performed at the software-based second (L2) and third (Event Filter, EF) levels, running on farms of...
  19. Belmiro Pinto (Universidade de Lisboa)
    24/03/2009, 17:10
    Online Computing
    oral
    The ATLAS experiment uses a complex trigger strategy to achieve the necessary Event Filter output rate, making it possible to optimize the storage and processing needs of these data. These needs are described in the ATLAS Computing Model, which embraces Grid concepts. The output coming from the Event Filter will consist of three main streams: a primary stream, the express stream and...
  20. Mr Pablo Martinez Ruiz Del Arbol (Instituto de Física de Cantabria)
    24/03/2009, 17:30
    Online Computing
    oral
    The alignment of the Muon System of CMS is performed using different techniques: photogrammetry measurements, optical alignment and alignment with tracks. For track-based alignment, several methods are employed, ranging from a hit-impact point (HIP) algorithm and a procedure exploiting chamber overlaps to a global fit method based on the Millepede approach. For start-up alignment, cosmic muon...
  21. Jean-Christophe Garnier (Conseil Europeen Recherche Nucl. (CERN))
    24/03/2009, 17:50
    Online Computing
    oral
    The High Level Trigger and Data Acquisition system selects about 2 kHz of events out of the 40 MHz of beam crossings. The selected events are consolidated into files on onsite storage and then sent to permanent storage for subsequent analysis on the Grid. For local and full-chain tests, a method is needed to exercise the data flow through the High Level Trigger when no actual data are available....
  22. Mr Bjorn Nordkvist (Stockholm University), on behalf of the ATLAS Tile Calorimeter system
    24/03/2009, 18:10
    Online Computing
    oral
    The ATLAS Tile Calorimeter is ready for data taking during the proton-proton collisions provided by the Large Hadron Collider (LHC). The Tile Calorimeter is a sampling calorimeter with iron absorbers and scintillators as the active medium. The scintillators are read out by wavelength-shifting fibers and PMTs. The LHC provides collisions every 25 ns, putting very stringent requirements on the...
  23. Mr Pierre VANDE VYVRE (CERN)
    26/03/2009, 16:30
    Online Computing
    oral
    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). A large-bandwidth and flexible Data Acquisition System (DAQ) has been designed and deployed to collect sufficient statistics in the short running time available per year for heavy ions and to...
  24. Dr Jose Antonio Coarasa Perez (Department of Physics - Univ. of California at San Diego (UCSD) and CERN, Geneva, Switzerland)
    26/03/2009, 16:50
    Online Computing
    oral
    The CMS online cluster consists of more than 2000 computers, mostly running Scientific Linux CERN, hosting the 10000 application instances responsible for data acquisition and experiment control on a 24/7 basis. The challenging size of the cluster constrained the design and implementation of the infrastructure: the critical nature of the control applications demands a tight...
  25. Mr Thilo Pauly (CERN)
    26/03/2009, 17:10
    Online Computing
    oral
    The ATLAS Level-1 Central Trigger (L1CT) electronics is a central part of ATLAS data-taking. It receives the 40 MHz bunch clock from the LHC machine and distributes it to all sub-detectors. It initiates the detector read-out by forming the Level-1 Accept decision, which is based on information from the calorimeter and muon trigger processors, plus a variety of additional trigger inputs from...
  26. Yoshiji Yasu (High Energy Accelerator Research Organization (KEK))
    26/03/2009, 17:30
    Online Computing
    oral
    DAQ-Middleware is a software framework for network-distributed DAQ systems based on Robot Technology Middleware, an international standard of the Object Management Group (OMG) in robotics developed by AIST. A DAQ-Component is the software unit of DAQ-Middleware. Basic components have already been developed: for example, Gatherer is a readout component, Logger is a logging component, Monitor is...
  27. Giovanni Organtini (Univ. + INFN Roma 1)
    26/03/2009, 17:50
    Online Computing
    oral
    The Electromagnetic Calorimeter (ECAL) of the CMS experiment at the LHC is made of about 75000 scintillating crystals. The detector properties must be continuously monitored in order to ensure the extreme stability and precision required by its design. This leads to a very large volume of non-event data to be accessed continuously by shifters, experts, automatic monitoring tasks,...
  28. Dr Ivan Kisel (GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt)
    26/03/2009, 18:10
    Online Computing
    oral
    The CBM Collaboration is building a dedicated heavy-ion experiment to investigate the properties of highly compressed baryonic matter as produced in nucleus-nucleus collisions at the Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany. This requires the collection of a huge number of events, which can only be obtained with very high reaction rates and long data-taking periods....
    Go to contribution page