21–25 May 2012
New York City, NY, USA
US/Eastern timezone

Session

Online Computing 01

21 May 2012, 13:30
New York City, NY, USA

Conveners

  • Niko Neufeld (CERN)
  • Sylvain Chapeland (CERN)
  • Remi Mommsen (Fermi National Accelerator Lab. (US))


  1. Mr Vasco Chibante Barroso (CERN)
    21/05/2012, 13:30
    Online Computing (track 1)
    Parallel
A Large Ion Collider Experiment (ALICE) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). Since its successful start-up in 2010, the LHC has been performing outstandingly, providing the experiments with long periods of stable collisions and an integrated luminosity that greatly exceeds the...
  2. David Michael Rohr (Johann-Wolfgang-Goethe Univ. (DE))
    21/05/2012, 13:55
    Online Computing (track 1)
    Parallel
    The ALICE High Level Trigger (HLT) is capable of performing an online reconstruction of heavy-ion collisions. The reconstruction of particle trajectories in the Time Projection Chamber (TPC) is the most compute intensive step. The TPC online tracker implementation combines the principle of the cellular automaton and the Kalman filter. It has been accelerated by the usage of graphics cards...
  3. Diego Casadei (New York University (US))
    21/05/2012, 14:20
    Online Computing (track 1)
    Parallel
    The ATLAS trigger has been used very successfully to collect collision data during 2009-2011 LHC running at centre of mass energies between 900 GeV and 7 TeV. The three-level trigger system reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of about 300 Hz. The first level uses custom electronics to reject most background collisions, in less...
  4. Dr Giuseppe Avolio (University of California Irvine (US))
    21/05/2012, 14:45
    Online Computing (track 1)
    Parallel
The Trigger and DAQ (TDAQ) system of the ATLAS experiment is a very complex distributed computing system, composed of O(10000) applications running on more than 2000 computers. The TDAQ Controls system has to guarantee the smooth and synchronous operations of all TDAQ components and has to provide the means to minimize the downtime of the system caused by runtime failures, which are...
  5. Andrea Negri (Universita e INFN (IT))
    21/05/2012, 15:10
    Online Computing (track 1)
    Parallel
    The ATLAS experiment at the Large Hadron Collider at CERN relies on a complex and highly distributed Trigger and Data Acquisition (TDAQ) system to gather and select particle collision data at unprecedented energy and rates. The TDAQ is composed of three levels which reduces the event rate from the design bunch-crossing rate of 40 MHz to an average event recording rate of about 200 Hz. The...
  6. Hannes Sakulin (CERN)
    21/05/2012, 16:35
    Online Computing (track 1)
    Parallel
    The data-acquisition (DAQ) system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first level hardware trigger. Assembled events are made available to the high-level trigger (HLT), which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregated throughput of 100...
  7. Andrea Petrucci (CERN)
    21/05/2012, 17:00
    Online Computing (track 1)
    Parallel
    The Data Acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment at CERN assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s. By the time the LHC restarts after the 2013/14 shut-down, the current compute nodes and networking infrastructure will have reached the end of their lifetime. We are presenting design studies for an...
  8. Robert Gomez-Reino Garrido (CERN)
    21/05/2012, 17:25
    Online Computing (track 1)
    Parallel
The Compact Muon Solenoid (CMS) is a CERN multi-purpose experiment that exploits the physics of the Large Hadron Collider (LHC). The Detector Control System (DCS) ensures a safe, correct and efficient experiment operation, contributing to the recording of high quality physics data. The DCS is programmed to automatically react to the LHC changes. CMS sub-detector's bias voltages are set...
  9. Mariusz Witek (Polish Academy of Sciences (PL))
    21/05/2012, 17:50
    Online Computing (track 1)
    Parallel
    The LHCb experiment is a spectrometer dedicated to the study of heavy flavor at the LHC. The rate of proton-proton collisions at the LHC is 15 MHz, but disk space limitations mean that only 3 kHz can be written to tape for offline processing. For this reason the LHCb data acquisition system -- trigger -- plays a key role in selecting signal events and rejecting background. In contrast to...
  10. Andrew Norman (Fermilab)
    22/05/2012, 16:35
    Online Computing (track 1)
    Parallel
The NOvA experiment at Fermi National Accelerator Lab has been designed and optimized to perform a suite of measurements critical to our understanding of the neutrino's properties, their oscillations and their interactions. NOvA presents a unique set of data acquisition and computing challenges due to the immense size of the detectors, the data volumes that are generated through the...
  11. Qiming Lu (Fermi National Accelerator Laboratory)
    22/05/2012, 17:00
    Online Computing (track 1)
    Parallel
    A complex running system, such as the NOvA online data acquisition, consists of a large number of distributed but closely interacting components. This paper describes a generic realtime correlation analysis and event identification engine, named Message Analyzer. Its purpose is to capture run time abnormalities and recognize system failures based on log messages from participating components....
  12. Matt Toups (Columbia University)
    22/05/2012, 17:25
    Online Computing (track 1)
    Parallel
The Double Chooz (DC) reactor anti-neutrino experiment consists of a neutrino detector and a large area Outer Veto detector. A custom data-acquisition (DAQ) system, written in the Ada language for all the sub-detectors of the neutrino detector, and a generic object-oriented data acquisition system for the Outer Veto detector were developed. Generic object-oriented programming was also used to...
  13. Linda Coney (University of California, Riverside)
    22/05/2012, 17:50
    Online Computing (track 1)
    Parallel
    The Muon Ionization Cooling Experiment (MICE) is designed to test transverse cooling of a muon beam, demonstrating an important step along the path toward creating future high intensity muon beam facilities. Protons in the ISIS synchrotron impact a titanium target, producing pions which decay into muons that propagate through the beam line to the MICE cooling channel. Along the beam line,...
  14. Dr William Badgett (Fermilab)
    24/05/2012, 16:35
    Online Computing (track 1)
    Parallel
The CDF Collider Detector at Fermilab ceased data collection on September 30, 2011 after over twenty-five years of operation. We review the performance of the CDF Run II data acquisition systems over the last ten of these years while recording nearly 10 fb-1 of proton-antiproton collisions with a high degree of efficiency. Technology choices in the online control and configuration systems...
  15. Gordon Watts (University of Washington (US))
    24/05/2012, 17:00
    Online Computing (track 1)
    Parallel
The Tevatron Collider, located at the Fermi National Accelerator Laboratory, delivered its last 1.96 TeV proton-antiproton collisions on September 30th, 2011. The DZERO experiment continues to take cosmic data for final alignment for several more months. Since Run 2 started in March 2001, all DZERO data has been collected by the DZERO Level 3 Trigger/DAQ System. The system is a modern,...
  16. Dr Krzysztof Korcyl (Polish Academy of Sciences (PL))
    24/05/2012, 17:25
    Online Computing (track 1)
    Parallel
A novel architecture is being proposed for the data acquisition and trigger system for the PANDA experiment at the HESR facility at FAIR/GSI. The experiment will run without the hardware trigger signal and use timestamps to correlate detector data from a given time window. The broad physics program in combination with the high rate of 2×10^7 interactions requires very selective filtering...
  17. Dr Dirk Hoffmann (Universite d'Aix - Marseille II (FR))
    24/05/2012, 17:50
    Online Computing (track 1)
    Parallel
We present the prototyping of a 10 Gigabit Ethernet based UDP data acquisition (DAQ) system that has been conceived in the context of the Array and Control group of CTA (Cherenkov Telescope Array). The CTA consortium plans to build the next generation ground-based gamma-ray instrument, with approximately 100 telescopes of at least three different sizes installed on two sites. The genuine camera...