Conveners
T1 - Online computing: S1
- Adriana Telesca (CERN)
T1 - Online computing: S2
- Clara Gaspar (CERN)
T1 - Online computing: S3
- Ryosuke Itoh (KEK)
T1 - Online computing: S4
- Catrin Bernius (SLAC National Accelerator Laboratory (US))
T1 - Online computing: S5
- Catrin Bernius (SLAC National Accelerator Laboratory (US))
T1 - Online computing: S6
- Ryosuke Itoh (KEK)
T1 - Online computing: S7
- Adriana Telesca (CERN)
- Clara Gaspar (CERN)
Data Acquisition (DAQ) systems are a vital component of every experiment. The purpose of the underlying software of these systems is to coordinate all the hardware components and detector states, providing the means of data readout, triggering, online processing, persistence, user control and the routing of data. These tasks are made more challenging when also considering fault tolerance,...
Recently, the stability of the data acquisition system (DAQ) has become a vital precondition for successful data taking in high energy physics experiments. The intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN is designed to read out data at the maximum rate of the experiment and to run without any stops. DAQ systems fulfilling such...
The NA62 experiment looks for the extremely rare kaon decay K+→π+νν and aims at measuring its branching ratio with 10% accuracy.
In order to do so a very high intensity secondary beam from the CERN SPS is used to produce charged Kaons whose decay products are detected by many detectors installed along a 150m decay region.
The NA62 Data Acquisition system exploits a multilevel trigger...
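A brief statistical aside on what the 10% goal implies (an illustrative estimate ignoring background and systematics, not taken from the contribution): a purely statistical relative precision $\delta B/B \approx 1/\sqrt{N} = 10\%$ requires $N \approx 100$ observed signal events, which for a branching ratio of order $10^{-10}$ is what drives the need for such a high intensity beam.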
The Trigger and DAQ (TDAQ) system of the ATLAS experiment is a complex
distributed computing system, composed of O(30000) applications
running on a farm of computers. The system is operated by a crew of
operators on shift. An important aspect of operations is to minimize
the downtime of the system caused by runtime failures, such as human
errors, unawareness, miscommunication, etc.
The...
The data acquisition (DAQ) system of the Compact Muon Solenoid (CMS) at CERN reads out the detector at the level-1 trigger accept rate of 100 kHz, assembles events with a bandwidth of 200 GB/s, provides these events to the high-level trigger running on a farm of 26000 cores and records the accepted events. Comprising custom-built and cutting-edge commercial hardware and several thousand instances...
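As a quick consistency check of these figures: an event-building bandwidth of 200 GB/s at a level-1 accept rate of 100 kHz corresponds to a mean event size of $200\,\mathrm{GB/s} / 100\,\mathrm{kHz} = 2\,\mathrm{MB}$, matching the CMS event size quoted later in this track.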
In 2019, the ATLAS experiment at CERN is planning an upgrade
in order to cope with the higher luminosity requirements. In this
upgrade, the installation of the new muon chambers for the end-cap
muon system will be carried out. Muon track reconstruction performance
can be improved, and fake triggers can be reduced. It is also
necessary to develop a readout system of trigger data for the...
LHCb is one of the 4 experiments at the LHC accelerator at CERN, specialized in b-physics. During the next long shutdown period, the LHCb experiment will be upgraded to a trigger-less readout system with a full software trigger in order to be able to record data with a much higher instantaneous luminosity. To achieve this goal, the upgraded systems for trigger, timing and fast control (TFC)...
Data acquisition and control play an important role in science applications, especially in modern high energy physics (HEP) experiments. A comprehensive and efficient monitoring system is a vital part of any HEP experiment. In this paper we describe the web-based software framework which is currently used by the CMD-3 Collaboration during data taking with the CMD-3 Detector at the VEPP-2000...
This paper presents the Detector Control System (DCS) that is being designed and implemented for the NP04 experiment at CERN. NP04, also known as protoDUNE Single Phase (SP), aims at validating the engineering processes and detector performance of a large LAr Time Projection Chamber in view of the DUNE experiment. The detector is under construction and will be operated on a tertiary beam of...
During Run-2 of the Large Hadron Collider (LHC) the instantaneous luminosity exceeds the nominal value of 10^{34} cm^{-2} s^{-1} with a 25 ns bunch crossing period, and the number of overlapping proton-proton interactions per bunch crossing increases up to about 80. These conditions pose a challenge to the trigger systems of the experiments, which have to control rates while keeping good...
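A hedged back-of-the-envelope check of the quoted pile-up (the cross section and effective crossing rate below are assumed round numbers, not taken from the contribution): the mean pile-up is $\mu = \sigma_{inel}\mathcal{L}/(n_b f_{rev})$, and with $\sigma_{inel} \approx 80$ mb at 13 TeV, $\mathcal{L} \approx 2\times10^{34}$ cm$^{-2}$ s$^{-1}$ and an effective crossing rate $n_b f_{rev} \approx 2\times10^{7}$ s$^{-1}$, one obtains $\mu \approx 80$, consistent with the figure quoted above.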
The LHCb experiment, one of the four experiments operating at the LHC, will undergo a major upgrade of its electronics during the third long shutdown period of the particle accelerator. One of the main objectives of the upgrade effort is to implement a 40 MHz readout of collision data. For this purpose, the Front-End electronics will make extensive use of a radiation resistant chipset, the Gigabit...
The Electromagnetic Calorimeter (ECAL) is one of the sub-detectors of the Compact Muon Solenoid (CMS) experiment of the Large Hadron Collider (LHC) at CERN. For more than 10 years, the ECAL Detector Control System (DCS) and the ECAL Safety System (ESS) have supported the operation of the experiment, contributing to its high availability and safety. The evolution of both systems to fulfill new...
The ALICE Experiment at CERN LHC (Large Hadron Collider) is under
preparation for a major upgrade that is scheduled to be deployed during Long
Shutdown 2 in 2019-2020 and that includes new computing systems, called O2
(Online-Offline).
To ensure the efficient operation of the upgraded experiment along with its
newly designed computing system, a reliable, high-performance and automated
control...
The BESIII detector is a magnetic spectrometer operating at BEPCII, a
double-ring e+e- collider with center-of-mass energies between 2.0 and
4.6 GeV and a peak luminosity of $10^{33}$ cm$^{-2}$ s$^{-1}$. The event rate
is about 4 kHz after the online event filter (L3 trigger) at J/$\psi$
peak.
The BESIII online data quality monitoring (DQM) system is used to
monitor the data and the detector in...
LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of “online” and “offline” analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process, such that online data are immediately available offline for physics analysis (Turbo...
In spring 2018 the SuperKEKB electron-positron collider at High Energy Accelerator Research Organization (KEK, Tsukuba, Japan) will deliver its first collisions to the Belle II experiment. The aim of Belle II is to collect a data sample 50 times larger than the previous generation of B-Factories taking advantage of the unprecedented SuperKEKB design luminosity of 8x10^35 cm^-2 s^-1. The Belle...
The calibration of the detector in almost real time is a key to the exploitation of the large data volumes at the LHC experiments. For this purpose the CMS collaboration deployed a complex machinery involving several components of the processing infrastructure and of the condition DB system. Accurate reconstruction of data starts only once all the calibrations become available for consumption...
A key ingredient of the data taking strategy used by the LHCb experiment in Run-II is the novel real-time detector alignment and calibration. Data collected at the start of the fill are processed within minutes and used to update the alignment, while the calibration constants are evaluated hourly. This is one of the key elements which allow the reconstruction quality of the software trigger in...
The ALICE experiment at the Large Hadron Collider (LHC) at CERN is planned to be operated in a continuous data-taking mode in Run 3. This will make it possible to inspect data from all collisions at a rate of 50 kHz for Pb-Pb, giving access to rare physics signals embedded in a large background.
Based on experience with real-time reconstruction of particle trajectories and event properties in the ALICE...
The CMS experiment dedicates a significant effort to supervising the quality of its data, online and offline. Real-time data quality (DQ) monitoring is in place to spot and diagnose problems as promptly as possible and avoid data loss. The a posteriori evaluation of processed data is designed to categorize the data in terms of their usability for physics analysis. These activities produce DQ...
The design and performance of the ATLAS Inner Detector (ID) trigger
algorithms running online on the High Level Trigger (HLT) processor
farm for 13 TeV LHC collision data with high pileup are discussed.
The HLT ID tracking is a vital component in all physics signatures
in the ATLAS trigger for the precise selection of the rare or
interesting events necessary for physics analysis...
Track reconstruction at the CMS experiment uses the Combinatorial Kalman Filter. The algorithm computation time scales exponentially with pile-up, which will pose a problem for the High Level Trigger at the High Luminosity LHC. FPGAs, which are already used extensively in hardware triggers, are becoming more widely used for compute acceleration. With a combination of high performance, energy...
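For readers unfamiliar with the algorithm, the sketch below (Python with NumPy; all names and matrices are illustrative assumptions, not CMS code) shows the Kalman update applied once per candidate hit; the combinatorial branching over compatible hits on every detector layer is what drives the growth of computation time with pile-up.

    import numpy as np

    def kalman_update(x, P, m, H, R):
        """One Kalman update of track state x (covariance P)
        with a measured hit m (projection H, hit covariance R)."""
        S = H @ P @ H.T + R                       # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        r = m - H @ x                             # residual: hit vs. prediction
        chi2 = float(r.T @ np.linalg.inv(S) @ r)  # hit compatibility score
        x_new = x + K @ r                         # updated state
        P_new = (np.eye(len(x)) - K @ H) @ P      # updated covariance
        return x_new, P_new, chi2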
In order to profit from the greatly increased instantaneous luminosity provided by the accelerator in Run III (2021-2023), the upgraded LHCb detector will make use of a fully software-based trigger, with real-time event reconstruction and selection performed at the bunch crossing rate of the LHC (~30 MHz). This implies much tighter timing constraints for the event reconstruction...
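To make the timing constraint concrete (the farm size below is a hypothetical round number, not taken from the contribution): a farm of $N$ cores processing 30 MHz of collisions has an average budget of $N/(3\times10^{7}\,\mathrm{s}^{-1})$ per event, so even $N = 10^{4}$ cores leave only about 0.3 ms of single-core time per event for reconstruction and selection.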
Boosted Decision Trees are used extensively in offline analysis and reconstruction in high energy physics. The computation time of ensemble inference has previously prohibited their use in online reconstruction, whether at the software or hardware level. An implementation of BDT inference for FPGAs, targeting low latency by leveraging the platform’s enormous parallelism, is presented. Full...
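As an illustration of why BDT inference parallelizes so well (a minimal sketch with an assumed flat node layout, not the presented FPGA design): each tree is a short chain of compare-and-branch lookups bounded by its depth, and all trees are independent, so a hardware implementation can evaluate every tree simultaneously and sum the leaves in a few clock cycles.

    # node = (feature, threshold, left, right, leaf_value); feature == -1 marks a leaf
    def eval_tree(nodes, x):
        i = 0
        while nodes[i][0] != -1:             # at most `depth` iterations
            feat, thr, left, right, _ = nodes[i]
            i = left if x[feat] <= thr else right
        return nodes[i][4]

    def eval_bdt(trees, x):
        # summed sequentially here; evaluated tree-parallel in hardware
        return sum(eval_tree(t, x) for t in trees)

    # toy usage: a single depth-1 tree splitting on feature 0 at 0.5
    tree = [(0, 0.5, 1, 2, 0.0), (-1, 0, 0, 0, -1.0), (-1, 0, 0, 0, 1.0)]
    print(eval_bdt([tree], [0.7]))           # -> 1.0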
PANDA is one of the main experiments of the future FAIR accelerator facility at Darmstadt. It utilizes an anti-proton beam with a momentum up to 15 GeV/c on a fixed proton or nuclear target to investigate the features of strong QCD.
The reconstruction of charged particle tracks is one of the most challenging aspects in the online and offline reconstruction of the data taken by PANDA. Several...
The ATLAS Fast TracKer (FTK) is a hardware based track finder for the ATLAS trigger infrastructure currently under installation and commissioning. FTK sits between the two layers of the current ATLAS trigger system, the hardware-based Level 1 Trigger and the CPU-based High-Level Trigger (HLT). It will provide full-event tracking to the HLT with a design latency of 100 µs at a 100 kHz event...
In LHC Run 3, ALICE will increase the data taking rate significantly, to 50 kHz continuous readout of minimum bias Pb-Pb collisions.
The reconstruction strategy of the online-offline computing upgrade foresees a first synchronous online reconstruction stage during data taking, enabling detector calibration, and a subsequent asynchronous reconstruction stage on the calibrated data.
Many new challenges arise,...
We have entered the Noisy Intermediate-Scale Quantum (NISQ) era. A plethora of quantum processor prototypes allow evaluation of the potential of the Quantum Computing paradigm in applications to pressing computational problems of the future. Growing data input rates and the detector resolution foreseen in High-Energy LHC (2030s) experiments expose the often high time and/or space complexity of classical...
The Belle II detector is currently being commissioned for operation in early 2018. It is designed to record collision events at an instantaneous luminosity of up to 8×10^35 cm^-2 s^-1, which is delivered by the SuperKEKB collider in Tsukuba, Japan. Such a large luminosity is required to significantly improve the precision of measurements of B and D meson and tau lepton decays to probe for signs of...
The ATLAS Trigger system has been operating successfully during 2017; its excellent performance has been vital for the ATLAS physics program.
The trigger selection capabilities of the ATLAS detector have been significantly enhanced for Run-2 compared to Run-1, in order to cope with the higher event rates and with the large number of simultaneous interactions (pile-up). The improvements at...
The Compressed Baryonic Matter (CBM) experiment at the future FAIR facility requires fast and efficient event reconstruction algorithms. CBM will be one of the first HEP experiments to work in a triggerless mode: data received in the DAQ from the detectors will no longer be associated with events by a hardware trigger. All raw data within a given period of time will be collected...
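To make the triggerless idea concrete (a minimal sketch; the slice length and data layout are illustrative assumptions, not the CBM design): time-stamped raw data are simply grouped into fixed-length time slices, and event definition is deferred entirely to software.

    from collections import defaultdict

    SLICE_NS = 10_000_000                  # 10 ms slice length, illustrative

    def build_timeslices(hits):
        """hits: iterable of (timestamp_ns, data) pairs, in any order."""
        slices = defaultdict(list)
        for t, data in hits:
            slices[t // SLICE_NS].append((t, data))
        return dict(slices)                # slice index -> list of hits

    print(build_timeslices([(5, "a"), (10_000_007, "b"), (9, "c")]))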
The High-Luminosity LHC will open an unprecedented window on the weak-scale nature of the universe, providing high-precision measurements of the standard model as well as searches for new physics beyond the standard model. Such precision measurements and searches require information-rich datasets with a statistical power that matches the high luminosity provided by the Phase-2 upgrade of the...
We present an implementation of the ATLAS High Level Trigger (HLT)
that provides parallel execution of trigger algorithms within the
ATLAS multi-threaded software framework, AthenaMT. This development
will enable the HLT to meet future challenges from the evolution of
computing hardware and upgrades of the Large Hadron Collider (LHC) and
ATLAS Detector. During the LHC data-taking period...
The first LHCb upgrade will take data at an instantaneous luminosity of $2\times10^{33}\mathrm{cm}^{-2}s^{-1}$ starting in 2021. Due to the high rate of beauty and charm signals LHCb will read out the entire detector into a software trigger running on commodity hardware at the LHC collision frequency of 30 MHz. In this talk we present the challenges of triggering in the MHz signal era. We pay...
ALICE (A Large Ion Collider Experiment) is a heavy-ion detector studying the physics of
strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron
Collider). During the second long shut-down of the LHC, the ALICE detector will be
upgraded to cope with an interaction rate of 50 kHz in Pb-Pb collisions, producing in the
online computing system (O2) a sustained input...
The liquid argon Time Projection Chamber technique has matured and is now in use by several short-baseline neutrino experiments. This technology will be used in the long-baseline DUNE experiment; however, this experiment represents a large increase in scale, which needs to be validated explicitly. To this end, both the single-phase and dual-phase technology are being tested at CERN, in two...
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events of 2 MB at a rate of 100 kHz. The event builder collects event fragments from about 740 sources and assembles them into complete events which are then handed to the high-level trigger (HLT) processes running on O(1000) computers. The aging event-building hardware will be replaced...
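For scale (simple arithmetic on the numbers quoted above): with 2 MB events built from about 740 sources, the mean fragment is roughly 2 MB / 740 ≈ 2.7 kB, i.e. each source sustains about 270 MB/s at the 100 kHz level-1 rate.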
Data acquisition (DAQ) systems for high energy physics experiments read out data from a large number of electronic components, typically over thousands of point-to-point links. They are thus inherently distributed systems. Traditionally, an important stage in the data acquisition chain has always been the so-called event building: data fragments coming from different sensors are identified as...
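A minimal sketch of the event-building stage described here (the names and the completeness rule are illustrative assumptions; real systems add timeouts, error handling and back-pressure): fragments are keyed by event identifier, and a complete event is released once every source has contributed.

    from collections import defaultdict

    class EventBuilder:
        def __init__(self, n_sources):
            self.n_sources = n_sources
            self.pending = defaultdict(dict)   # event_id -> {source_id: data}

        def add_fragment(self, event_id, source_id, data):
            frags = self.pending[event_id]
            frags[source_id] = data
            if len(frags) == self.n_sources:   # all fragments seen
                return self.pending.pop(event_id)
            return None                        # event still incomplete

    eb = EventBuilder(n_sources=3)
    eb.add_fragment(42, 0, b"a")
    eb.add_fragment(42, 1, b"b")
    print(eb.add_fragment(42, 2, b"c"))        # complete event 42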
ALICE (A Large Ion Collider Experiment), one of the large LHC experiments, is undergoing a major upgrade during the next long shutdown. The increase in data rates planned for LHC Run 3 (3 TiB/s for Pb-Pb collisions), together with triggerless continuous readout operation, requires a paradigm shift in computing and networking infrastructure.
The new ALICE O2 (online-offline) computing facility consists of two...
The NA62 experiment at CERN SPS is aimed at measuring the branching ratio of the ultra-rare K+→π+νν decay.
This imposes very tight requirements on the particle identification capabilities of the apparatus in order to reject the considerable background.
To this purpose a centralized level 0 hardware trigger system (L0TP) processes in real-time the streams of data primitives coming from the...
ALICE Overwatch is a project started in late 2015 to provide augmented online monitoring and data quality assurance utilizing time-stamped QA histograms produced by the ALICE High Level Trigger (HLT). The system receives the data via ZeroMQ, storing it for later review, enriching it with detector-specific functionality, and visualizing it via a web application. These capabilities are...
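A minimal sketch of such a receiving side (Python with pyzmq; the endpoint, subscription and storage scheme are assumptions for illustration, not the Overwatch code):

    import time
    import zmq

    def store(payload: bytes) -> None:
        # persist the payload with a timestamp for later review
        with open("qa_%d.bin" % int(time.time()), "wb") as f:
            f.write(payload)

    ctx = zmq.Context()
    sock = ctx.socket(zmq.SUB)
    sock.connect("tcp://hlt-gateway.example:5555")   # hypothetical endpoint
    sock.setsockopt_string(zmq.SUBSCRIBE, "")        # subscribe to all topics

    while True:
        store(sock.recv())   # one serialized, time-stamped QA histogram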
The unprecedented size and complexity of the ATLAS experiment required
the adoption of a new approach to online monitoring system development,
as many requirements for this system were not known in advance due to
the innovative nature of the project.
The ATLAS online monitoring facility has been designed as a modular
system consisting of a number of independent components, which can
interact with one...
The Compact Muon Solenoid (CMS) is one of the experiments at the CERN Large Hadron Collider (LHC). The CMS Online Monitoring system (OMS) is an upgrade and successor to the CMS Web-Based Monitoring (WBM) system, which is an essential tool for shift crew members, detector subsystem experts, operations coordinators, and those performing physics analyses. CMS OMS is divided into aggregation and...
Control and monitoring of experimental facilities as well as laboratory equipment requires handling a blend of different tasks. Often in industrial or scientific fields there are standards or form factors to comply with and electronic interfaces or custom busses to adopt. With such tight boundary conditions, the integration of an off-the-shelf Single Board Computer (SBC) is not always a...
The current scientific environment has experimentalists and system administrators allocating large amounts of time to data access, parsing and gathering
as well as instrument management. This is a growing challenge as more large
collaborations have significant amounts of instrument resources, remote instrumentation sites and continuously improved and upgraded scientific...
The part of the CMS data acquisition (DAQ) system responsible for data readout and event building is a complex network of interdependent distributed programs. To ensure successful data taking, these programs have to be constantly monitored in order to facilitate the timeliness of necessary corrections in case of any deviation from specified behaviour. A large number of diverse monitoring data...