Conveners
Track 1 – Online and Real-time Computing: Data acquisition (DAQ)
- Chunhua Li
Track 1 – Online and Real-time Computing: Monitoring and control systems
- Chunhua Li
Track 1 – Online and Real-time Computing: Trigger farms and networks
- Steven Schramm (Université de Genève (CH))
Track 1 – Online and Real-time Computing: Real-time analysis
- Steven Schramm (Université de Genève (CH))
Track 1 – Online and Real-time Computing: Detectors, performance, and analysis
- Yu Nakahama Higuchi (Nagoya University (JP))
Track 1 – Online and Real-time Computing: Hardware acceleration and hardware machine learning
- Jennifer Ngadiuba (CERN)
Track 1 – Online and Real-time Computing: Future upgrades
- Jennifer Ngadiuba (CERN)
Data acquisition systems (DAQ) for high energy physics experiments utilize complex FPGAs to handle unprecedentedly high data rates. This is especially true in the first stages of the processing chain. Developing and commissioning these systems becomes more complex as additional processing intelligence is placed closer to the detector, in a distributed way directly on the ATCA blades, in the...
The data acquisition (DAQ) software for most applications in high energy physics is composed of common building blocks, such as a networking layer, plug-in loading, configuration, and process management. These are often re-invented and developed from scratch for each project or experiment to meet specific needs. In some cases, time and available resources can be limited, making development...
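As a rough illustration of the kind of reusable building blocks this contribution describes, the sketch below shows a minimal plug-in loader driven by a configuration file. This is a generic Python sketch with invented names (`PluginRegistry`, `configure_from_file`), not the framework presented in the contribution.

```python
# Minimal sketch of two common DAQ building blocks: plug-in loading and
# configuration. All names here are hypothetical illustrations, not a real API.
import importlib
import json

class PluginRegistry:
    """Instantiates processing components by dotted module path."""
    def __init__(self):
        self.plugins = {}

    def load(self, name, module_path, class_name, **options):
        cls = getattr(importlib.import_module(module_path), class_name)
        self.plugins[name] = cls(**options)

def configure_from_file(registry, path):
    # The configuration file maps component names to module/class specs.
    with open(path) as f:
        for name, spec in json.load(f).items():
            registry.load(name, spec["module"], spec["class"],
                          **spec.get("options", {}))
```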
The DAQ system of ProtoDUNE-SP successfully proved its design principles and met the requirements of the 2018 beam run. The technical design of the DAQ system for the DUNE experiment has major differences from that of the prototype, owing to different requirements and a different environment. The single-phase prototype at CERN is the major integration facility for R&D aspects of the DUNE DAQ system....
After the current LHC shutdown (2019-2021), the ATLAS experiment will be required to operate in an increasingly harsh collision environment. To maintain physics performance, the ATLAS experiment will undergo a series of upgrades during the shutdown. A key goal of this upgrade is to improve the capacity and flexibility of the detector readout system. To this end, the Front-End Link eXchange...
LHCb is one of the four experiments at the LHC accelerator at CERN. During the upgrade phase of the experiment, the different sub-detectors will add several new electronics boards and Front-End chips that perform the data acquisition for the experiment. These new devices will be controlled and monitored via a system composed of GigaBit Transceiver (GBT) chips that manage the bi-directional...
The Project 8 collaboration seeks to measure the mass of the electron antineutrino, or to bound it more tightly, by applying a novel spectroscopy technique to precision measurement of the tritium beta-decay spectrum. For the current lab-bench-scale phase of the project, a single digitizer produces 3.2 GB/s of raw data. An onboard FPGA uses digital down conversion to extract three 100 MHz wide...
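Digital down conversion, as used here to extract narrow bands from the full digitizer bandwidth, can be sketched in a few lines of numpy: mix the band of interest to baseband with a local oscillator, low-pass filter, and decimate. Only the 100 MHz band width comes from the abstract; the sample rate (assuming one byte per sample of the quoted 3.2 GB/s), centre frequency, and filter length below are illustrative assumptions.

```python
# Illustrative digital down conversion: mix, low-pass filter, decimate.
import numpy as np
from scipy.signal import firwin, lfilter

fs = 3.2e9             # assumed sample rate (abstract quotes 3.2 GB/s raw data)
f_center = 1.1e9       # hypothetical centre of the band to extract (Hz)
bw = 100e6             # extracted band width (Hz), as in the abstract
decim = int(fs // bw)  # decimation factor -> 100 MHz complex output rate

t = np.arange(2**16) / fs
x = np.random.randn(t.size)                     # stand-in for raw ADC samples
lo = np.exp(-2j * np.pi * f_center * t)         # local oscillator
taps = firwin(129, bw / 2, fs=fs)               # low-pass filter, cutoff bw/2
baseband = lfilter(taps, 1.0, x * lo)[::decim]  # filter, then decimate
```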
The ALICE Experiment at CERN LHC (Large Hadron Collider) is undertaking a major upgrade during LHC Long Shutdown 2 in 2019-2020, which includes a new computing system called O² (Online-Offline). To ensure the efficient operation of the upgraded experiment and of its newly designed computing system, a reliable, high performance, and automated experiment control system is being developed. The...
The Data Acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment at LHC is a complex system responsible for the data readout, event building and recording of accepted events. Its proper functioning plays a critical role in the data-taking efficiency of the CMS experiment. In order to ensure high availability and recover promptly in the event of hardware or software failure of...
The Information Service (IS) is an integral part of the Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The IS allows online publication of operational monitoring data, and it is used by all sub-systems and sub-detectors of the experiment to constantly monitor their hardware and software components including more than 25000...
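Conceptually, such a service lets producers publish named operational-monitoring values that consumers subscribe to. The toy publish/subscribe sketch below illustrates the idea only; it is not the TDAQ IS API, and the metric name is invented.

```python
# Generic publish/subscribe sketch of an information service. Not the ATLAS
# IS API; names and values are illustrative.
from collections import defaultdict

class InfoService:
    def __init__(self):
        self._values = {}
        self._subscribers = defaultdict(list)

    def publish(self, name, value):
        self._values[name] = value
        for callback in self._subscribers[name]:
            callback(name, value)

    def subscribe(self, name, callback):
        self._subscribers[name].append(callback)

svc = InfoService()
svc.subscribe("DAQ.eventRate", lambda n, v: print(f"{n} = {v}"))
svc.publish("DAQ.eventRate", 95_000)   # hypothetical metric
```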
The Belle II experiment features a substantial upgrade of the Belle detector and will operate at the SuperKEKB energy-asymmetric $e^+ e^-$ collider at KEK in Tsukuba, Japan. The accelerator successfully completed the first phase of commissioning in 2016 and the Belle II detector saw its first electron-positron collisions in April 2018. Belle II features a newly designed silicon vertex detector...
The LHCb high level trigger (HLT) is split in two stages. HLT1 is synchronous with collisions delivered by the LHC and writes its output to a local disk buffer, which is asynchronously processed by HLT2. Efficient monitoring of the data being processed by the application is crucial to promptly diagnose detector or software problems. HLT2 consists of approximately 50000 processes and 4000...
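With tens of thousands of processes producing monitoring histograms, the core aggregation step is a sum of identically binned histograms. The minimal numpy illustration below shows that step only; binning and inputs are invented.

```python
# Sum identically binned histograms from many processes into one.
import numpy as np

bins = np.linspace(0.0, 100.0, 51)            # common binning (assumption)
per_process = [np.histogram(np.random.exponential(20.0, 1000), bins=bins)[0]
               for _ in range(8)]             # stand-in for per-process output
aggregated = np.sum(per_process, axis=0)      # element-wise sum across processes
```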
The ALICE Experiment at CERN LHC (Large Hadron Collider) is undertaking a major upgrade during LHC Long Shutdown 2 in 2019-2020. The raw data input from the ALICE detectors will then increase a hundredfold, up to 3.4 TB/s. In order to cope with such a large amount of data, a new Online-Offline computing system, called O2, will...
The ALICE Experiment at CERN LHC (Large Hadron Collider) is undertaking a major upgrade during LHC Long Shutdown 2 in 2019-2020. The raw data input from the detector will then increase a hundredfold, up to 3.4 TB/s. In order to cope with such a large throughput, a new Online-Offline computing system, called O2, will be deployed.
The FLP servers (First Level Processors) are the readout nodes...
We report on performance measurements and optimizations of the event-builder software for the CMS experiment at the CERN Large Hadron Collider (LHC). The CMS event builder collects event fragments from several hundred sources. It assembles them into complete events that are then handed to the High-Level Trigger (HLT) processes running on O(1000) computers. We use a test system with 16...
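The core of event building is collecting one fragment per source for each event and releasing the event once complete. The conceptual Python sketch below illustrates this; it is not the CMS implementation, and the source count is illustrative.

```python
# Conceptual event builder: buffer fragments per event number and release
# complete events. The real system handles hundreds of sources.
from collections import defaultdict

N_SOURCES = 4

class EventBuilder:
    def __init__(self, n_sources):
        self.n_sources = n_sources
        self.pending = defaultdict(dict)   # event_id -> {source_id: fragment}

    def add_fragment(self, event_id, source_id, payload):
        self.pending[event_id][source_id] = payload
        if len(self.pending[event_id]) == self.n_sources:
            return self.pending.pop(event_id)  # complete event, hand to HLT
        return None
```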
The Compressed Baryonic Matter (CBM) experiment is currently under construction at the GSI/FAIR accelerator facility in Darmstadt, Germany. In CBM, all event selection is performed in a large online processing system, the “First-level Event Selector” (FLES). The data are received from the self-triggered detectors at an input-stage computer farm designed for a data rate of 1 TByte/s. The...
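For self-triggered detectors there are no hardware events to build; instead, the free-streaming data are grouped into time intervals for online processing. The sketch below illustrates this idea in the spirit of the FLES timeslice concept; the interval and overlap lengths are invented values.

```python
# Group time-stamped, free-streaming hits into fixed time intervals, with a
# small overlap duplicated into the previous slice so that no analysis window
# is split across slice boundaries. Parameter values are illustrative.
from collections import defaultdict

def build_timeslices(hits, core_ns=1_000_000, overlap_ns=1_000):
    """hits: iterable of (timestamp_ns, payload) pairs."""
    slices = defaultdict(list)
    for ts, payload in hits:
        idx = ts // core_ns
        slices[idx].append((ts, payload))
        if idx > 0 and ts % core_ns < overlap_ns:
            slices[idx - 1].append((ts, payload))  # duplicate into overlap
    return slices
```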
The Belle II experiment is a new-generation B-factory experiment at KEK in Japan aiming at the search for New Physics in a huge sample of B-meson decays. Commissioning of the accelerator and detector for the first physics run started in March this year. The Belle II High Level Trigger (HLT) is fully working in the beam run. The HLT is now operated with 1600 cores clustered in 5...
ALICE (A Large Ion Collider Experiment), one of the large LHC experiments, is currently undergoing a significant upgrade. The increase in data rates planned for LHC Run 3, together with trigger-less continuous readout operation, requires a new type of networking and data processing infrastructure.
The new ALICE O2 (online-offline) computing facility consists of two types of nodes: First Level...
The LHCb experiment will be upgraded in 2021 and a new trigger-less readout system will be implemented. In the upgraded system, both event building (EB) and event selection will be performed in software for every collision produced in every bunch-crossing of the LHC. In order to transport the full data rate of 32 Tb/s we will use state-of-the-art off-the-shelf network technologies, e.g....
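One standard way software event builders use commodity networks at such rates is to schedule the all-to-all fragment exchange so that no destination is oversubscribed, e.g. a round-robin ("barrel shift") pattern. The sketch below shows the schedule only and is not necessarily the approach chosen by LHCb.

```python
# Round-robin ("barrel shift") exchange: in phase p, node i sends its
# fragment to node (i + p) mod N, so each destination receives exactly one
# fragment per phase.
def barrel_shift_schedule(n_nodes):
    return [[(i, (i + phase) % n_nodes) for i in range(n_nodes)]
            for phase in range(n_nodes)]

for phase, pairs in enumerate(barrel_shift_schedule(4)):
    print(f"phase {phase}: {pairs}")
```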
The CMS experiment will be upgraded for operation at the High-Luminosity LHC to maintain and extend its optimal physics performance under extreme pileup conditions. Upgrades will include an entirely new tracking system, supplemented by a track trigger processor capable of providing tracks at Level-1, as well as a high-granularity calorimeter in the endcap region. New front-end and back-end...
Within the FAIR Phase-0 program the fast algorithms of the FLES (First-Level Event Selection) package developed for the CBM experiment (FAIR/GSI, Germany) are adapted for online and offline processing in the STAR experiment (BNL, USA). Using the same algorithms creates a bridge between online and offline. This makes it possible to combine online and offline resources for data...
With the unprecedentedly high luminosity delivered by the LHC, detector readout and data storage constraints severely limit searches for processes with high-rate backgrounds. Examples of such searches are those for mediators of the interactions between the Standard Model and dark matter, decaying to hadronic jets. Traditional signatures and data-taking techniques limit these searches to masses...
The Australian Square Kilometre Array Pathfinder (ASKAP) is a new-generation 36-antenna, 36-beam interferometer capable of producing about 2.5 Gb/s of raw data. The data are streamed from the observatory directly to a dedicated small cluster at the Pawsey HPC centre. The ingest pipeline is distributed real-time software that runs on this cluster and prepares the data for further...
The transverse feedback system in the LHC provides turn-by-turn, bunch-by-bunch measurements of the beam transverse position with sub-micrometer resolution from 16 pickups. This results in 16 high-bandwidth data streams (1 Gbit/s each), which are sent through a digital signal processing chain to calculate the correction kicks that are then applied to the beam. These data streams contain...
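Schematically, each bunch's turn-by-turn position history is passed through a FIR filter whose output sets the correction kick. The toy sketch below illustrates that chain; the coefficients and gain are placeholders, not the actual LHC damper settings.

```python
# Toy per-bunch feedback: FIR-filter the recent turn-by-turn positions and
# scale by a gain to get the correction kick. Coefficients are assumptions.
import numpy as np

taps = np.array([0.5, 0.0, -0.5])   # toy phase-shifting FIR (assumption)
gain = 0.1                          # feedback gain (assumption)

def correction_kick(position_history):
    """position_history: turn-by-turn positions for one bunch, oldest first."""
    recent = np.asarray(position_history[-len(taps):])[::-1]  # newest first
    return -gain * float(np.dot(taps, recent))

print(correction_kick([0.0, 0.1, 0.3, 0.2]))
```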
Development of the second generation JANA2 multi-threaded event processing framework is ongoing through an LDRD initiative grant at Jefferson Lab. The framework is designed to take full advantage of all cores on modern many-core compute nodes. JANA2 efficiently handles both traditional hardware triggered event data and streaming data in online triggerless environments. Development is being...
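The pattern such a framework parallelizes is a pool of worker threads pulling events from a shared queue. The generic Python illustration below shows the concept only; it is not the JANA2 C++ API.

```python
# Generic multi-threaded event processing: workers pull events from a shared
# queue until a sentinel arrives, and push results to an output queue.
import queue
import threading

def worker(events, results, reconstruct):
    while True:
        evt = events.get()
        if evt is None:          # sentinel: shut this worker down
            break
        results.put(reconstruct(evt))

events, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker,
                            args=(events, results, lambda e: e * 2))
           for _ in range(4)]
for w in workers:
    w.start()
for e in range(10):
    events.put(e)
for _ in workers:
    events.put(None)
for w in workers:
    w.join()
print(sorted(results.get() for _ in range(10)))
```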
The detection of long-lived particles (LLPs) in high energy experiments is key both for the study of the Standard Model (SM) of particle physics and for searches for new physics beyond it.
Many interesting decay modes involve strange particles with large lifetimes, such as $K_S$ or $\Lambda^0$. Exotic LLPs are also predicted in many new theoretical models. The selection and reconstruction of LLPs produced...
Measurements involving rare B meson decays by the LHCb and Belle Collaborations have revealed a number of anomalous results. Collectively, these anomalies are generating significant interest in the community, as they may be interpreted as a first sign of new physics in the lepton flavour sector. In 2018, the CMS experiment recorded an unprecedented data set containing the unbiased decays of 10...
The CMS experiment at the LHC features the largest crystal electromagnetic calorimeter (ECAL) ever built. It consists of about 75000 scintillating lead tungstate crystals. The ECAL crystal energy response is fundamental for both triggering purposes and offline analysis. Due to the challenging LHC radiation environment, the response of both crystals and photodetectors to particles evolves with...
The SND is a non-magnetic detector deployed at the VEPP-2000 $e^+e^-$ collider (BINP, Novosibirsk) for hadronic cross-section measurements in the center of mass energy region below 2 GeV. An important part of the detector is a three-layer hodoscopic electromagnetic calorimeter (EMC) based on NaI(Tl) counters. Until the recent EMC spectrometric channel upgrade, only the energy deposition...
The upcoming PANDA experiment is one of the major pillars of the future FAIR accelerator facility in Darmstadt, Germany. With its multipurpose detector and an antiproton beam with a momentum of up to 15 GeV/c, PANDA will be able to test QCD in the intermediate energy regime and shed light on important questions such as: Why is there a matter-antimatter asymmetry in the Universe?
Achieving its...
Events containing muons, electrons or photons in the final state are an important signature for many analyses being carried out at the Large Hadron Collider (LHC), including both Standard Model measurements and searches for new physics. To study such events, an efficient and well-understood trigger system is required. The ATLAS trigger consists of a hardware-based system...
The CMS experiment has been designed with a two-level trigger system: the Level 1 Trigger, implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms running on the available computing resources,...
The CMS experiment at the LHC is designed to study a wide range of high energy physics phenomena. It employs a large all-silicon tracker within a 3.8 T magnetic solenoid, which allows precise measurements of transverse momentum (pT) and vertex position.
This tracking detector will be upgraded to coincide with the installation of the High-Luminosity LHC, which will provide up to about 10^35...
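For reference, the momentum measurement mentioned above follows from the track curvature in the solenoid field: pT [GeV/c] ≈ 0.3 · B [T] · R [m]. A short worked example with a hypothetical radius of curvature:

```python
# pT [GeV/c] ≈ 0.3 * B [T] * R [m] for a charged track in a solenoid field.
B = 3.8            # CMS solenoid field in tesla (from the abstract)
R = 1.0            # hypothetical radius of curvature in metres
print(f"pT ≈ {0.3 * B * R:.2f} GeV/c")   # ≈ 1.14 GeV/c
```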
The Level-0 Muon Trigger system of the ATLAS experiment will undergo a full upgrade for the HL-LHC to meet the challenging performance requirements of the increased instantaneous luminosity. The upgraded trigger system will send RPC raw hit data to the off-detector trigger processors, where the trigger algorithms run on a new generation of Field-Programmable Gate Arrays (FPGAs). The FPGA...
The Deep Underground Neutrino Experiment (DUNE) will be a world-class neutrino observatory and nucleon decay detector aiming to address some of the most fundamental questions in particle physics. With a modular liquid argon time-projection chamber (LArTPC) of 40 kt fiducial mass, the DUNE far detector will be able to reconstruct neutrino interactions with an unprecedented resolution. With no...
Artificial neural networks are becoming a standard tool for data analysis, but their potential remains largely untapped for hardware-level trigger applications. Today's high-end FPGAs, such as those often used in low-level hardware triggers, offer in principle enough performance to allow the inclusion of networks of considerable size in these systems for the first time. This...
Machine learning is becoming ubiquitous across HEP. There is great potential to improve trigger and DAQ performance with it. However, the exploration of such techniques within the field in low-latency, low-power FPGAs has just begun. We present hls4ml, a user-friendly software package based on High-Level Synthesis (HLS), designed to deploy neural-network architectures on FPGAs. As a case study, we use hls4ml for...
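A typical hls4ml workflow converts a trained Keras model into an HLS project that can be synthesized for an FPGA. The sketch below follows the public hls4ml Python interface; exact function names and arguments may vary between versions, and the model file is hypothetical.

```python
import hls4ml
from tensorflow import keras

# Load a trained model; the file name here is hypothetical.
model = keras.models.load_model("trained_model.h5")

# Derive an HLS configuration (precision, reuse factor) from the model.
config = hls4ml.utils.config_from_keras_model(model, granularity="model")

# Convert to an HLS project and compile a bit-accurate C simulation.
hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, output_dir="hls_project")
hls_model.compile()
# hls_model.build()  # full HLS synthesis; requires the FPGA vendor toolchain
```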
Since 2018, several FAIR Phase 0 beamtimes have been operated at GSI, Darmstadt. Here, the challenging new technologies for the upcoming FAIR facility are being tested while various physics experiments are performed with the existing GSI accelerators. One of these challenges concerns the performance, reliability, and scalability of the experiment data storage. A new system for archiving the data...
The ATLAS experiment at CERN has started the construction of upgrades for the "High Luminosity LHC", with collisions due to start in 2026. In order to deliver an order of magnitude more data than previous LHC runs, 14 TeV protons will collide at an instantaneous luminosity of up to 7.5 × 10^34 cm^-2 s^-1, resulting in much higher pileup and data rates than the current experiment was...
The L0TP+ initiative aims at upgrading the FPGA-based Level-0 Trigger Processor (L0TP) of the NA62 experiment at CERN for the post-LS2 data taking, which is expected to happen at 100% of nominal beam intensity. Although tests performed at the end of 2018 showed substantial robustness of the L0TP system even at full beam intensity, suggesting that only a firmware fix was needed, there are several...
To cope with the enhanced luminosity at the Large Hadron Collider (LHC) in 2021, the ATLAS collaboration is planning a major detector upgrade to be installed during Long Shutdown 2 (LS2). As a part of this, the Level-1 trigger, based on calorimeter data, will be upgraded to exploit the fine-granularity readout using a new system of Feature EXtractors (FEX) and a new Topological Processor...
The CMS experiment has been designed with a two-level trigger system: the Level 1 Trigger, implemented on custom-designed electronics, and the High Level Trigger, a streamlined version of the CMS offline reconstruction software running on a computer farm. During its “Phase 2” the LHC will reach a luminosity of 7×10³⁴ cm⁻²s⁻¹ with a pileup of 200 collisions, integrating over 3000 fb⁻¹ over the...
The upgraded High Luminosity LHC, after the third Long Shutdown (LS3), will provide an instantaneous luminosity of 7.5 × 10^34 cm^-2 s^-1 (levelled), with a pileup of up to 200 interactions per bunch crossing. During LS3, the CMS Detector will undergo a major upgrade to prepare for the Phase-2 of the LHC physics program, starting around 2026. The upgraded CMS detector will be read out at an...