Conveners
Track 1: Online Computing: 1.1
- Frank Winklmeier (University of Oregon (US))
Track 1: Online Computing: 1.2
- Gene Van Buren (Brookhaven National Laboratory)
Track 1: Online Computing: 1.3
- Tim Martin (University of Warwick (GB))
Track 1: Online Computing: 1.4
- Simon George (Royal Holloway, University of London)
Track 1: Online Computing: 1.5
- Sylvain Chapeland (CERN)
Track 1: Online Computing: 1.6
- Christian Faerber (CERN)
Track 1: Online Computing: 1.7
- Jason Webb (Brookhaven National Lab)
Alexander Bogdanchikov (Budker Institute of Nuclear Physics (RU)) | 10/10/2016, 11:00 | Track 1: Online Computing | Oral
The SND detector takes data at the e+e- collider VEPP-2000 in Novosibirsk. We present recent upgrades of the SND DAQ system, mainly aimed at handling the increased event rate after the collider modernization. To maintain acceptable event selection quality, the electronics throughput and computational power must be increased. These goals are achieved with the new fast...

10/10/2016, 11:15 | Track 1: Online Computing | Oral
The Cherenkov Telescope Array (CTA) will be the next-generation ground-based gamma-ray observatory. It will be made up of approximately 100 telescopes of three different sizes, from 4 to 23 meters in diameter. The previously presented prototype of a high-speed data acquisition (DAQ) system for CTA (CHEP 2012) has become concrete within the NectarCAM project, one of the most challenging camera...

Imma Riu (IFAE Barcelona (ES)) | 10/10/2016, 11:30 | Track 1: Online Computing | Oral
The LHC will collide protons in the ATLAS detector with increasing luminosity through 2016, placing stringent operational and physical requirements on the ATLAS trigger system in order to reduce the 40 MHz collision rate to a manageable event storage rate of about 1 kHz while not rejecting interesting physics events. The Level-1 trigger is the first rate-reducing step in the ATLAS trigger...

Mikolaj Krzewicki (Johann-Wolfgang-Goethe Univ. (DE)) | 10/10/2016, 11:45 | Track 1: Online Computing | Oral
ALICE HLT Run2 performance overview
M. Krzewicki, for the ALICE collaboration
The ALICE High Level Trigger (HLT) is an online reconstruction and data compression system used in the ALICE experiment at CERN. Unique among the LHC experiments, it makes extensive use of modern coprocessor technologies such as general-purpose graphics processing units (GPGPU) and field-programmable gate arrays (FPGA) in the...

David Rohr (Johann-Wolfgang-Goethe Univ. (DE)) | 10/10/2016, 12:00 | Track 1: Online Computing | Oral
The ALICE HLT uses a data transport framework based on the publisher-subscriber message principle, which transparently handles the communication between processing components over the network and, between processing components on the same node, via shared memory with a zero-copy approach. We present an analysis of the performance in terms of maximum achievable data rates and event rates, as well...

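The zero-copy shared-memory path described above can be sketched in miniature: a publisher writes event payloads into a shared buffer, and a subscriber on the same node reads them in place through a memoryview, with no intermediate copy. This is an illustrative Python sketch only; the class and method names are invented and are not the ALICE HLT framework API.

```python
from multiprocessing import shared_memory

class Publisher:
    """Writes event payloads into a named shared-memory segment."""
    def __init__(self, name, size):
        self.shm = shared_memory.SharedMemory(create=True, name=name, size=size)

    def publish(self, payload: bytes) -> int:
        # Write in place; subscribers on the same node see this buffer directly.
        self.shm.buf[:len(payload)] = payload
        return len(payload)

    def close(self):
        self.shm.close()
        self.shm.unlink()

class Subscriber:
    """Attaches to the publisher's segment and reads without copying."""
    def __init__(self, name):
        self.shm = shared_memory.SharedMemory(name=name)

    def view(self, length) -> memoryview:
        # A memoryview into the shared buffer: the zero-copy read path.
        return self.shm.buf[:length]

    def close(self):
        self.shm.close()
```

A usage sketch: `pub.publish(b"event-0001")` followed by `sub.view(10)` hands the subscriber a window onto the very bytes the publisher wrote, which is the essence of the zero-copy approach; the real framework adds message passing for the cross-network case.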
Gerhard Raven (Nikhef National institute for subatomic physics (NL)) | 10/10/2016, 12:15 | Track 1: Online Computing | Oral
The LHCb software trigger underwent a paradigm shift before the start of Run-II. From being a system that selects events for later offline reconstruction, it can now perform the event analysis in real time and subsequently decide which part of the event information is stored for later analysis. The new strategy is only possible due to a major upgrade during LHC Long Shutdown I (2012-2015)....

John Freeman (Fermi National Accelerator Lab. (US)) | 10/10/2016, 14:00 | Track 1: Online Computing | Oral
For a few years now, the artdaq data acquisition software toolkit has provided numerous experiments with ready-to-use components which allow for rapid development and deployment of DAQ systems. Developed within the Fermilab Scientific Computing Division, artdaq provides data transfer, event building, run control, and event analysis functionality. This latter feature includes built-in...

Remi Mommsen (Fermi National Accelerator Lab. (US)) | 10/10/2016, 14:15 | Track 1: Online Computing | Oral
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GByte/s to the high-level trigger (HLT) farm. The HLT farm selects and classifies interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the...

Mikolaj Krzewicki (Johann-Wolfgang-Goethe Univ. (DE)) | 10/10/2016, 14:30 | Track 1: Online Computing | Oral
Support for Online Calibration in the ALICE HLT Framework
Mikolaj Krzewicki, for the ALICE collaboration
ALICE (A Large Ion Collider Experiment) is one of the four major experiments at the Large Hadron Collider (LHC) at CERN. The High Level Trigger (HLT) is an online compute farm which reconstructs events measured by the ALICE detector in real time. The HLT uses a custom online...

Maurizio Martinelli (Ecole Polytechnique Federale de Lausanne (CH)) | 10/10/2016, 14:45 | Track 1: Online Computing | Oral
LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run 2. Data collected at the start of the fill are processed within a few minutes and used to update the alignment parameters, while the calibration constants are evaluated for each run. This procedure improves the quality of the online reconstruction. For example, the vertex locator is retracted and...

Piotr Karol Oramus (AGH University of Science and Technology (PL)) | 10/10/2016, 15:00 | Track 1: Online Computing | Oral
The exploitation of the full physics potential of the LHC experiments requires fast and efficient processing of the largest possible dataset with the most refined understanding of the detector conditions. To face this challenge, the CMS collaboration has set up an infrastructure for the continuous unattended computation of alignment and calibration constants, allowing for a refined...

10/10/2016, 15:15 | Track 1: Online Computing | Oral
The SuperKEKB $\mathrm{e^{+}\mkern-9mu-\mkern-1mue^{-}}$ collider has now completed its first turns. The planned running luminosity is 40 times higher than its previous record during the KEKB operation. The Belle II detector placed at the interaction point will acquire a data sample 50 times larger than its predecessor's. The monetary and time costs associated with storing and processing...

Tim Martin (University of Warwick (GB)) | 10/10/2016, 15:30 | Track 1: Online Computing | Oral
The ATLAS High Level Trigger farm consists of around 30,000 CPU cores which filter events at up to a 100 kHz input rate. A costing framework is built into the high level trigger; it enables detailed monitoring of the system and allows data-driven predictions to be made using specialist datasets. This talk will present an overview of how ATLAS collects in-situ monitoring data on both...

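Data-driven rate prediction of the kind mentioned above typically works by replaying a weighted sample through the trigger and counting weighted accepts: each recorded event carries a weight (e.g. the inverse of its sampling prescale), and a chain's predicted rate is the weighted fraction of events it accepts, scaled to the input rate. A hypothetical minimal sketch; the event-record layout and function name are invented for illustration and are not the ATLAS costing framework API.

```python
def predict_rate(events, chain, input_rate_hz):
    """Predict a trigger chain's output rate from a weighted dataset.

    events: iterable of dicts with keys
        "weight" - sampling weight of the event (assumed layout)
        "passed" - set of chain names that accepted the event
    chain: name of the chain whose rate we predict
    input_rate_hz: rate of the stream the dataset samples
    """
    total_w = sum(e["weight"] for e in events)
    accepted_w = sum(e["weight"] for e in events if chain in e["passed"])
    # Weighted acceptance fraction, scaled to the input rate.
    return input_rate_hz * accepted_w / total_w
```

For example, if events carrying three quarters of the total weight pass a chain, the chain's predicted rate is three quarters of the input rate; the real framework layers per-algorithm CPU cost on top of this counting.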
Hannes Sakulin (CERN) | 10/10/2016, 15:45 | Track 1: Online Computing | Oral
The Run Control System of the Compact Muon Solenoid (CMS) experiment at CERN is a distributed Java web application running on Apache Tomcat servers. During Run-1 of the LHC, many operational procedures were automated. When detector high voltages are ramped up or down, or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters....

Emilio Meschi (CERN) | 11/10/2016, 11:00 | Track 1: Online Computing | Oral
In Long Shutdown 3 the CMS detector will undergo a major upgrade to prepare for the second phase of the LHC physics program, starting around 2026. The HL-LHC upgrade will bring the instantaneous luminosity up to 5×10^34 cm^-2 s^-1 (levelled), at the price of extreme pileup of 200 interactions per crossing. A new silicon tracker with trigger capabilities and extended coverage, and new high...

Simon George (Royal Holloway, University of London) | 11/10/2016, 11:15 | Track 1: Online Computing | Oral
The ATLAS experiment at CERN is planning a second phase of upgrades to prepare for the "High Luminosity LHC", a 4th major run due to start in 2026. In order to deliver an order of magnitude more data than in previous runs, 14 TeV protons will collide at an instantaneous luminosity of 7.5×10^34 cm^-2 s^-1, resulting in much higher pileup and data rates than the current experiment was designed to...

Soo Ryu (Argonne National Laboratory (US)) | 11/10/2016, 11:30 | Track 1: Online Computing | Oral
After the Phase-I upgrade and onward, the Front-End Link eXchange (FELIX) system will be the interface between the data handling system and the detector front-end and trigger electronics at the ATLAS experiment. FELIX will function as a router between custom serial links and a commodity switch network, which will use standard technologies (Ethernet or InfiniBand) to communicate with...

Filippo Costa (CERN) | 11/10/2016, 11:45 | Track 1: Online Computing | Oral
ALICE, the general-purpose heavy-ion collision detector at the CERN LHC, is designed to study the physics of strongly interacting matter using proton-proton, nucleus-nucleus and proton-nucleus collisions at high energies. The ALICE experiment will be upgraded during Long Shutdown 2 in order to exploit the full scientific potential of the future LHC. The requirements will then be...

Matthias Richter (University of Oslo (NO)) | 11/10/2016, 12:00 | Track 1: Online Computing | Oral
The ALICE Collaboration and the ALICE O$^2$ project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Main aspects of the data handling concept are partial reconstruction of raw data organized in so-called time frames and, based on that information, reduction of the data rate without...

Matteo Manzali (Universita di Ferrara & INFN (IT)) | 11/10/2016, 12:15 | Track 1: Online Computing | Oral
The LHCb experiment will undergo a major upgrade during the second long shutdown (2018 - 2019). The upgrade will concern both the detector and the Data Acquisition (DAQ) system, which will be rebuilt in order to optimally exploit the foreseen higher event rate. The Event Builder (EB) is the key component of the DAQ system, gathering data from the sub-detectors and building up whole events. The EB...

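Event building of the kind described here amounts to collecting fragments keyed by event number until every sub-detector source has contributed, at which point a complete event can be shipped onward. A toy sketch with invented names, carrying none of the networking, buffering, or scale of the real LHCb EB:

```python
from collections import defaultdict

class EventBuilder:
    """Toy event builder: matches fragments by event id across sources."""

    def __init__(self, sources):
        self.sources = set(sources)            # every source must report
        self.pending = defaultdict(dict)       # event_id -> {source: fragment}

    def add_fragment(self, event_id, source, fragment):
        """Register a fragment; return the full event once complete, else None."""
        self.pending[event_id][source] = fragment
        if set(self.pending[event_id]) == self.sources:
            # All sources have contributed: emit and forget this event.
            return self.pending.pop(event_id)
        return None
```

In practice a builder like this also needs timeouts for lost fragments and back-pressure toward the readout, but the matching logic is the core of the component.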
458. Implementation of the ATLAS trigger within the ATLAS MultiThreaded Software Framework AthenaMT
Benjamin Michael Wynne (University of Edinburgh (GB)) | 11/10/2016, 14:00 | Track 1: Online Computing | Oral
We present an implementation of the ATLAS High Level Trigger that provides parallel execution of trigger algorithms within the ATLAS multithreaded software framework, AthenaMT. This development will enable the ATLAS High Level Trigger to meet future challenges due to the evolution of computing hardware and upgrades of the Large Hadron Collider (LHC) and the ATLAS detector. During the LHC...

Benedict Allbrooke (University of Sussex (GB)) | 11/10/2016, 14:15 | Track 1: Online Computing | Oral
The ATLAS experiment at the high-luminosity LHC will face a five-fold increase in the number of interactions per collision relative to the ongoing Run 2. This will require a proportional improvement in rejection power at the earliest levels of the detector trigger system, while preserving good signal efficiency. One critical aspect of this improvement will be the implementation of precise...

Kristian Hahn (Northwestern University (US)), Marco Trovato (Northwestern University (US)) | 11/10/2016, 14:30 | Track 1: Online Computing | Oral
The High Luminosity LHC (HL-LHC) will deliver luminosities of up to 5×10^34 cm^-2 s^-1, with an average of about 140-200 overlapping proton-proton collisions per bunch crossing. These extreme pileup conditions can significantly degrade the ability of trigger systems to cope with the resulting event rates. A key component of the HL-LHC upgrade of the CMS experiment is a Level-1 (L1) track...

Mrs Lucie Flekova (Technical University of Darmstadt) | 11/10/2016, 14:45 | Track 1: Online Computing | Oral
Micropattern gaseous detector (MPGD) technologies, such as GEMs or MicroMegas, are particularly suitable for precision tracking and triggering in high-rate environments. Given their relatively low production costs, MPGDs are an exemplary candidate for the next generation of particle detectors. Having acknowledged these advantages, both the ATLAS and CMS collaborations at the LHC are exploiting...

11/10/2016, 15:00 | Track 1: Online Computing | Oral
The Compressed Baryonic Matter (CBM) experiment is currently under construction at the upcoming FAIR accelerator facility in Darmstadt, Germany. Searching for rare probes, the experiment requires complex online event selection criteria at a high event rate. To achieve this, all event selection is performed in a large online processing farm of several hundred nodes, the "First-level Event...

Dr Tobias Winchen (Vrije Universiteit Brussel) | 11/10/2016, 15:15 | Track 1: Online Computing | Oral
The low flux of ultra-high-energy cosmic rays (UHECR) at the highest energies makes it challenging to answer the long-standing question of their origin and nature. Even lower fluxes of neutrinos with energies above 10^22 eV are predicted in certain Grand Unified Theories (GUTs) and, e.g., models of super-heavy dark matter (SHDM). The significant increase in detector volume required to...

Matteo Manzali (Universita di Ferrara & INFN (IT)) | 12/10/2016, 11:15 | Track 1: Online Computing | Oral
INFN's KM3NeT-Italy project, supported with Italian PON (National Operative Programs) funding, has designed a distributed Cherenkov neutrino telescope for collecting the photons emitted along the path of the charged particles produced in neutrino interactions. The detector consists of 8 vertical structures, called towers, instrumented with a total of 672 Optical Modules (OMs), and its...

Soohyung Lee (Institute for Basic Science) | 12/10/2016, 11:30 | Track 1: Online Computing | Oral
The axion is a dark matter candidate, believed to offer a solution to the strong CP problem in QCD [1]. The CULTASK (CAPP Ultra-Low Temperature Axion Search in Korea) experiment is an axion search being performed at the Center for Axion and Precision Physics Research (CAPP), Institute for Basic Science (IBS) in Korea. Based on Sikivie's method [2], CULTASK uses a resonant cavity...

Dr William Badgett (Fermilab) | 12/10/2016, 11:45 | Track 1: Online Computing | Oral
The LArIAT Liquid Argon Time Projection Chamber (TPC) in a Test Beam experiment explores the interaction of charged particles such as pions, kaons, electrons, muons and protons within the active liquid argon volume of the TPC detector. The LArIAT experiment started data collection at the Fermilab Test Beam Facility (FTBF) in April 2015 and continues to run in 2016. LArIAT provides important...

Tobias Stockmanns (Forschungszentrum Jülich GmbH) | 12/10/2016, 12:00 | Track 1: Online Computing | Oral
One of the large challenges of future particle physics experiments is the trend to run without a first-level hardware trigger. The typical data rates easily exceed hundreds of GBytes/s, which is far too much to be stored permanently for offline analysis. Therefore a strong data reduction has to be achieved by selecting only those data which are physically interesting. This implies that all...

Dmitry Arkhipkin (Brookhaven National Laboratory) | 12/10/2016, 12:15 | Track 1: Online Computing | Oral
One of the integration goals of the STAR experiment's modular Messaging Interface and Reliable Architecture framework (MIRA) is to provide seamless and automatic connections with the existing control systems. After an initial proof of concept and operation of the MIRA system as a parallel data collection system for online use and real-time monitoring, the STAR Software and Computing group is now...

Dr Kenneth Richard Herner (Fermi National Accelerator Laboratory (US)) | 12/10/2016, 12:30 | Track 1: Online Computing | Oral
Gravitational wave (GW) events can have several possible progenitors, including binary black hole mergers, cosmic string cusps, core-collapse supernovae, black hole-neutron star mergers, and neutron star-neutron star mergers. The latter three are expected to produce an electromagnetic signature that would be detectable by optical and infrared telescopes. To that end, the LIGO-Virgo...

Alessandro Lonardo (Universita e INFN, Roma I (IT)) | 12/10/2016, 12:45 | Track 1: Online Computing | Oral
In order to face the LHC luminosity increase planned for the coming years, new high-throughput network mechanisms interfacing the detector readout to the software trigger computing nodes are being developed in several CERN experiments. Adopting many-core computing architectures such as Graphics Processing Units (GPUs) or the Many Integrated Core (MIC) architecture would allow a drastic reduction in the size...

Patricia Conde Muino (LIP Laboratorio de Instrumentacao e Fisica Experimental de Part) | 13/10/2016, 11:00 | Track 1: Online Computing | Oral
General-purpose Graphics Processing Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general-purpose particle physics experiment located on the LHC collider at...

David Rohr (Johann-Wolfgang-Goethe Univ. (DE)) | 13/10/2016, 11:15 | Track 1: Online Computing | Oral
ALICE (A Large Ion Collider Experiment) is one of the four major experiments at the Large Hadron Collider (LHC) at CERN. The High Level Trigger (HLT) is an online compute farm which reconstructs events measured by the ALICE detector in real time. The most compute-intensive part is the reconstruction of particle trajectories, called tracking, and the most important detector for tracking is the...

Mr Felice Pantaleo (CERN - Universität Hamburg) | 13/10/2016, 11:30 | Track 1: Online Computing | Oral
In 2019 the Large Hadron Collider will undergo upgrades in order to increase the luminosity by a factor of two compared to today's nominal luminosity. The current CMS software parallelization strategy schedules one event per thread. However, tracking time grows factorially with pileup, so the current approach leads to increased latency. When designing a HEP...

Alessandro Degano (Universita e INFN Torino (IT)), Felice Pantaleo (CERN - Universität Hamburg) | 13/10/2016, 11:45 | Track 1: Online Computing | Oral
The increase in instantaneous luminosity, number of interactions per bunch crossing and detector granularity will pose an interesting challenge for event reconstruction and the High Level Trigger system in the CMS experiment at the High Luminosity LHC (HL-LHC), as the amount of information to be handled will increase by 2 orders of magnitude. In order to reconstruct the Calorimetric...

Stefano Gallorini (Universita e INFN, Padova (IT)) | 13/10/2016, 12:00 | Track 1: Online Computing | Oral
In view of Run 3 (2020) the LHCb experiment is planning a major upgrade to fully read out events at the 40 MHz collision rate, in order to greatly increase the statistics of the collected samples and go beyond Run 2 in precision. An unprecedented amount of data will be produced, which will be fully reconstructed in real time to perform fast selection and categorization of interesting events....

Daniel Hugo Campora Perez (Universidad de Sevilla (ES)) | 13/10/2016, 12:15 | Track 1: Online Computing | Oral
The 2020 upgrade of the LHCb detector will vastly increase the rate of collisions the Online system needs to process in software in order to filter events in real time. 30 million collisions per second will pass through a selection chain, where each step is executed conditional on its prior acceptance. The Kalman Filter is a fit applied to all reconstructed tracks which, due to its time...

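The Kalman Filter mentioned above is, at its core, an iterative predict/update estimator: each new measurement refines the current state estimate in proportion to the relative uncertainties. A one-dimensional toy version to show the mechanics; the real track fit is multi-dimensional, propagates through a magnetic field, and accounts for material effects.

```python
def kalman_1d(measurements, r, q=0.0, x0=0.0, p0=1.0):
    """One-dimensional Kalman filter for a constant state.

    measurements: sequence of noisy observations of the state
    r:  measurement noise variance
    q:  process noise variance (0 for a truly constant state)
    x0: initial state estimate, p0: its variance
    Returns the final (state, variance) estimate.
    """
    x, p = x0, p0
    for z in measurements:
        p += q                  # predict: uncertainty grows with process noise
        k = p / (p + r)         # Kalman gain: how much to trust the measurement
        x += k * (z - x)        # update: pull state toward the measurement
        p *= (1.0 - k)          # update: uncertainty shrinks after the measurement
    return x, p
```

Starting from `x0=0, p0=1` and feeding three unit measurements with `r=1`, the estimate converges toward 1 while the variance falls with each step, which is exactly the behavior that makes the per-measurement fit expensive at LHC track multiplicities.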
Maxim Borisyak (National Research University Higher School of Economics (HSE) (RU); Yandex School of Data Analysis (RU)) | 13/10/2016, 14:00 | Track 1: Online Computing | Oral
The CRAYFIS experiment proposes to use private mobile phones as a ground-based detector for Ultra-High-Energy Cosmic Rays. Interacting with the Earth's atmosphere, these produce extensive particle showers which can be detected by cameras on mobile phones. A typical shower contains minimally ionizing particles such as muons. As they interact with the CMOS detector they leave low-energy tracks that sometimes...

Christian Faerber (CERN) | 13/10/2016, 14:15 | Track 1: Online Computing | Oral
The LHCb experiment at the LHC will upgrade its detector by 2018/2019 to a 'triggerless' readout scheme, in which all the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector down to the event filter farm to 40 TBit/s, which also has to be processed to...

507. HEP Track Finding with the Micron Automata Processor and Comparison with an FPGA-based Solution
John Freeman (Fermi National Accelerator Lab. (US)) | 13/10/2016, 14:30 | Track 1: Online Computing | Oral
Moore's Law has defied our expectations and remained relevant in the semiconductor industry for the past 50 years, but many believe it is only a matter of time before an insurmountable technical barrier brings about its eventual demise. Many in the computing industry are now developing post-Moore's Law processing solutions based on novel architectures. An example is the Micron...

Heiko Engel (Johann-Wolfgang-Goethe Univ. (DE)) | 13/10/2016, 14:45 | Track 1: Online Computing | Oral
ALICE (A Large Ion Collider Experiment) is a detector system optimized for the study of heavy-ion collisions at the CERN LHC. The ALICE High Level Trigger (HLT) is a computing cluster dedicated to the online reconstruction, analysis and compression of experimental data. The High Level Trigger receives detector data via serial optical links into custom PCI-Express based FPGA...

Simone Stracka (Universita di Pisa & INFN (IT)) | 13/10/2016, 15:00 | Track 1: Online Computing | Oral
The goal of the "INFN-RETINA" R&D project is to develop and implement a parallel computational methodology that makes it possible to reconstruct events with an extremely high number (>100) of charged-particle tracks in pixel and silicon strip detectors at 40 MHz, thus matching the requirements for processing LHC events at the full crossing frequency. Our approach relies on a massively parallel...

Maxim Borisyak (National Research University Higher School of Economics (HSE) (RU); Yandex School of Data Analysis (RU)) | 13/10/2016, 15:15 | Track 1: Online Computing | Oral
High-energy physics experiments rely on reconstruction of the trajectories of particles produced at the interaction point. This is a challenging task, especially in the high track multiplicity environment generated by p-p collisions at LHC energies. A typical event includes hundreds of signal examples (interesting decays) and a significant amount of noise (uninteresting examples). This...