Conveners
Track X – Crossover sessions: Optimisation and acceleration
- Yu Nakahama Higuchi (Nagoya University (JP))
- Teng Jian Khoo (Université de Genève (CH))
Track X – Crossover sessions: Collaborative and common software
- Paul James Laycock (Brookhaven National Laboratory (US))
- Steven Schramm (Université de Genève (CH))
Large-scale particle physics experiments face challenging demands for high-throughput computing resources both now and in the future. New heterogeneous computing paradigms on dedicated hardware with increased parallelization, such as Field Programmable Gate Arrays (FPGAs), offer exciting solutions with large potential gains. The growing applications of machine learning algorithms in particle...
In LHC Run 3, ALICE will significantly increase its data-taking rate to a 50 kHz continuous readout of minimum-bias Pb-Pb collisions. The reconstruction strategy of the online-offline computing upgrade foresees a first synchronous online reconstruction stage during data taking, enabling detector calibration, followed by a calibrated asynchronous reconstruction stage. The significant increase...
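As a rough illustration of this two-stage scheme, the Python sketch below shows the data flow only: the synchronous stage produces both compressed data and calibration objects, which the asynchronous stage later consumes. All names and numbers are placeholders, not actual O2 code.

    # Conceptual sketch of a synchronous/asynchronous reconstruction flow
    # (placeholders only; not the actual ALICE O2 implementation).

    def synchronous_stage(raw_time_frame: bytes) -> tuple[bytes, dict]:
        """Runs during data taking: partial reconstruction, compression,
        and accumulation of calibration constants."""
        compressed = raw_time_frame[: len(raw_time_frame) // 2]   # stand-in for real compression
        calibration = {"tpc_drift_velocity": 2.6}                 # stand-in constant
        return compressed, calibration

    def asynchronous_stage(compressed: bytes, calibration: dict) -> dict:
        """Runs later, with the final calibrations applied."""
        return {"input_bytes": len(compressed), "calibration": calibration}

    if __name__ == "__main__":
        compressed, calib = synchronous_stage(bytes(1024))
        print(asynchronous_stage(compressed, calib))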
In 2021 the LHCb experiment will be upgraded, and the DAQ system will be based on the full reconstruction of events at the full LHC crossing rate. This requires an entirely new system, capable of reading out, building and reconstructing events at an average rate of 30 MHz. To face this challenge, the system could take advantage of fast pre-processing of data on dedicated FPGAs. We present the...
The pattern recognition of charged-particle trajectories is at the core of the computing challenge for the HL-LHC and is currently a very active area of research. There has also been rapid progress in the development of quantum computers, including the D-Wave quantum annealer. In this talk we will discuss results from our project investigating the use of annealing...
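As a rough illustration of how such a pattern-recognition problem can be cast for an annealer, the sketch below (self-contained Python, not the project's actual formulation) encodes candidate track segments as binary variables in a QUBO, rewarding pairs of segments that continue each other smoothly and penalising pairs that merely share a hit, and then minimises the objective with a toy simulated-annealing loop standing in for the quantum hardware. The hit positions, couplings and cuts are all arbitrary.

    import math, random

    # Toy QUBO for track pattern recognition (illustrative sketch only).
    # Each binary variable x_i decides whether candidate segment i (a pair of
    # hits) is kept in the final solution.

    hits = {0: (1, 1), 3: (2, 2), 6: (3, 3),     # hits of one genuine track
            1: (1, 4), 4: (2, 3), 7: (3, 2)}     # hits of a second genuine track

    segments = [(0, 3), (3, 6), (1, 4), (4, 7),  # true segments
                (0, 4), (1, 3)]                  # fake crossing segments

    def direction(seg):
        (x1, y1), (x2, y2) = hits[seg[0]], hits[seg[1]]
        return (x2 - x1, y2 - y1)

    def continues(s, t):
        """Toy criterion: t starts where s ends and both point the same way."""
        if s[1] != t[0]:
            return False
        (ax, ay), (bx, by) = direction(s), direction(t)
        cos = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        return cos > 0.95

    Q = {}
    for i, s in enumerate(segments):
        Q[(i, i)] = 1.0                          # small bias against keeping everything
        for j, t in enumerate(segments):
            if j <= i:
                continue
            if continues(s, t) or continues(t, s):
                Q[(i, j)] = -4.0                 # reward smooth continuations
            elif set(s) & set(t):
                Q[(i, j)] = 8.0                  # penalise conflicting segments

    def energy(x):
        return sum(c * x[i] * x[j] for (i, j), c in Q.items())

    # Plain simulated annealing as a classical stand-in for the quantum annealer.
    x = [random.randint(0, 1) for _ in segments]
    for step in range(5000):
        T = max(0.01, 2.0 * (1 - step / 5000))
        i = random.randrange(len(x))
        trial = list(x)
        trial[i] ^= 1
        dE = energy(trial) - energy(x)
        if dE < 0 or random.random() < math.exp(-dE / T):
            x = trial

    print("kept segments:", [segments[i] for i, keep in enumerate(x) if keep])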
At the HL-LHC, ATLAS and CMS will see proton bunch collisions with track multiplicities of up to 10,000 charged tracks per event. Algorithms need to be developed to cope with the increased combinatorial complexity. To engage the Computer Science community to contribute new ideas, we have organized a Tracking Machine Learning challenge (TrackML). Participants are provided events with 100k 3D...
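The public TrackML dataset is distributed as per-event CSV files; the sketch below (Python/pandas) shows one possible first step for a participant: loading the hit positions and deriving cylindrical coordinates and a per-layer grouping. The file name and column names (hit_id, x, y, z, volume_id, layer_id) are assumptions based on the Kaggle release and should be checked against the actual data.

    import numpy as np
    import pandas as pd

    # Load the hits of one TrackML event and derive cylindrical coordinates,
    # a common starting point for any pattern-recognition attempt.
    # Path and column names are assumptions about the public release.

    hits = pd.read_csv("event000001000-hits.csv")       # hypothetical local path

    hits["r"] = np.hypot(hits["x"], hits["y"])           # transverse radius [mm]
    hits["phi"] = np.arctan2(hits["y"], hits["x"])       # azimuthal angle
    hits["eta"] = np.arcsinh(hits["z"] / hits["r"])      # pseudorapidity

    # Group hits by detector layer, the natural unit for building track seeds.
    layers = hits.groupby(["volume_id", "layer_id"])
    print(f"{len(hits)} hits in {layers.ngroups} layers")
    print(layers["r"].mean().head())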
Future High Energy Physics experiments, based on upgraded or next-generation particle accelerators with higher luminosity and energy, will place more stringent demands on simulation in terms of precision and speed. In particular, matching the statistical uncertainties of the collected experimental data will require the simulation toolkits to be more CPU-efficient, while...
The ALICE experiment at the Large Hadron Collider (LHC) at CERN will deploy a combined online-offline facility for detector readout and reconstruction, as well as data compression. This system is designed to allow the inspection of all collisions at rates of 50 kHz for Pb-Pb and 400 kHz for pp collisions, in order to give access to rare physics signals. The input data rate of up to...
Data acquisition (DAQ) systems are a key component of successful data taking in any experiment. The DAQ is a complex distributed computing system that coordinates all operations, from the selection of interesting events to their delivery to the storage elements.
For the High Luminosity upgrade of the Large Hadron Collider (HL-LHC), the experiments at CERN need to meet challenging requirements to record...
The need for an unbiased analysis of large, complex datasets, especially those collected by the LHC experiments, is pushing towards data acquisition systems where predefined online trigger selections are limited, if not removed altogether. Not only does this pose tremendous challenges for the hardware components, it also calls for new strategies for the online software infrastructure. Open source...
Athena is the software framework used in the ATLAS experiment throughout the data processing path, from the software trigger system through offline event reconstruction to physics analysis. The shift from high-power single-core CPUs to multi-core systems in the computing market means that the throughput capabilities of the framework have become limited by the available memory per process. For...
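The general mechanism that multi-process frameworks exploit to ease this memory limit, forking workers after initialisation so that large read-only data (geometry, conditions, field maps) is shared via copy-on-write, can be illustrated with the small self-contained Python sketch below. It is not ATLAS/Athena code; the array simply stands in for shared conditions data.

    import multiprocessing as mp
    import os
    import numpy as np

    # Copy-on-write illustration: a large read-only structure is allocated once
    # in the parent, then worker processes are forked and read it without
    # duplicating the physical memory (on Linux, as long as the pages stay
    # untouched).  Not ATLAS/Athena code; the array stands in for conditions data.

    conditions = np.random.default_rng(0).random(50_000_000)   # ~400 MB shared payload

    def worker(event_range):
        # The child reads the parent's pages; no per-process copy of `conditions`.
        start, stop = event_range
        return os.getpid(), conditions[start:stop].sum()

    if __name__ == "__main__":
        ctx = mp.get_context("fork")                 # fork start method (POSIX)
        chunks = [(i * 10_000_000, (i + 1) * 10_000_000) for i in range(5)]
        with ctx.Pool(processes=5) as pool:
            for pid, partial in pool.map(worker, chunks):
                print(f"worker {pid}: partial sum {partial:.3f}")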
The reconstruction of the trajectories of charged particles in the tracking detectors of high-energy physics experiments is one of the most difficult and complex tasks of event reconstruction at particle colliders. As pattern recognition algorithms exhibit combinatorial scaling with track multiplicity, they become the largest contributor to the CPU consumption within event reconstruction,...
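The combinatorial scaling mentioned here can be made concrete with a toy seeding loop (Python, purely illustrative): forming all hit triplets across three detector layers grows with the cube of the per-layer occupancy, which is why geometric pruning and efficient data layouts matter so much. The layer radii and the angular cut are arbitrary choices for the sketch.

    import itertools
    import math
    import random
    import time

    # Toy illustration of combinatorial scaling in track seeding: build all hit
    # triplets from three cylindrical layers and keep those roughly compatible
    # with a straight line.  Layer radii and the kink cut are arbitrary.

    random.seed(42)

    def make_layer(radius, n_hits):
        return [(radius * math.cos(p), radius * math.sin(p))
                for p in (random.uniform(-math.pi, math.pi) for _ in range(n_hits))]

    def count_seeds(n_hits_per_layer, max_kink=0.05):
        layers = [make_layer(r, n_hits_per_layer) for r in (30.0, 60.0, 90.0)]
        n_seeds = 0
        for (ax, ay), (bx, by), (cx, cy) in itertools.product(*layers):   # O(n^3)
            kink = math.atan2(cy - by, cx - bx) - math.atan2(by - ay, bx - ax)
            kink = (kink + math.pi) % (2 * math.pi) - math.pi             # wrap to [-pi, pi)
            if abs(kink) < max_kink:
                n_seeds += 1
        return n_seeds

    for n in (50, 100, 200):
        t0 = time.perf_counter()
        seeds = count_seeds(n)
        dt = time.perf_counter() - t0
        print(f"{n:4d} hits/layer: {n**3:>9,d} triplets examined, "
              f"{seeds:4d} seeds kept, {dt:6.3f} s")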
Future HEP experiments require detailed simulation and advanced reconstruction algorithms to explore the physics reach of their proposed machines and to design, optimise, and study the detector geometry and performance. To synergise the development of the CLIC and FCC software efforts, the CERN EP R&D road map proposes the creation of a "Turnkey Software Stack", which is foreseen to provide...