2nd Computing Challenges workshop (COMCHA), A Coruña
The goal of this workshop is to give an overview of the work and plans of the different groups working on the software, trigger and data acquisition systems of high-energy physics experiments, and to establish new synergies. The current status, operation and upgrades of the ATLAS, CMS and LHCb experiments will be discussed, as well as the input to the European Strategy.
1
Welcome
Speaker: Diego Martinez Santos (University of A Coruna - UDC (ES))
11:00
Coffee break
Computing challenges in Calorimetry
2
Versal ACAP processing for ATLAS-TileCal signal reconstruction
Particle detectors at accelerators generate large amounts of data that must be processed to derive physics insights. Collisions lead to signal pile-up, where multiple particles produce signals in the same detector sensors, complicating the identification of individual signals. This contribution describes the implementation of a deep learning algorithm for signal reconstruction on a Versal ACAP device, exploiting parallelization and concurrency. Connected to a host computer via PCIe, the system aims for better speed and energy efficiency than CPUs and GPUs. We describe in detail the data processing and the hardware, firmware and software components of the signal reconstruction system for the ATLAS Tile Calorimeter, which will run in real time in the HL-LHC era, as well as the system for transferring data efficiently. In addition, the system integration tests and results from the beam tests performed at CERN will be presented.
Speaker: Francisco Hervas Alvarez (Univ. of Valencia and CSIC (ES))
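For context, the baseline TileCal reconstruction that such a deep-learning algorithm aims to improve upon is optimal filtering, which estimates the signal amplitude as a weighted sum of the digitized samples. A minimal sketch of the estimator, with generic weights $a_i$ rather than the actual ATLAS coefficients:

$$A = \sum_{i=1}^{n} a_i S_i$$

where the $S_i$ are the $n$ digitized samples and the weights are chosen to minimize the noise contribution. Out-of-time pile-up distorts the pulse shape that this linear estimator assumes, which is what motivates a learned, non-linear reconstruction.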
3
Correction of Bremsstrahlung emissions for electrons at the LHCb experiment
Electrons produced at the LHCb experiment usually have a long way to go before reaching the electromagnetic calorimeter (ECAL). On this journey they traverse many layers of detector material, losing energy through bremsstrahlung emission in the form of photons. When these photons are emitted before the detector's magnet, the electrons that produced them hit a different ECAL region than the photons, because the magnet bends the electrons' trajectories. This poses a big challenge for reconstructing the original energy of electrons, which is crucial for studying a large number of decays at LHCb. Current approaches extrapolate the trajectories of electrons before the magnet in order to match the energy deposits of the electrons with those of their emitted bremsstrahlung photons. Nevertheless, this method is not perfect, so machine learning approaches are being considered to improve the electron energy reconstruction; very preliminary results are presented in this work.
Speaker: Paloma Laguarta González (University of Barcelona (ES))
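A minimal sketch of the extrapolation step described above, assuming the usual straight-line propagation of a pre-magnet track state $(x, y, t_x, t_y)$ to a fixed ECAL plane; the function names, $z$ values and matching window are illustrative, not the LHCb implementation:

```python
# Hedged sketch: straight-line extrapolation of a pre-magnet electron track
# state to the ECAL plane. Bremsstrahlung photons are not bent by the magnet,
# so the pre-magnet direction defines where their deposits are expected.
def expected_brem_position(x, y, tx, ty, z_state, z_ecal=12500.0):
    dz = z_ecal - z_state
    return x + tx * dz, y + ty * dz

def is_brem_candidate(cl_x, cl_y, brem_x, brem_y, window=100.0):
    # Photon clusters inside the window are candidate brem emissions
    # whose energy can be added back to the electron.
    return abs(cl_x - brem_x) < window and abs(cl_y - brem_y) < window
```

The machine learning approaches mentioned above would replace this hard geometric window with a learned matching criterion.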
4
LHCb Upgrade I calorimeter reconstruction
In the LHCb experiment, the calorimeters (ECAL and HCAL) are divided into regions with different dimensions and sensor sizes.
The energy is measured from the signal in clusters of CALO cells. In this talk, a new clusterisation method for the LHCb ECAL is proposed.
Speaker: Alessandra Gioventù (University of Barcelona (ES))
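As an illustration of the kind of cellular clustering involved, a minimal sketch of a seed-based 3x3 cluster sum on a regular grid of cell energies; the threshold, window size and uniform grid are illustrative (the real ECAL has region-dependent cell sizes), and the new method proposed in the talk is not reproduced here:

```python
import numpy as np

# Hedged sketch of seed-based clustering: a cell above threshold that is a
# local maximum seeds a cluster, whose energy is the 3x3 neighbourhood sum.
def cluster_energies(cells, seed_threshold=50.0):
    clusters = []
    for i in range(1, cells.shape[0] - 1):
        for j in range(1, cells.shape[1] - 1):
            window = cells[i - 1:i + 2, j - 1:j + 2]
            if cells[i, j] >= seed_threshold and cells[i, j] == window.max():
                clusters.append(window.sum())
    return clusters
```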
5
Data-driven evaluation of the electron identification performance with LHCb 2024 data
In this work, the electron identification performance and the rate of pions misidentified as electrons are measured with 2024 LHCb data. The detector and the reconstruction have changed significantly for Run 3, so it is important to validate the electron identification performance with the early data. Electron identification is evaluated by the electron reconstruction algorithms in the trigger system. High-statistics, high-purity calibration samples collected in the calibration stream of the high-level trigger are used to evaluate the electron identification performance with the tag-and-probe method. The decay channel chosen for the evaluation of the ID efficiencies is $B \to J/\psi (\to ee) K$, whereas for the misID the decay $D^{*} \to D^0 (\to K\pi) \pi$ is used. The method involves a BDT trained to discriminate signal from combinatorial background, followed by a simultaneous fit to the "all" and "pass" samples to obtain the final result.
Speaker: Pol Vidrier Villalba (University of Barcelona (ES))
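The tag-and-probe estimator underlying the measurement is the ratio of fitted signal yields in the two samples; schematically,

$$\varepsilon_{\mathrm{ID}} = \frac{N^{\mathrm{sig}}_{\mathrm{pass}}}{N^{\mathrm{sig}}_{\mathrm{all}}}, \qquad \sigma_{\varepsilon} \approx \sqrt{\frac{\varepsilon_{\mathrm{ID}}\,(1-\varepsilon_{\mathrm{ID}})}{N^{\mathrm{sig}}_{\mathrm{all}}}}$$

where the binomial error shown is only the leading-order approximation; the simultaneous fit described above propagates the full fit uncertainties.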
13:30
Lunch Break
Session B
6
Development and deployment of Artificial Intelligence algorithms for CTA Telescopes
Imaging Atmospheric Cherenkov Telescopes (IACTs) use combined analog and digital electronics for their trigger systems, implementing simple but fast algorithms. Such trigger techniques are dictated by the extremely high data rates and strict timing requirements. In recent years, in the context of a new camera design based on silicon photomultipliers (SiPMs) for the Large-Sized Telescopes (LSTs) of the Cherenkov Telescope Array (CTA), a fully digital trigger system incorporating Artificial Intelligence (AI) algorithms has been under development. The critical improvement relies on implementing those algorithms in Field Programmable Gate Arrays (FPGAs) to increase the sensitivity and efficiency of real-time decision-making while fulfilling the timing constraints. In addition, building on our prior experience in IACT event reconstruction using Deep Learning (DL), we are applying analogous algorithms to the challenge of reducing the CTA data volume offline.
We are currently developing all the elements of an AI-based IACT trigger system, including a PCB prototype to test multi-gigabit optical transceivers, and are using development boards as an AI-algorithm testbench. We also aim to integrate DL capabilities into the CTA offline analysis pipeline, seeking a more efficient processing chain in both computational and storage terms.
J.A. Barrio, A. Cerviño, J.L. Contreras, M. López, D. Martín, D. Nieto, A. Pérez, L.A. Tejedor
Grupo de Altas Energías, Instituto de Física de Partículas y del Cosmos, Universidad Complutense de Madrid
Speaker: Prof. Juan Abel Barrio (Institute of Particle and Cosmos Physics - Universidad Complutense de Madrid (IPARCOS-UCM))
7
Developments for Real Time Reconstruction of PbPb collisions in LHCb Run3 trigger
The LHCb experiment has proved its huge potential in the field of heavy-ion collisions. However, PbPb collisions produce a high-occupancy regime that is challenging not only at the hardware level but also at the software level. In order to keep the high-quality track and PID reconstruction achieved in pp collisions, some modifications to the LHCb HLT2 trigger reconstruction are needed, especially regarding ghost-track rejection. In addition, limitations in long-track reconstruction appear in high-multiplicity events, so new tracking alternatives are required. In this context, two tasks are being performed:
- Training of neural networks for ghost-track rejection, expected to reduce the ghost-track rate without significant efficiency loss (a schematic sketch follows below)
- Development of a new CPU tracking algorithm that matches upstream tracks to the muon system for dimuon decay reconstruction, usable in trigger lines and for PbPb tracking-efficiency tables
Both tasks are being prepared for the next PbPb data-taking in November 2024 and are expected to significantly improve the performance compared to previous heavy-ion runs.
Speaker: Ivan Cambon Bouzas (Universidade de Santiago de Compostela (ES))
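A minimal sketch of the first task, with logistic regression standing in for the neural networks actually being trained; the features, learning rate and model are illustrative only:

```python
import numpy as np

# Hedged sketch of a ghost-rejection classifier: logistic regression on
# per-track features (e.g. chi2/ndf, number of hits), trained by gradient
# descent on the log loss. Labels y are 1 for ghosts, 0 for real tracks.
def train_ghost_classifier(X, y, lr=0.1, n_steps=2000):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(n_steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted ghost probability
        w -= lr * X.T @ (p - y) / len(y)        # log-loss gradient step
        b -= lr * np.mean(p - y)
    return w, b
```

Tracks whose predicted ghost probability exceeds a chosen working point would be rejected before PID and vertexing.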
8
HyperK reconstruction on GPU
Speaker: Jeremy Peter Dalseno (Universidade de Santiago de Compostela (ES))
16:30
Coffee break
Session I
9
Sustainability of real-time analysis at 5 TB/s data rate
The study of the power consumption and sustainability of LHC trigger systems is imperative in view of the upcoming high-luminosity era of the LHC, which will increase the output data rate beyond several tens of TB/s. In this talk, we will show the work performed at IFIC in the context of the High-Low project, including some of the proposals that can be considered to optimize energy usage, both in the choice of computing architectures and in the efficiency of the algorithms running on them.
Speaker: Volodymyr Svintozelskyi (Univ. of Valencia and CSIC (ES))
10
Allen optimization
This thesis presents a set of optimization efforts within the Allen framework at CERN's LHCb experiment, with a specific focus on increasing throughput and obtaining deterministic behaviour in both the CPU and GPU executions. The key area of development is the set of algorithms working with events containing luminosity data, and their tests. These algorithms were found to be a bottleneck during the March 2024 run of the LHC. The optimizations led to speedups between 8.3x and 29.2x, yielding a full-sequence throughput gain of up to 14% on GPU with the same set-up (sequence and data) in which the issue was first found. Other areas of investigation include the reduction of monitoring overhead and the stability of the CI/CD pipeline tests.
Speaker: Sergio Andres Estrada (University of A Coruna - UDC (ES))
11
Porting MADGRAPH to FPGA using High-Level Synthesis (HLS)
The escalating demand for data processing in particle physics research has spurred the exploration of novel technologies to enhance the efficiency and speed of calculations. This study presents the porting of MADGRAPH, a widely used tool for particle collision simulations, to FPGA using High-Level Synthesis (HLS).
Experimental evaluation is ongoing, but preliminary assessments suggest a promising enhancement in calculation speed compared to traditional CPU implementations. This potential improvement could enable the execution of more complex simulations within shorter time frames.
This study describes the complex process of adapting MADGRAPH to FPGA using HLS, focusing on optimizing algorithms for parallel processing. A key aspect of the FPGA implementation of MADGRAPH is the reduction of power consumption, which has important implications for the scalability of computing centres and for the environment. These advancements could enable faster execution of complex simulations, highlighting the crucial role FPGAs can play in advancing particle physics research while reducing its environmental footprint.
Speaker: Hector Gutierrez Arance (Univ. of Valencia and CSIC (ES))
Trigger strategies at the LHC
11:00
Coffee break
15
KOTO and KOTO II DAQ
Speaker: Dr Chieh Lin (National Changhua University of Education)
Computing challenges in Muon systems
16
Development of the Phase-2 CMS Overlap Muon Track Finder: Advancing Muon Reconstruction with HLS and Graph Neural Networks
The Overlap Muon Track Finder (OMTF) is a key subsystem of the CMS L1 Trigger, and for the CMS phase-2 upgrade during the High-Luminosity Large Hadron Collider era, a new version of the OMTF is being developed. This upgraded version, implemented on a custom ATCA board with a Xilinx UltraScale+ FPGA and 25 Gbps optical transceivers, focuses on improving the muon trigger algorithm and input data pre-processing using High-Level Synthesis (HLS). Furthermore, the potential of Graph Neural Networks (GNNs) is explored to enhance the reconstruction of transverse momentum and position of muons by utilizing the graph structure of the reconstructed stubs from each muon chamber. This aims to improve the accuracy and speed of muon reconstruction while meeting the real-time processing demands of the CMS detector as well as exploring the AI capabilities of the Versal ACAPs. The design, verification results, and experiences in both standard and non-standard HLS workflows, along with a starting point for hardware testing of GNN models on FPGAs, are presented.
Speaker: Pelayo Leguina (Universidad de Oviedo (ES))
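A hedged sketch of the graph construction implied above: stubs become nodes, and edges connect stubs in neighbouring stations within a $\Delta\phi$ window; the window value and the minimal feature set are illustrative, not the OMTF configuration:

```python
# Hedged sketch: build GNN input edges from muon stubs, where each stub is
# (station, phi, bend). Edges link stubs in consecutive stations whose phi
# difference is small enough to come from the same muon.
def build_stub_graph(stubs, max_dphi=0.05):
    edges = []
    for i, (st_i, phi_i, _) in enumerate(stubs):
        for j, (st_j, phi_j, _) in enumerate(stubs):
            if st_j == st_i + 1 and abs(phi_j - phi_i) < max_dphi:
                edges.append((i, j))
    return edges
```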
17
Muon trigger primitive generation with the Drift Tubes detector at CMS for the HL-LHC
In view of the upcoming high-luminosity operation of the LHC (HL-LHC), significant upgrades of the CMS trigger system are foreseen to maintain high physics selectivity, with finer granularity and more robust readout electronics. The present Drift Tube (DT) on-detector electronics will be replaced by new readout boards that perform the time digitisation of the signals inside radiation-tolerant FPGAs, achieving a high level of integration. The digitized signals will be streamed via high-speed optical links to the backend system, which will generate trigger primitives providing a precise reconstruction of the muon's position, direction and collision time. Currently the reconstruction of these primitives uses an analytical solution, which has been implemented both as software and firmware and tested on data and simulation, demonstrating offline-like reconstruction performance at the hardware level. Neural networks are under study for more performant pattern recognition and non-track object reconstruction, profiting from the increased flexibility and computational power of the backend FPGA system.
Speaker: Cristina Martin Perez (CIEMAT)
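Schematically, and assuming a constant drift velocity $v_d$, each DT hit constrains the muon trajectory through

$$x_{\mathrm{hit}} = x_{\mathrm{wire}} \pm v_d\,(t_{\mathrm{meas}} - t_0)$$

so that a straight-line fit across the staggered layers determines the position, direction and common $t_0$ simultaneously, with the $\pm$ laterality ambiguities resolved by testing the compatible combinations. This is a simplified view of the analytical solution, not its firmware implementation.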
18
Triggering on muon showers in the Barrel Muon Trigger of the CMS experiment for the HL-LHC upgrade
The Phase-2 CMS upgrade will replace the trigger and data acquisition system in preparation for the HL-LHC, allowing a maximum accept rate of 750 kHz and a latency of 12.5 µs. To achieve this, new electronics and firmware are being designed, with the expectation of significantly improving the physics reach of the current system.
In this talk we describe the first version of an algorithm capable of detecting and identifying muon showers, running in the first layer of the trigger system. It was designed to be implemented on FPGAs with minimal resource utilization, increasing the robustness of the current algorithm and recovering the efficiency loss that showering events hitting the muon system could introduce.
Speaker: Santiago Folgueras (Universidad de Oviedo (ES))
19
Muon misID evaluation and correction at the LHCb
Particle identification (PID) is crucial for all analyses at LHCb. The PID machinery of the experiment includes both hardware and software resources to distinguish between electrons, kaons, pions, muons and protons. A key part of particle identification is the estimation of the efficiencies of PID selection criteria through data-driven methods, for which the tool PIDCalib (Particle IDentification Calibration) was created. Despite its proven usefulness, this tool also has its flaws. In this talk, I will address the fact that PIDCalib cannot account for decays in flight (DIF) of pions and kaons to muons, which are a major source of muon misidentification (misID). Moreover, I will present the plans for a tool that can evaluate and correct the muon misID.
Speaker: Alejandro Rodriguez Alvarez (University of Barcelona (ES))
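A minimal sketch of how a binned misID probability map could be applied as per-candidate weights to a hadron control sample; the binning, map and interface are illustrative, not the PIDCalib API nor the planned tool:

```python
import numpy as np

# Hedged sketch: look up a misID probability, binned in (p, pT), for each
# control-sample candidate. Summing the returned weights predicts the
# muon-misID yield, including decays in flight if the map includes them.
def misid_weights(p, pt, misid_map, p_edges, pt_edges):
    ip = np.clip(np.digitize(p, p_edges) - 1, 0, len(p_edges) - 2)
    ipt = np.clip(np.digitize(pt, pt_edges) - 1, 0, len(pt_edges) - 2)
    return misid_map[ip, ipt]
```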
13:35
Lunch Break
From High Energy Physics to Industry
20
A path through the trigger - from physics to software engineering
High-energy physics is at the forefront of the transformation of research into a computing-intensive field. This process, already challenging for large collaborations at the LHC, can strain the resources of smaller teams facing technical challenges that require a high level of coding ability.
The eScience center [https://www.esciencecenter.nl/] is an organisation funded by the Dutch government that proposes to solve this issue by creating a large team of software developers specialised in research. In this talk, I will describe how the challenges of the LHCb upgrade molded my career to fit this specific profile, and the landscape of research beyond the traditional paths.
Speaker: Louis Henry (EPFL - Ecole Polytechnique Federale Lausanne (CH))
21
VirtuaLearn3D: A tale of point clouds and synergies
VirtuaLearn3D++: Algorithms from unstructured data spaces. From geography and engineering to high-energy physics.
Finding general solutions to geometric problems has been studied thoroughly since the 19th century. David Hilbert's fundamental contribution, the Nullstellensatz, equipped us with a dictionary between algebra and geometry: any geometry that can be translated into algebra, the fundamental language of algorithms, can be computed. The VirtuaLearn3D++ computational framework provides general algorithmic solutions to any problem whose geometry can be represented as a finite set of points. So far, this framework has been used to solve scientific and industrial problems in geography and engineering, such as leaf-wood segmentation and point-wise classification in urban contexts. The technology has been shown to produce models trained solely on simulations that generalize to unseen real data. Potential applications to high-energy physics, e.g. data acquisition and trigger, will be discussed.
Speaker: Carlos Vazquez Sierra (Universidade de Santiago de Compostela (ES))
22
From HEP to industry: Propylon
Speaker: Dr Marcos Romero Lamas
16:30
Coffee break
European Strategy Discussion
21:30
Workshop dinner
Computing challenges in tracking
23
Downstream tracking at LHCb
In this talk the new "Downstream" algorithm developed at LHCb is reviewed, at both the HLT1 and HLT2 trigger levels. At HLT1, the algorithm is able to reconstruct and select very displaced vertices in real time, making use of the Upstream Tracker (UT) and the Scintillating Fibre detector (SciFi) of LHCb, and is executed on GPUs inside the Allen framework. In addition to an optimized strategy, it uses a neural network (NN) implementation to increase the tracking efficiency and reduce the ghost rate, with very high throughput and a limited time budget. The Downstream algorithm and the associated two-track vertexing will largely increase the LHCb physics potential for detecting long-lived particles during Run 3.
Speaker: Jiahui Zhuo (Univ. of Valencia and CSIC (ES))
24
Faraway tracking at LHCb
One of the main challenges at LHCb is the real-time reconstruction of tracklets in the Scintillating Fibre detector (SciFi), due to the large hit combinatorics in this detector. The new "Faraway" algorithm, which follows an innovative strategy for the reconstruction and vertexing of two SciFi tracks, is presented here, together with its present performance and future prospects. The development of this algorithm could largely increase the potential of LHCb to detect long-lived particles with lifetimes of hundreds of nanoseconds.
Speaker: Volodymyr Svintozelskyi (Univ. of Valencia and CSIC (ES))
25
BuSca: a Buffer Scanner at 30 MHz data rate for New Long-Lived Particle Searches at LHCb
BuSca is a prototype algorithm at LHCb designed for real-time BSM particle searches, focused on downstream reconstructed tracks, detected exclusively by the UT and SciFi detectors. By projecting physics candidates onto 2D histograms of flight distance and mass hypotheses at a 30 MHz rate, BuSca identifies hot spots indicative of potential new-particle candidates, thereby providing strategic guidance for the development of new trigger lines. Additionally, BuSca offers an Armenteros-Podolanski representation, providing insight into the mass hypotheses of the decay products associated with a new particle. The performance of BuSca, including the outcomes of its initial prototype on simulated data, will be presented in this talk.
Speaker: Valerii Kholoimov (Instituto de Física Corpuscular (Univ. of Valencia))
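For reference, the Armenteros-Podolanski variables used in that representation are

$$\alpha = \frac{p_L^{+} - p_L^{-}}{p_L^{+} + p_L^{-}}, \qquad q_T = p^{+} \sin\theta^{+}$$

where $p_L^{\pm}$ are the longitudinal momenta of the two decay products along the mother's flight direction and $q_T$ is their common transverse momentum relative to it; two-body decays of a given mother and daughter mass hypothesis populate characteristic ellipses in the $(\alpha, q_T)$ plane.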
26
Quantum Computing and Tracking
The expected increase in the recorded dataset for future upgrades of the main experiments at the Large Hadron Collider (LHC) at CERN, including the LHCb detector, combined with a limited bandwidth, brings computational challenges that classical computing struggles to solve. Emerging technologies such as Quantum Computing (QC), which exploits the principles of superposition and interference, have great potential to play a major role in solving these issues.
Significant progress has been made in the field of QC applied to particle physics, laying the ground for applications closer to realistic scenarios, especially the reconstruction of charged-particle trajectories in experimental setups like LHCb. This is one of the biggest computational challenges for such an experiment, as it must be performed at a rate of $10^{10}$ tracks per second while maintaining a very high reconstruction efficiency.
In this talk, the application of two of the most well-known QC algorithms to track reconstruction at one of the main LHC experiments, LHCb, will be presented: the Harrow-Hassidim-Lloyd (HHL) algorithm for solving linear systems of equations, and the Quantum Approximate Optimization Algorithm (QAOA), specialized in combinatorial problems. Results from running the algorithms on increasingly complex simulated events will be shown, including actual LHCb simulated samples. Finally, ongoing and future work to run these algorithms efficiently on QC hardware will be discussed.
Speaker: Miriam Lucio Martinez (Nikhef National institute for subatomic physics (NL))
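For orientation, the two algorithms address different problem shapes. HHL prepares a state proportional to the solution of a linear system,

$$A\vec{x} = \vec{b} \;\longrightarrow\; |x\rangle \propto A^{-1}|b\rangle$$

with potential speed-ups for sparse, well-conditioned $A$, while QAOA minimizes a combinatorial cost Hamiltonian $H_C$ with the variational ansatz

$$|\gamma, \beta\rangle = \prod_{k=1}^{p} e^{-i\beta_k H_M}\, e^{-i\gamma_k H_C}\, |+\rangle^{\otimes n}$$

whose angles $(\gamma_k, \beta_k)$ are optimized classically. Schematically, the linear-system form can arise in track fitting and the combinatorial form in hit-to-track assignment.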
11:00
Coffee break
Session 1
27
Differentiable programming for the frontiers of computation: methods and new perspectives
Designing the next generation of colliders and detectors involves solving optimization problems in high-dimensional spaces where the optimal solutions may lie in regions that even a team of expert humans would not explore.
Furthermore, the amount of data we need to generate to study physics in the next runs of the large HEP machines, and that we will need for future colliders, is staggering, requiring a rethink of our simulation and reconstruction paradigms.
Differentiable programming enables the incorporation of domain knowledge, encoded in simulation software, into gradient-based pipelines, making it possible to optimize a given simulation setting and to perform inference in otherwise classically intractable settings.
In this talk I will describe the first proofs-of-concept of gradient-based optimization of experimental design, with a focus on large-scale simulation software, and will briefly touch on implementations in neuromorphic hardware architectures, paving the way to more complex challenges.
Speaker: Dr Pietro Vischia (Universidad de Oviedo and Instituto de Ciencias y Tecnologías Espaciales de Asturias (ICTEA))
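A toy of the gradient-based design loop described above; the closed-form objective stands in for a differentiable simulation, and all names and values are illustrative:

```python
# Hedged toy of gradient-based experimental design: a single design
# parameter theta (say, a layer position) enters a differentiable surrogate
# of the resolution, and we descend its gradient. In a real pipeline the
# gradient comes from automatic differentiation through the simulation.
def resolution(theta):
    return (theta - 3.0) ** 2 + 0.5       # toy objective, optimum at 3.0

def grad_resolution(theta):
    return 2.0 * (theta - 3.0)            # analytic derivative of the toy

theta = 0.0
for _ in range(200):
    theta -= 0.1 * grad_resolution(theta)  # plain gradient descent
print(f"optimized design parameter: {theta:.3f}")  # approaches 3.0
```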
28
Automatic optimization of a Parallel-Plate Avalanche Counter with Optical Readout
We propose an optimization system for a Parallel-Plate Avalanche Counter with Optical Readout designed for heavy-ion tracking and imaging. Exploiting differentiable programming, we model the position reconstruction for different detector configurations and build an optimization cycle that minimizes an objective function. We analyze the performance improvement obtained with this method, exploring the potential of these techniques with the ultimate goal of fully designing a neutron-based tomography system.
Speaker: Ms Maria Pereira Martinez (Universidade de Santiago de Compostela (ES))
29
Technology trends and hardware architectures for HEP computing
In recent years, the incorporation of new hardware architectures at the trigger level has significantly enhanced the potential of LHC experiments. This includes the use of FPGAs and GPUs for real-time fast track reconstruction. In this talk, we will review the key aspects of these advancements, examine current technology trends, and explore the emerging strategies being developed by the high-energy physics community to further increase the data-taking capabilities of LHC experiments.
Speaker: Brij Kishor Jashal (RAL, TIFR and IFIC)
30
Closing remarks
Speaker: Arantza De Oyanguren Campos (Univ. of Valencia and CSIC (ES))