Connecting The Dots 2023

Timezone: Europe/Zurich

Location: Toulouse

Venue: Le Village - Espaces Événementiels, 31 allée Jules Guesde, 31000 Toulouse, France
Description

8th International Connecting The Dots Workshop


The Connecting The Dots workshop series brings together experts on track reconstruction and other problems involving pattern recognition in sparsely sampled data. While the main focus will be on High Energy Physics (HEP) detectors, the Connecting The Dots workshop is intended to be inclusive across other scientific disciplines wherever similar problems or solutions arise. 

The 2023 edition will be hosted in Toulouse (France). It is the 8th in the series after: Berkeley 2015, Vienna 2016, Orsay 2017, Seattle 2018, Valencia 2019, virtual in 2020 and Princeton 2022.

The workshop consists of plenary sessions only, with a mix of invited talks and submitted contributions. There will also be a poster session.

CTD 2023 is organised as an in-person conference and no remote presentation is foreseen. We expect all presenters to register.

The last day, Friday 13 October, is dedicated to a satellite mini-workshop on Real-time Tracking: triggering events with tracks; see the dedicated Indico page. Registration for the mini-workshop is free and independent of the main CTD conference (registering for CTD does not automatically register you for the mini-workshop).


Important dates

Abstract submission: 26 May – 14 July 2023, extended from the original 30 June deadline (the call for abstracts is now closed)
Registration deadlines: early bird 1 September 2023, final deadline 22 September 2023.

Fees
- Standard: 350€
- Early bird (until 01/09/2023): 315€
- Students: 220€

This fee covers local support, morning and afternoon coffee breaks, lunches, the welcome reception and workshop dinner.

 

 

Participants
  • Alex Gekow
  • Alexander J Pfleger
  • Alexis Vallier
  • Alina Lazar
  • Andreas Salzburger
  • Andrew George Morris
  • Anthony Correia
  • Arantza Oyanguren
  • Brij Kishor Jashal
  • Catherine Biscarat
  • Christian Wessel
  • Christophe Collard
  • Daniel Thomas Murnane
  • David Lange
  • David Rousseau
  • Erica Brondolin
  • Fabrizio Alfonsi
  • Florencia Luciana Castillo
  • Fotis Giasemis
  • Francesco Terzuoli
  • Giuseppe Cerati
  • Hadrien Benjamin Grasland
  • Heberth Torres
  • Jan Stark
  • Javier Prado Pico
  • Jeremy Couthures
  • Jonathan Guiang
  • Jonathan Long
  • Ke Li
  • Kilian Lieret
  • Kunihiro Nagano
  • Layan Mousa Salem AlSarayra
  • Louis Henry
  • Louis-Guillaume Gagnon
  • Luis Falda Coelho
  • Luise Meyer-Hetling
  • Marcin Jastrzebski
  • Mark Nicholas Matthewman
  • Markus Elsing
  • Matthew Stortini
  • Michael J. Morello
  • Michel De Cian
  • Mine Gokcen
  • Minh-Tuan Pham
  • Nabil Garroum
  • Noemi Calace
  • Paolo Calafiura
  • Paul Gessinger
  • Peter Stratmann
  • Philipp Zehetner
  • Qiyu Sha
  • Sachin Gupta
  • Salvador Marti I Garcia
  • Sebastian Dittmeier
  • Sylvain Caillou
  • Vadim Kostyukhin
  • Vladimir Gligorov
  • Xiangyang Ju
  • +14
Timetable
    • 09:00 – 10:20
      Plenary Auditorium (Le Village)

      Convener: Alexis Vallier (L2I Toulouse, CNRS/IN2P3, UT3)
      • 09:00
        Welcome from L2IT Director 5m
        Speaker: Jan Stark (Laboratoire des 2 Infinis - Toulouse, CNRS / Univ. Paul Sabatier (FR))
      • 09:05
        Welcome & Introduction 15m
        Speaker: Alexis Vallier (L2I Toulouse, CNRS/IN2P3, UT3)
      • 09:20
        High Pileup Particle Tracking with Object Condensation 25m

        Recent work has demonstrated that graph neural networks (GNNs) can match the performance of traditional algorithms for charged particle tracking while improving scalability to meet the computing challenges posed by the HL-LHC. Most GNN tracking algorithms are based on edge classification and identify tracks as connected components from an initial graph containing spurious connections. In this talk, we consider an alternative based on object condensation (OC), a multi-objective learning framework designed to cluster points (hits) belonging to an arbitrary number of objects (tracks) and regress the properties of each object. Building on our previous results, we present a streamlined model and show progress toward a one-shot OC tracking algorithm in a high-pileup environment.

        Speakers: Gage DeZoort (Princeton University (US)), Kilian Lieret (Princeton University)
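
        For readers new to the technique, the following is a minimal sketch of an object-condensation-style loss, following its general published form rather than the speakers' implementation; the shapes, the q_min floor and the unit repulsion radius are illustrative.

        ```python
        import torch

        def condensation_loss(beta, coords, labels, q_min=0.1):
            # beta:   (N,) per-hit condensation strength in (0, 1)
            # coords: (N, D) learned clustering coordinates
            # labels: (N,) integer track id per hit, -1 for noise
            q = torch.atanh(beta) ** 2 + q_min      # per-hit "charge"
            loss_att = beta.new_zeros(())
            loss_rep = beta.new_zeros(())
            track_ids = [k for k in labels.unique().tolist() if k >= 0]
            for k in track_ids:
                mask = labels == k
                alpha = (beta * mask).argmax()      # condensation point: highest-beta hit of track k
                d = (coords - coords[alpha]).norm(dim=1)
                # attract same-track hits to the condensation point ...
                loss_att = loss_att + q[alpha] * (q[mask] * d[mask] ** 2).mean()
                # ... and repel hits of other tracks within a unit radius
                loss_rep = loss_rep + q[alpha] * (q[~mask] * (1.0 - d[~mask]).clamp(min=0.0)).mean()
            # the full loss adds a term driving one beta towards 1 per track and towards 0 for noise
            return (loss_att + loss_rep) / max(len(track_ids), 1)
        ```
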
      • 09:50
        Track Finding-and-Fitting with Influencer Object Condensation 25m

        ML-based track finding algorithms have emerged as competitive alternatives to traditional track reconstruction methods. However, a major challenge lies in simultaneously finding and fitting tracks within a single pass. These two tasks often require different architectures and loss functions, leading to potential misalignment. Consequently, achieving stable convergence becomes challenging when incorporating both finding and fitting in a multi-task loss framework.

        To address this issue, we propose to use a solution called object condensation, which aims to find a representative point for track building while serving as the target for parameter regression. Specifically, we leverage the recently-introduced Influencer approach, where each hit can act as both a representative and a representee, which has been shown to allow robust track building. In this work, we present the results obtained by utilizing the Influencer model for the combined track finding-and-fitting task. We evaluate the performance benefits of treating fitting as an auxiliary task to enhance track finding and compare the physics performance and resource utilization against the typical sequential finding-then-fitting pipeline.

        Speaker: Daniel Thomas Murnane (Lawrence Berkeley National Lab. (US))
    • 10:20 – 10:30
      Poster Flash Talk Auditorium (Le Village)

      Convener: Alexis Vallier (L2I Toulouse, CNRS/IN2P3, UT3)
      • 10:20
        Flash Talk: Graph Neural Network-based Tracking as a Service 3m

        Recent studies have shown promising results for track finding in dense environments using Graph Neural Network (GNN)-based algorithms. These algorithms not only provide high track efficiency but also offer reasonable track resolutions. However, GNN-based track finding is computationally slow on CPUs, necessitating the use of coprocessors like GPUs to accelerate the inference time. Additionally, due to the substantial graph size typically involved (consisting of approximately 300k nodes and 1M edges), significant GPU memory is required to ensure efficient computation. Not all computing facilities used for particle physics experiments are equipped with high-end GPUs such as NVIDIA A100s or V100s, which can meet the computational requirements. These computing challenges must be addressed in order to deploy GNN-based track finding into production. We propose addressing these challenges by establishing the GNN-based track finding algorithm as a service hosted either in the cloud or high-performance computing centers.

        In this talk, we will describe the implementation of the GNN-based track finding workflow as a service using the NVIDIA Triton Inference Server. The pipeline contains three discrete deep-learning models and two CUDA-based algorithms. Because of the heterogeneity in the workflow, we explore different server configurations to maximize the throughput of track finding and the GPU utilization. We also study the scalability of the inference server using the Perlmutter supercomputer at NERSC and cloud resources like AWS and Google Cloud.

        Speaker: Xiangyang Ju (Lawrence Berkeley National Lab. (US))
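
        For orientation, a Triton client request for such a pipeline looks roughly like the sketch below; the model name and tensor names are hypothetical stand-ins, not the actual configuration.

        ```python
        import numpy as np
        import tritonclient.grpc as grpcclient

        client = grpcclient.InferenceServerClient(url="localhost:8001")

        # one event's space points; "FEATURES", "TRACK_LABELS" and
        # "gnn_tracking" are illustrative names
        hits = np.random.rand(300_000, 3).astype(np.float32)
        inp = grpcclient.InferInput("FEATURES", list(hits.shape), "FP32")
        inp.set_data_from_numpy(hits)
        out = grpcclient.InferRequestedOutput("TRACK_LABELS")

        result = client.infer(model_name="gnn_tracking", inputs=[inp], outputs=[out])
        labels = result.as_numpy("TRACK_LABELS")   # per-hit track-candidate ids
        ```

        Server-side, the three models and two CUDA algorithms would presumably be chained with Triton's ensemble or scripting facilities, so that a client sends hits once and receives track candidates back.
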
      • 10:23
        Flash Talk: Improvement of event-building for data-driven hybrid pixel detector data 3m

        Hybrid pixel detectors like Timepix3 and Timepix4 detect individual pixels hit by particles. For further analysis, individual hits from such sensors need to be grouped into spatially and temporally coinciding groups called clusters. While state-of-the-art Timepix3 detectors generate up to 80 million hits per second, the next generation, Timepix4, will provide data rates of up to 640 million hits per second (a data bandwidth of up to 164 Gbps), far beyond the current capabilities of real-time clustering algorithms, which process roughly 3 MHits/s. We explore options for accelerating the clustering process, focusing on its real-time application. We developed a tool that utilizes multicore CPUs to speed up the clustering. Despite the interdependence of different data subsets, we achieve a speed-up that scales with the number of cores used. Further, we exploited options to reduce the computational demands of the clustering by determining radiation field parameters from raw (unclustered) data features and self-initiating further clustering if these data show signs of interesting events. This further accelerates the clustering while also reducing storage space requirements. The proposed methods were validated and benchmarked using real-world and simulated datasets.

        Speaker: Tomas Celko (Czech Technical University in Prague (CZ))
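
        A toy version of the event-building step helps fix ideas: hits sorted by time of arrival are merged with a union-find whenever they are pixel neighbours arriving close together in time. The window and neighbourhood rule below are illustrative, not the authors' parameters.

        ```python
        def cluster_hits(x, y, toa, t_window=200.0):
            # x, y: pixel coordinates; toa: time of arrival (ns), sorted ascending
            parent = list(range(len(x)))

            def find(i):                      # union-find with path compression
                while parent[i] != i:
                    parent[i] = parent[parent[i]]
                    i = parent[i]
                return i

            for i in range(len(x)):
                j = i + 1
                # only compare hits within the time window of hit i
                while j < len(x) and toa[j] - toa[i] <= t_window:
                    if abs(x[i] - x[j]) <= 1 and abs(y[i] - y[j]) <= 1:
                        parent[find(i)] = find(j)     # 8-neighbour pixels: same cluster
                    j += 1
            return [find(i) for i in range(len(x))]   # cluster label per hit
        ```

        The data interdependence mentioned above is visible here: a cluster may straddle any fixed time split, so naive parallelisation over time slices needs care, e.g. cutting only at quiet gaps in the hit stream.
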
      • 10:26
        Flash Talk: Seeding with Machine Learning in ACTS 3m

        To prepare for the High-Luminosity phase of the Large Hadron Collider at CERN (HL-LHC), the ATLAS experiment is replacing its innermost components with a full-silicon tracker (ITk) to improve the spatial resolution of track measurements and increase the data readout rate. However, this upgrade alone will not be sufficient to cope with the tremendous increase in luminosity, and significant improvements have to be incorporated into the existing tracking software to keep the required computing resources at a realistic level.

        In this poster, we focus on track seed reconstruction within the ITk detector and explore the possibility of using hashing techniques to improve the seed reconstruction efficiency, limit the combinatorics and ultimately reduce the computing time. Metric learning is then used to tune our algorithm for the different regions of the detector and to increase the robustness against time-dependent detector conditions.

        The code developments are done within the ACTS framework, an experiment-independent toolkit for charged-particle track reconstruction.

        Speaker: Jeremy Couthures (Centre National de la Recherche Scientifique (FR))
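
        The hashing idea can be illustrated with random-hyperplane locality-sensitive hashing: space points that share a hash bucket become candidate seed companions, so the combinatorics drop from all pairs to within-bucket pairs. In the approach above the embedding is learned via metric learning; plain coordinates stand in for it in this sketch.

        ```python
        import numpy as np

        def lsh_buckets(points, n_planes=12, seed=0):
            # points: (N, D) space-point coordinates (or a learned embedding)
            rng = np.random.default_rng(seed)
            planes = rng.normal(size=(n_planes, points.shape[1]))
            bits = points @ planes.T > 0                       # one sign bit per hyperplane
            keys = (bits * (1 << np.arange(n_planes))).sum(axis=1)
            buckets = {}
            for idx, key in enumerate(keys):
                buckets.setdefault(int(key), []).append(idx)   # nearby points tend to collide
            return buckets
        ```
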
    • 10:30 – 11:00
      Coffee Break 30m Place du village (Le Village)

    • 11:00 – 12:30
      Plenary Auditorium (Le Village)

      Convener: Andreas Salzburger (CERN)
      • 11:00
        CEPC tracking performance with ACTS 25m

        The Circular Electron Positron Collider (CEPC) is a physics program proposal with the goal of providing high-accuracy measurements of properties of the Higgs, W and Z bosons, and exploring new physics beyond the SM (BSM). The CEPC is also an excellent facility to perform precise tests of the theory of the strong interaction.
        To deliver these physics programs, the CEPC detector concepts must meet stringent performance requirements. The majority of the visible particles at the CEPC are charged particles, whose multiplicity can be as high as 100. An efficient separation of these particles provides a solid basis for the reconstruction and identification of physics objects, i.e. the high-level objects such as leptons, photons and jets that are input to physics analyses. Therefore, the CEPC detector should have excellent track finding efficiency and track momentum resolution. For example, for tracks within the detector acceptance and with transverse momenta larger than 1 GeV, a track finding efficiency better than 99% is required.

        The A Common Tracking Software (ACTS) project aims to provide open-source, experiment- and framework-independent software designed for modern computing architectures, based on the tracking experience at the LHC. It provides a set of performant, high-level track reconstruction tools which are agnostic to the details of the detection technologies and magnetic field configuration, and which are tested for strict thread-safety to support multi-threaded event processing. ACTS has been used as a tracking toolkit at experiments such as ATLAS, sPHENIX, ALICE and STCF, and has shown very promising tracking performance, both in terms of physics and of processing time. In particular, an implementation of ACTS for STCF, the first application of ACTS to a drift chamber, has recently been completed and achieves promising performance.

        In this talk, we will report on the development of the CEPC track reconstruction software, based on the detection information from the silicon trackers and the main drift chamber, using the Kalman-filter-based track finding and fitting algorithms of ACTS. The tracking performance for a tracking system with a drift chamber and a track multiplicity of 100 (much higher than that at STCF) will be presented.

        Speaker: Xiaocong Ai (Zhengzhou University)
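
        For orientation, the core operation of the Kalman-filter track finding and fitting referred to above is the classic predict-update step; below is a dimension-agnostic numpy sketch, with all matrices illustrative.

        ```python
        import numpy as np

        def kf_step(x, P, F, Q, H, R, m):
            # x: state (e.g. position, direction, q/p); P: its covariance
            # F: transport Jacobian; Q: process noise (material effects)
            # H: projection to measurement space; R, m: measurement covariance and value
            x_pred = F @ x                            # transport state to the next surface
            P_pred = F @ P @ F.T + Q
            S = H @ P_pred @ H.T + R                  # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
            x_new = x_pred + K @ (m - H @ x_pred)     # pull the prediction towards the hit
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new
        ```

        In the combinatorial variant used for track finding, this update is tried against every compatible measurement on a surface and the best-scoring branches are kept.
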
      • 11:30
        k4Clue: the CLUE Algorithm for Future Collider Experiments 25m

        CLUE is a fast and innovative density-based clustering algorithm that groups the digitized energy deposits (hits) left by a particle traversing the active sensors of a high-granularity calorimeter into clusters with a well-defined seed hit. It was developed in the context of the new high-granularity sampling calorimeter (HGCAL) which will be installed in the forward region of the Compact Muon Solenoid (CMS) experiment as part of its HL-LHC upgrade. Its outstanding performance in terms of efficiency and computing time has been proven in the context of the CMS Phase-2 upgrade using both simulated and test-beam data.

        Initially, CLUE was developed in a standalone repository to allow performance benchmarking of its CPU and GPU implementations, demonstrating the power of algorithmic parallelization in the coming era of heterogeneous computing. In recent years, CLUE's capabilities outside CMS, and more specifically at experiments at future colliders, were tested by adapting it to run in the Turnkey Software Stack (key4hep) framework. The new package, k4Clue, is now fully integrated into the Gaudi software framework and supports the EDM4hep data format for inputs and outputs.

        This contribution will start from the state of the art of CLUE in the CMS software reconstruction context, and then describe the enhancements needed for the algorithm to run on several detector geometries and on both the barrel and the forward regions of a detector. Preliminary performance will also be presented for several types of high-granularity calorimeters proposed at linear and circular e+e− colliders.

        Speaker: Erica Brondolin (CERN)
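
        The essence of CLUE is a two-quantity decision per hit: a local energy density, and the distance to the nearest hit of higher density. Below is a compact O(N²) sketch; the production code tiles the detector instead of building full distance matrices, and the cut values here are illustrative.

        ```python
        import numpy as np

        def clue_like(points, energy, dc=1.0, rho_c=5.0, delta_c=2.0):
            n = len(points)
            d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
            rho = (energy[None, :] * (d < dc)).sum(axis=1)     # local density within dc
            nh = np.full(n, -1)                                # "nearest higher" per hit
            delta = np.full(n, np.inf)
            for i in range(n):
                higher = np.where(rho > rho[i])[0]
                if len(higher):
                    nh[i] = higher[np.argmin(d[i, higher])]
                    delta[i] = d[i, nh[i]]
            labels = np.full(n, -1)
            seeds = np.where((rho > rho_c) & (delta > delta_c))[0]
            labels[seeds] = np.arange(len(seeds))              # dense, isolated hits seed clusters
            for i in np.argsort(-rho):                         # then followers inherit top-down
                if labels[i] < 0 and nh[i] >= 0:
                    labels[i] = labels[nh[i]]
            return labels                                      # -1 = outlier/noise
        ```
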
      • 12:00
        An application of HEP track reconstruction methods to Gaia EDR3 25m

        The utilization of machine learning for pattern recognition and track reconstruction in HEP sets promising precedents of how novel data-science tools can aid researchers in other fields of physics and astronomy in conducting statistical inference on large datasets. We present our work in progress on applying fast nearest-neighbor search (kNN) to Gaia EDR3, one of the most extensive catalogs of astronomical objects and their properties, including their position and motion in the sky. Mapping the positions of stars that are gravitationally bound to the Milky Way (MW) but did not originate in our galaxy could reveal crucial insights about its dark matter halo, which played a fundamental role in its formation. The motions of such stars are modeled differently from those of MW stars, which allows us to track them across the galaxy. "Tracking" in this context amounts to connecting stars that have a common origin based on their position and motion. The most literal analogy to HEP tracking is given by stellar streams, populations of stars that follow a distinct path in the sky, almost like a particle track, but this is not the only possibility. Parallel to the seeding stage of track reconstruction inside colliders, our method identifies potential regions of the galactic halo where stars from different populations may reside, based on their average angular motions over time. This enables us to generically identify any astronomical structure or clustering among stars with similar kinematics that stands out from a group of background objects, so our method is not limited to the identification of formally defined star clusters. We will present examples of known star clusters that our method successfully located and discuss the accuracy of their characterization, as well as the robustness of our algorithm under various algorithmic choices.

        Speaker: Mine Gokcen
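
        The kNN machinery itself is standard; what matters is the choice and scaling of the kinematic coordinates. Below is a sketch with a k-d tree over stand-in columns; the real analysis uses Gaia EDR3 columns such as positions and proper motions, and the scaling and cuts here are illustrative.

        ```python
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(0)
        stars = rng.normal(size=(10_000, 5))      # stand-in for (ra, dec, parallax, pmra, pmdec)
        stars = (stars - stars.mean(axis=0)) / stars.std(axis=0)   # common scale per column

        tree = cKDTree(stars)
        dist, idx = tree.query(stars, k=16)       # 15 nearest neighbours (plus the star itself)

        # a tight kinematic neighbourhood flags candidate co-moving structure
        density_proxy = 1.0 / dist[:, 1:].mean(axis=1)
        candidates = np.argsort(density_proxy)[-100:]   # the 100 most clustered stars
        ```
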
    • 12:30 – 14:00
      Lunch Break 1h 30m Place du village (Le Village)

    • 14:00 – 15:35
      Plenary Auditorium (Le Village)

      Convener: Vladimir Gligorov (Centre National de la Recherche Scientifique (FR))
      • 14:00
        Expected tracking performance of the ATLAS Inner Tracker Upgrade for Phase-II 30m

        With its increased number of proton-proton collisions per bunch crossing, track reconstruction at the High-Luminosity Large Hadron Collider (HL-LHC) is a complex endeavor. The Inner Tracker (ITk) is a silicon-only replacement of the current ATLAS Inner Detector as part of its Phase-II upgrade.
        It is specifically designed to handle the challenging conditions at the HL-LHC, resulting from greatly increased pile-up.

        On the path towards the increased luminosity starting in LHC Run 4, the critical milestone of unifying the ITk and LHC Run 3 reconstruction software releases has been completed. This allows deployment of the software-level improvements added for LHC Run 3. At the same time, improvements to the simulated description of the detector construction, readout and reconstruction of ITk have been implemented.

        With the state-of-the-art engineering description of ITk, the performance of the detector can be evaluated, leveraging the aforementioned improvements in simulation and reconstruction. This contribution will report on the updated performance of ITk tracking at high luminosities, which is increasingly representative of the actual expected reconstruction in LHC Run 4.

        At the same time, the ATLAS upgrade effort also includes a comprehensive software upgrade programme, whose goal is not only to achieve the ultimate physics performance, but also to modernise the software technology, to make the best use of upcoming and future processing technologies, and to ensure maintainability throughout the operation of the experiment. In order to achieve these objectives, the ATLAS Collaboration has decided to extensively use ACTS for the Phase-II reconstruction software.

        In this contribution, the current status of the ACTS integration for the ATLAS ITk track reconstruction is presented, with emphasis on the improvements of the track reconstruction software and the implementation of an ATLAS Phase-II EDM, interfaced with the ATLAS xAOD IO infrastructure.

        Speaker: Paul Gessinger (CERN)
      • 14:35
        Track reconstruction with mkFit and developments towards HL-LHC 25m

        MkFit is a Kalman filter-based track reconstruction algorithm that uses both thread- and data-level parallelism. It has been deployed in the Run-3 offline workflow of the CMS experiment. The CMS tracking performs a series of iterations to reconstruct tracks of increasing difficulty. MkFit has been adopted for several of these iterations, which contribute to the majority of reconstructed tracks. When tested in the standard conditions for production jobs, MkFit has been shown to speed up track pattern recognition by an average of 3.5x. This speedup is due to a number of factors, including vectorization, a lightweight geometry description, improved memory management, and single precision. Efficient vectorization is achieved with several compilers and relies on a dedicated library for small matrix operations, Matriplex, which has recently been released in a public repository. The mkFit geometry and material description has been generalized to support the Phase-2 upgraded tracker geometry for the HL-LHC and potentially other detector configurations. The implementation strategy and preliminary results with the HL-LHC geometry are presented. Speedups in track building from mkFit imply that track fitting becomes a comparably time consuming step of the tracking chain. Prospects for an mkFit implementation of the track fit are also discussed.

        Speaker: Slava Krutelyov (Univ. of California San Diego (US))
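
        The Matriplex layout can be mimicked in numpy: a batch of small matrices is stored in one contiguous array so that a single call processes all tracks at once, which is essentially what the SIMD-friendly interleaved layout achieves in compiled code. Sizes are illustrative, with single precision as in mkFit.

        ```python
        import numpy as np

        N = 16                                               # tracks per batch (vector width)
        F = np.tile(np.eye(6, dtype=np.float32), (N, 1, 1))  # N propagation Jacobians
        P = np.tile(np.eye(6, dtype=np.float32), (N, 1, 1))  # N track covariances

        # similarity transform F P F^T for all N tracks in one vectorised call,
        # instead of a loop over individual 6x6 matrices
        P_next = np.einsum('nij,njk,nlk->nil', F, P, F)
        ```
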
      • 15:05
        End-to-end Particle-flow Reconstruction Algorithm for Highly Granular Calorimeters 25m

        We present an end-to-end particle-flow reconstruction algorithm for highly granular calorimeters. Starting from calorimeter hits and reconstructed tracks, the algorithm filters noise, separates showers, regresses their energy, provides an energy uncertainty estimate, and predicts the type of particle. The algorithm is trained on data from a simulated detector that matches the complexity of the CMS high-granularity calorimeter (HGCAL), for which it can be retrained in the future. Detector hits and reconstructed tracks are embedded in a dynamic graph. Information is then exchanged between neighbouring graph nodes, weighted by their respective distance in a low-dimensional latent space. The network is trained using the object condensation loss, a graph segmentation technique that allows clustering an arbitrary number of showers in every event while simultaneously performing regression and classification tasks. We discuss the network's performance in terms of its shower reconstruction efficiency, its energy resolution and uncertainty estimate, as well as the accuracy of its particle identification. Additionally, we discuss the model's jet reconstruction performance and evaluate its computational efficiency. To our knowledge, this is the first implementation of an end-to-end particle-flow reconstruction algorithm aimed at highly granular calorimeters.

        Speaker: Mr Philipp Zehetner (Ludwig Maximilians Universitat (DE))
    • 15:35 – 15:45
      Poster Flash Talk Auditorium (Le Village)

      Convener: Alexis Vallier (L2I Toulouse, CNRS/IN2P3, UT3)
      • 15:35
        Flash Talk: The Layer-1 Barrel Muon Filter for the Level-1 muon trigger upgrade of the CMS experiment at the HL-LHC 3m

        In view of the HL-LHC, the Phase-2 CMS upgrade will replace the entire trigger and data acquisition system. The detector readout electronics will be upgraded to allow a maximum L1 accept rate of 750 kHz and a latency of 12.5 µs. The muon trigger is a multi-layer system designed to reconstruct muon stubs in each muon station, and then to measure the momenta of the muons by correlating information across muon chambers in the so-called muon track finders and by matching the reconstructed stubs with the L1 tracker tracks sent by the track trigger. This is achieved with sophisticated pattern recognition algorithms that run on Virtex UltraScale+ FPGA processors. The Layer-1 Barrel Muon Filter is the second layer of this system; it concentrates the stubs and hits from the barrel muon stations and runs dedicated algorithms to refine and correlate the information of multiple chambers before sending it to the track finders for momentum estimation. One of the proposed algorithms is meant to detect and identify muon showers, allowing the tagging of both hadronic showers in the muon system and highly energetic muons that would otherwise be missed. We review the current status of this algorithm, its demonstration in firmware and its measured physics performance.

        Speaker: Javier Prado Pico (Universidad de Oviedo (ES))
      • 15:38
        Flash Talk: Real-time long-lived particle reconstruction in LHCb’s HLT2 3m

        LHCb is optimised to study particles decaying a few millimetres from the primary vertex using tracks that traverse the length of the detector. Recently, extensive efforts have been undertaken to enable the study of long-lived particles decaying within the magnet region, up to 7.5 m from the interaction point. This approach presents several challenges, particularly when considering real-time analysis within LHCb’s trigger system. These include large track combinatorics, a tracker with a low magnetic field and short lever arm, as well as the need to extrapolate tracks through a strong, inhomogeneous magnetic field to find vertices. Several approaches have been developed to tackle these challenges in LHCb’s HLT2, including new geometry-based selections, MVA-based vertex finding and modifications to vertex fitting. This talk presents these developments and future prospects.

        Speaker: Izaac Sanderswood (Univ. of Valencia and CSIC (ES))
      • 15:41
        Flash Talk: Clustering and tracking in dense environments with the ITk 3m

        Dense hadronic environments encountered, for example, in the core of high-transverse-momentum jets present specific challenges for the reconstruction of charged-particle trajectories (tracks) in the ATLAS tracking detectors, as they are characterised by a high density of ionising particles. The charge clusters left by these particles in the silicon sensors are more likely to merge with increasing particle densities, especially in the innermost layers of the ATLAS silicon-pixel detectors. This has detrimental effects on both the track reconstruction efficiency and the precision with which the track parameters can be measured. The new Inner Tracker (ITk), which will replace the Inner Detector for the High-Luminosity LHC programme, features an improved granularity due to its smaller pixel sensor size, which is expected to reduce cluster merging rates in dense environments. In this contribution, the cluster and track reconstruction performance in dense environments is studied with the most recent ITk layout (23-00-03). Different quantities are studied to assess the effects of cluster merging at the cluster, track and jet level.

        Speaker: Nicola De Biase (Deutsches Elektronen-Synchrotron (DE))
    • 15:45 – 16:10
      Coffee Break 25m
    • 16:10 – 17:40
      Plenary Auditorium (Le Village)

      Convener: David Lange (Princeton University (US))
      • 16:10
        High-throughput GNN track reconstruction at LHCb 25m

        Over the next decade, increases in instantaneous luminosity and detector granularity will increase the amount of data that has to be analyzed by high-energy physics experiments, whether in real time or offline, by an order of magnitude. The reconstruction of charged particles, which has always been a crucial element of offline data processing pipelines, must increasingly be deployed from the very first stages of real-time processing to enable experiments to achieve their physics goals. Graph Neural Networks have received a great deal of attention in the community because their computational complexity scales linearly with the number of hits in the detector, unlike conventional algorithms, which often scale quadratically or worse. We present a first implementation of the vertex detector reconstruction for the LHCb experiment using GNNs, and benchmark its computational performance in the context of LHCb's fully GPU-based first-level trigger system, Allen. As Allen performs charged-particle reconstruction at the full LHC collision rate, over 20 MHz in the ongoing Run 3, each GPU card must process around one hundred thousand collisions per second. Our work is the first attempt to operate GNN charged-particle reconstruction in such a high-throughput environment using GPUs, and we discuss the pros and cons of the GNN and classical algorithms in a detailed like-for-like comparison.

        Speaker: Anthony Correia (Centre National de la Recherche Scientifique (FR))
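
        For scale, the quoted throughput translates directly into a per-event time budget:

        ```python
        events_per_second_per_gpu = 100_000          # "around one hundred thousand collisions per second"
        budget_us = 1e6 / events_per_second_per_gpu  # 10 microseconds per event,
        print(f"{budget_us:.0f} us")                 # for the full HLT1 sequence, GNN included
        ```
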
      • 16:40
        Application of single-layer particle tracking for radiation field decomposition and interaction point reconstruction at MoEDAL 25m

        In particle physics experiments, hybrid pixel detectors are an integral part of the tracking systems closest to the interaction points. Offering excellent spatial resolution and high radiation resilience, they are used for particle tracking via the "connecting the dots" method, in layers arranged in an onion-like structure. In the context of the Medipix Collaborations, a novel, complementary approach to particle detection has been proposed. This approach relies on the analysis of tracks seen in a pixel matrix. Characteristic track features are exploited for the identification of impinging particles, precise particle trajectory reconstruction, or reaction kinematics reconstruction.

        The presented work concentrates on hybrid silicon detectors of the Timepix3 type [1], which consist of a radiation-sensitive layer, typically made of silicon, with dimensions 1.408 × 1.408 × 0.05 cm. The active material is bump-bonded to a readout chip at 256 × 256 points with a pitch of 55 μm. Using these detectors, novel algorithms for particle fluence, particle identification, and particle trajectory reconstruction (tracking) are developed, mainly for single-layer detectors. These new algorithms were trained and extensively tested with simulation data, then verified with real-world data sets of known particle composition, outperforming the state of the art [2,3] in terms of accuracy and stability. In particular, a significant improvement in the tracking resolution was achieved. The capability of proton spectrum determination in compact single-layer detectors was subsequently demonstrated at a hadron therapy facility and using data acquired by the Space Application Timepix Radiation Monitor (SATRAM) in the inner Van Allen radiation belt.

        As a high-energy physics application, the methods were applied to data taken at the MoEDAL-MAPP (Monopoles and Exotics Detector At LHCb) experiment [4,5] at the Large Hadron Collider at CERN, where a Timepix3 detector sits at a distance of 1 m from the interaction point. The improved particle tracking algorithms and an unobstructed view allow for the determination of the particle trajectories arising at the point of collision of the opposing beams. The improved resolution permits a quantification of the structural properties of the field, showing clear variation between lead-lead and proton-proton collision periods (Figure 1). Similarly, the interaction point and background sources were isolated separately, allowing for the individual classification of the spectral properties of both field contributions.

        Figure 1: Measured field directionality structure during lead-lead (left) and proton-proton (right) collision periods at MoEDAL, using a random forest regressor combined with particle-track line-fit methods.

        [1] Poikela, T., et al. "Timepix3: a 65K channel hybrid pixel readout chip with simultaneous ToA/ToT and sparse readout." Journal of instrumentation 9.05 (2014): C05013.

        [2] Bergmann, B., et al. "Particle tracking and radiation field characterization with Timepix3 in ATLAS." Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 978 (2020): 164401.

        [3] P. Mánek et al. “Improved algorithms for determination of particle directions with Timepix3”. In: Journal of Instrumentation 17.01 (Jan. 2022), p. C01062. URL: https://dx.doi.org/10.1088/1748-0221/17/01/C01062

        [4] M. Fairbairn and J. L. Pinfold, “MoEDAL – a new light on the high-energy frontier”, Contemporary Physics, 58:1, pp. 1-24 (2017). doi: 10.1080/00107514.2016.1222649

        [5] J. Pinfold (MoEDAL collaboration), “MoEDAL-MAPP – an LHC Dedicated Detector Search Facility” URL: https://arxiv.org/abs/2209.03988

        Speaker: Declan Garvey (Institute of Experimental and Applied Physics, Czech Technical University in Prague)
      • 17:10
        A Multipurpose Graph Neural Network for Reconstruction in LArTPC Detectors 25m

        The Exa.TrkX Graph Neural Network (GNN) for reconstruction of liquid argon time projection chamber (LArTPC) data is a message-passing attention network over a heterogeneous graph structure, with separate subgraphs of 2D nodes (hits in each plane) connected across planes via 3D nodes (space points). The model provides a consistent description of the neutrino interaction across all planes.

        The GNN initially performed a semantic segmentation task, classifying detector hits according to the particle type that produced them. Performance results will be presented based on publicly available samples from MicroBooNE. These include both physics performance metrics, achieving ~95% accuracy when integrated over all particle classes, and computational metrics for training and for inference on CPU or GPU.

        We will also present recent work extending the network's application to additional LArTPC reconstruction tasks, such as cosmic background and noise filtering, interaction vertex position identification, and particle instance segmentation. Early results indicate that the network achieves excellent filtering performance without increasing the network size, demonstrating that the set of learned features is somewhat general and relevant for multiple tasks.

        Prospects for the integration of the network inference in the data processing chains of LArTPC experiments will also be presented.

        Speaker: Giuseppe Cerati (Fermi National Accelerator Lab. (US))
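
        In PyTorch Geometric terms, the heterogeneous structure described above could be expressed roughly as follows; node counts, feature widths and relation names are invented for illustration and are not the Exa.TrkX schema.

        ```python
        import torch
        from torch_geometric.data import HeteroData

        data = HeteroData()
        data['hit_u'].x = torch.randn(120, 4)    # 2D hits in the U induction plane
        data['hit_v'].x = torch.randn(110, 4)    # 2D hits in the V induction plane
        data['hit_y'].x = torch.randn(130, 4)    # 2D hits in the collection plane
        data['sp'].x = torch.randn(90, 3)        # 3D space points bridging the planes

        # intra-plane edges between 2D hits, and cross-plane edges through 3D nodes
        data['hit_u', 'in_plane', 'hit_u'].edge_index = torch.randint(0, 120, (2, 300))
        data['hit_u', 'forms', 'sp'].edge_index = torch.stack(
            [torch.randint(0, 120, (90,)), torch.arange(90)])
        ```
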
    • 17:40 – 17:50
      Poster Flash Talk Auditorium (Le Village)

      Convener: Alexis Vallier (L2I Toulouse, CNRS/IN2P3, UT3)
      • 17:40
        Flash Talk: Heterogeneity in Graph Neural Networks for Track Reconstruction in the ATLAS Upgrade ITk Detector 3m

        The upcoming High Luminosity phase of the Large Hadron Collider (HL-LHC) represents a steep increase in pileup rate ($\left\langle\mu \right\rangle = 200$) and computing resources for offline reconstruction of the ATLAS Inner Tracker (ITk), for which graph neural networks (GNNs) have been demonstrated as a promising solution. The GNN4ITk pipeline has successfully employed a GNN architecture for edge classification and showed the competitiveness of this approach in HL-LHC-like pile-up conditions. We present in this study a new heterogeneous GNN architecture that handles the pixel- and strip-subdetectors using separate graph subnetworks, reflecting the naturally differing geometries of these regions. We investigate the impact of varying the degree of heterogeneity in the model and identify the optimal complexity associated with a heterogeneous architecture. In addition, we examine the use of the underlying hit cluster information in model training and demonstrate that cluster-level input is richer and more discriminatory than space point coordinates alone. With this added sophistication, the track reconstruction efficiency and fake rate of the GNN4ITk pipeline using the heterogeneous GNN compares favourably with the results from the homogeneous GNN.

        Speaker: Mr Minh-Tuan Pham (University of Wisconsin Madison (US))
      • 17:43
        Flash Talk: Flexible Hough Transform FPGA Implementation for the ATLAS Event Filter 3m

        The ATLAS Run 3 will conclude as planned in late 2025 and will be followed by the so-called Long Shutdown 3. During this period, all activities dedicated exclusively to Run 4 will converge on closing the prototyping phase and starting production and integration, in order to reach data taking in 2029. These upgrades are principally driven by the increase of the LHC peak luminosity up to 5-7.5 × 10^34 cm^-2 s^-1 expected for the future High-Luminosity LHC operations. One of the major changes ATLAS will face is an increase in the amount of data to be managed by its Trigger and Data Acquisition system: the trigger must reduce the 40 MHz input event rate to 1 MHz at the intermediate stage and finally to 10 kHz at the last stage. The second stage will use all the information acquired by the ATLAS detector to complete the event selection, including the study of tracks in the innermost sub-detector, the Inner Tracker. This tracking step is planned to be performed on a PC farm because of the need for high precision. The architectures under study to speed up this process include a "hardware accelerator" farm, an infrastructure made of interconnected accelerators such as GPUs and FPGAs. The project described here is a proposal for a tuned Hough Transform implementation on high-end FPGA technology, versatile enough to adapt to different tracking situations. The development platform allows the study of different datasets from the ATHENA software simulating the firmware. An AMD-Xilinx FPGA has been chosen to test and evaluate this implementation. The system was tested in a realistic ATLAS environment: simulated events with 200 pile-up were used to measure the effectiveness of the algorithm. According to internal preliminary estimates, the processing time is on average of the order of < 10 μs, with the possibility to run two events at a time per algorithm instance. Internal efficiency tests have shown conditions that reach > 95% of the track-finding performance for single-muon tracking.

        Speaker: Fabrizio Alfonsi (Universita e INFN, Bologna (IT))
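
        The underlying transform is simple to state in software, which is also how such firmware is typically validated against simulation. Below is a toy r-phi accumulator in which each hit votes for the track parameters (q/pT, phi0) compatible with it; the binning, the constant k and the small-angle helix approximation phi0 ≈ phi + k·(q/pT)·r are illustrative, not the firmware's.

        ```python
        import numpy as np

        def hough_accumulate(r, phi, n_qpt=100, n_phi0=64, qpt_max=1.0, k=3e-4):
            acc = np.zeros((n_qpt, n_phi0), dtype=np.int32)
            qpt = np.linspace(-qpt_max, qpt_max, n_qpt)        # q/pT bin centres
            for ri, phii in zip(r, phi):
                phi0 = phii + k * qpt * ri                     # one phi0 candidate per q/pT bin
                cols = ((phi0 % (2 * np.pi)) / (2 * np.pi) * n_phi0).astype(int)
                acc[np.arange(n_qpt), cols] += 1               # each hit votes along a curve
            return acc                                         # local maxima = track candidates
        ```
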
      • 17:46
        Flash Talk: Obtaining requirements for the future ATLAS Event Filter Tracking system 3m

        The High-Luminosity LHC will be able to provide a maximum peak luminosity of 5 × $10^{34}$ cm$^{−2}$s$^{−1}$, corresponding to an average of 140 simultaneous p-p interactions per bunch crossing (pile-up), at the start of Run 4, around 2028. The ATLAS experiment will go through major changes to adapt to the high-luminosity environment, in particular in the DAQ architecture and in the trigger selections. The use of the new high-resolution, full-silicon inner tracker (ITk) in the high-level trigger (also called Event Filter) is of paramount importance to improve the trigger selection and purity and to reduce the trigger rates against the large background of low-energy pile-up jets. The Event Filter Tracking system is under design as a heterogeneous and flexible system, able to combine algorithms running on CPUs and on accelerators, like FPGAs and/or GPUs, on commodity servers, to allow both a regional reconstruction at the expected 1 MHz L1 rate and full event reconstruction at 150 kHz. The challenge of the Event Filter Tracking design is to maximize the tracking performance of the algorithms while maintaining an adequate data throughput through the processor farm, using reasonable power. In this presentation, the results of studies performed to evaluate the minimal tracking performance required are presented. In particular, the tracking efficiency and the resolution on the track transverse momentum will impact the lepton selections, while the resolution on the track impact parameters will affect the hadronic selections, including b-tagging and multi-jet selections.

        Speaker: Gregory Penn
    • 19:00 – 21:00
      Poster: Poster Session & Welcome Cocktail Place du village (Le Village)

      • 19:00
        The Layer-1 Barrel Muon Filter for the Level-1 muon trigger upgrade of the CMS experiment at the HL-LHC 3m

        In view of the HL-LHC, the Phase-2 CMS upgrade will replace the entire trigger and data acquisition system. The detector readout electronics will be upgraded to allow a maximum L1 accept rate of 750 kHz and a latency of 12.5 µs. The muon trigger is a multi-layer system designed to reconstruct muon stubs in each muon station, and then to measure the momenta of the muons by correlating information across muon chambers in the so-called muon track finders and by matching the reconstructed stubs with the L1 tracker tracks sent by the track trigger. This is achieved with sophisticated pattern recognition algorithms that run on Virtex UltraScale+ FPGA processors. The Layer-1 Barrel Muon Filter is the second layer of this system; it concentrates the stubs and hits from the barrel muon stations and runs dedicated algorithms to refine and correlate the information of multiple chambers before sending it to the track finders for momentum estimation. One of the proposed algorithms is meant to detect and identify muon showers, allowing the tagging of both hadronic showers in the muon system and highly energetic muons that would otherwise be missed. We review the current status of this algorithm, its demonstration in firmware and its measured physics performance.

        Speaker: Javier Prado Pico (Universidad de Oviedo (ES))
      • 19:03
        Flexible Hough Transform FPGA Implementation for the ATLAS Event Filter 3m

        The ATLAS Run 3 will conclude as planned in late 2025 and will be followed by the so-called Long Shutdown 3. During this period, all activities dedicated exclusively to Run 4 will converge on closing the prototyping phase and starting production and integration, in order to reach data taking in 2029. These upgrades are principally driven by the increase of the LHC peak luminosity up to 5-7.5 × 10^34 cm^-2 s^-1 expected for the future High-Luminosity LHC operations. One of the major changes ATLAS will face is an increase in the amount of data to be managed by its Trigger and Data Acquisition system: the trigger must reduce the 40 MHz input event rate to 1 MHz at the intermediate stage and finally to 10 kHz at the last stage. The second stage will use all the information acquired by the ATLAS detector to complete the event selection, including the study of tracks in the innermost sub-detector, the Inner Tracker. This tracking step is planned to be performed on a PC farm because of the need for high precision. The architectures under study to speed up this process include a "hardware accelerator" farm, an infrastructure made of interconnected accelerators such as GPUs and FPGAs. The project described here is a proposal for a tuned Hough Transform implementation on high-end FPGA technology, versatile enough to adapt to different tracking situations. The development platform allows the study of different datasets from the ATHENA software simulating the firmware. An AMD-Xilinx FPGA has been chosen to test and evaluate this implementation. The system was tested in a realistic ATLAS environment: simulated events with 200 pile-up were used to measure the effectiveness of the algorithm. According to internal preliminary estimates, the processing time is on average of the order of < 10 μs, with the possibility to run two events at a time per algorithm instance. Internal efficiency tests have shown conditions that reach > 95% of the track-finding performance for single-muon tracking.

        Speaker: Fabrizio Alfonsi (Universita e INFN, Bologna (IT))
      • 19:06
        Obtaining requirements for the future ATLAS Event Filter Tracking system 3m

        The High-Luminosity LHC will be able to provide a maximum peak luminosity of 5 × $10^{34}$ cm$^{−2}$s$^{−1}$, corresponding to an average of 140 simultaneous p-p interactions per bunch crossing (pile-up), at the start of Run 4, around 2028. The ATLAS experiment will go through major changes to adapt to the high-luminosity environment, in particular in the DAQ architecture and in the trigger selections. The use of the new high-resolution, full-silicon inner tracker (ITk) in the high-level trigger (also called Event Filter) is of paramount importance to improve the trigger selection and purity and to reduce the trigger rates against the large background of low-energy pile-up jets. The Event Filter Tracking system is under design as a heterogeneous and flexible system, able to combine algorithms running on CPUs and on accelerators, like FPGAs and/or GPUs, on commodity servers, to allow both a regional reconstruction at the expected 1 MHz L1 rate and full event reconstruction at 150 kHz. The challenge of the Event Filter Tracking design is to maximize the tracking performance of the algorithms while maintaining an adequate data throughput through the processor farm, using reasonable power. In this presentation, the results of studies performed to evaluate the minimal tracking performance required are presented. In particular, the tracking efficiency and the resolution on the track transverse momentum will impact the lepton selections, while the resolution on the track impact parameters will affect the hadronic selections, including b-tagging and multi-jet selections.

        Speaker: Gregory Penn
      • 19:09
        Real-time long-lived particle reconstruction in LHCb’s HLT2 3m

        LHCb is optimised to study particles decaying a few millimetres from the primary vertex using tracks that traverse the length of the detector. Recently, extensive efforts have been undertaken to enable the study of long-lived particles decaying within the magnet region, up to 7.5 m from the interaction point. This approach presents several challenges, particularly when considering real-time analysis within LHCb’s trigger system. These include large track combinatorics, a tracker with a low magnetic field and short lever arm, as well as the need to extrapolate tracks through a strong, inhomogeneous magnetic field to find vertices. Several approaches have been developed to tackle these challenges in LHCb’s HLT2, including new geometry-based selections, MVA-based vertex finding and modifications to vertex fitting. This talk presents these developments and future prospects.

        Speaker: Izaac Sanderswood (Univ. of Valencia and CSIC (ES))
      • 19:12
        Seeding with Machine Learning in ACTS 3m

        To prepare for the High-Luminosity phase of the Large Hadron Collider at CERN (HL-LHC), the ATLAS experiment is replacing its innermost components with a full-silicon tracker (ITk) to improve the spatial resolution of track measurements and increase the data readout rate. However, this upgrade alone will not be sufficient to cope with the tremendous increase in luminosity, and significant improvements have to be incorporated into the existing tracking software to keep the required computing resources at a realistic level.

        In this poster, we focus on track seed reconstruction within the ITk detector and explore the possibility of using hashing techniques to improve the seed reconstruction efficiency, limit the combinatorics and ultimately reduce the computing time. Metric learning is then used to tune our algorithm for the different regions of the detector and to increase the robustness against time-dependent detector conditions.

        The code developments are done within the ACTS framework, an experiment-independent toolkit for charged-particle track reconstruction.

        Speaker: Jeremy Couthures (Centre National de la Recherche Scientifique (FR))
      • 19:15
        Improvement of event-building for data-driven hybrid pixel detector data 3m

        Hybrid pixel detectors like Timepix3 and Timepix4 detect individual pixels hit by particles. For further analysis, individual hits from such sensors need to be grouped into spatially and temporally coinciding groups called clusters. While state-of-the-art Timepix3 detectors generate up to 80 million hits per second, the next generation, Timepix4, will provide data rates of up to 640 million hits per second (a data bandwidth of up to 164 Gbps), far beyond the current capabilities of real-time clustering algorithms, which process roughly 3 MHits/s. We explore options for accelerating the clustering process, focusing on its real-time application. We developed a tool that utilizes multicore CPUs to speed up the clustering. Despite the interdependence of different data subsets, we achieve a speed-up that scales with the number of cores used. Further, we exploited options to reduce the computational demands of the clustering by determining radiation field parameters from raw (unclustered) data features and self-initiating further clustering if these data show signs of interesting events. This further accelerates the clustering while also reducing storage space requirements. The proposed methods were validated and benchmarked using real-world and simulated datasets.

        Speaker: Tomas Celko (Czech Technical University in Prague (CZ))
      • 19:18
        Graph Neural Network-based Tracking as a Service 3m

        Recent studies have shown promising results for track finding in dense environments using Graph Neural Network (GNN)-based algorithms. These algorithms not only provide high track efficiency but also offer reasonable track resolutions. However, GNN-based track finding is computationally slow on CPUs, necessitating the use of coprocessors like GPUs to accelerate the inference time. Additionally, due to the substantial graph size typically involved (consisting of approximately 300k nodes and 1M edges), significant GPU memory is required to ensure efficient computation. Not all computing facilities used for particle physics experiments are equipped with high-end GPUs such as NVIDIA A100s or V100s, which can meet the computational requirements. These computing challenges must be addressed in order to deploy GNN-based track finding into production. We propose addressing these challenges by establishing the GNN-based track finding algorithm as a service hosted either in the cloud or high-performance computing centers.

        In this talk, we will describe the implementation of the GNN-based track finding workflow as a service using the NVIDIA Triton Inference Server. The pipeline contains three discrete deep-learning models and two CUDA-based algorithms. Because of the heterogeneity in the workflow, we explore different server configurations to maximize the throughput of track finding and the GPU utilization. We also study the scalability of the inference server using the Perlmutter supercomputer at NERSC and cloud resources like AWS and Google Cloud.

        Speaker: Xiangyang Ju (Lawrence Berkeley National Lab. (US))
      • 19:21
        Heterogeneity in Graph Neural Networks for Track Reconstruction in the ATLAS Upgrade ITk Detector 3m

        The upcoming High Luminosity phase of the Large Hadron Collider (HL-LHC) represents a steep increase in pileup rate ($\left\langle\mu \right\rangle = 200$) and computing resources for offline reconstruction of the ATLAS Inner Tracker (ITk), for which graph neural networks (GNNs) have been demonstrated as a promising solution. The GNN4ITk pipeline has successfully employed a GNN architecture for edge classification and showed the competitiveness of this approach in HL-LHC-like pile-up conditions. We present in this study a new heterogeneous GNN architecture that handles the pixel- and strip-subdetectors using separate graph subnetworks, reflecting the naturally differing geometries of these regions. We investigate the impact of varying the degree of heterogeneity in the model and identify the optimal complexity associated with a heterogeneous architecture. In addition, we examine the use of the underlying hit cluster information in model training and demonstrate that cluster-level input is richer and more discriminatory than space point coordinates alone. With this added sophistication, the track reconstruction efficiency and fake rate of the GNN4ITk pipeline using the heterogeneous GNN compares favourably with the results from the homogeneous GNN.

        Speaker: Mr Minh-Tuan Pham (University of Wisconsin Madison (US))
      • 19:24
        Clustering and tracking in dense environments with the ITk 20m

        Dense hadronic environments encountered, for example, in the core of high-transverse-momentum jets present specific challenges for the reconstruction of charged-particle trajectories (tracks) in the ATLAS tracking detectors, as they are characterised by a high density of ionising particles. The charge clusters left by these particles in the silicon sensors are more likely to merge with increasing particle densities, especially in the innermost layers of the ATLAS silicon-pixel detectors. This has detrimental effects on both the track reconstruction efficiency and the precision with which the track parameters can be measured. The new Inner Tracker (ITk), which will replace the Inner Detector for the High-Luminosity LHC programme, features an improved granularity due to its smaller pixel sensor size, which is expected to reduce cluster merging rates in dense environments. In this contribution, the cluster and track reconstruction performance in dense environments is studied with the most recent ITk layout (23-00-03). Different quantities are studied to assess the effects of cluster merging at the cluster, track and jet level.

        Speaker: Nicola De Biase (Deutsches Elektronen-Synchrotron (DE))
    • 09:00 – 10:30
      Plenary Auditorium (Le Village)

      Convener: Paolo Calafiura (Lawrence Berkeley National Lab. (US))
      • 09:00
        First experiences with the LHCb heterogeneous software trigger 25m

        Since 2022, the LHCb detector has been taking data with a full software trigger at the LHC proton-proton collision rate, implemented on GPUs in the first stage and on CPUs in the second stage. This setup allows the alignment and calibration to be performed online, and physics analyses to be carried out directly on the output of the online reconstruction, following the real-time analysis paradigm.

        This talk will discuss the challenges of the heterogeneous trigger implementation, report on the first running experience, and show preliminary performance results for both stages of the trigger system.

        Speaker: Andy Morris (Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France)
      • 09:30
        Downstream: a new algorithm at LHCb to reconstruct long-lived particles in the first level of the trigger 25m

        Long-lived particles (LLPs) are present in the SM and in many new-physics scenarios beyond it, but they are very challenging to reconstruct at the LHC due to their very displaced vertices. A new algorithm, called "Downstream", has been developed at LHCb which is able to reconstruct and select LLPs in real time at the first level of the trigger (HLT1). It is executed on GPUs inside the Allen framework and, in addition to an optimized strategy, it uses a neural network (NN) implementation to increase the track efficiency and reduce the ghost rates, with very high throughput and a limited time budget. Besides serving to calibrate and align the detectors with K_S and Λ particles, the Downstream algorithm will largely increase the LHCb physics potential during Run 3.

        Speaker: Arantza De Oyanguren Campos (Univ. of Valencia and CSIC (ES))
      • 10:00
        The performance of the ATLAS Inner Detector tracking trigger, including new long-lived particle triggers, in high pileup collisions at 13.6 TeV at the Large Hadron Collider in Run-3 25m

        The performance of the Inner Detector tracking trigger of the ATLAS experiment at the Large Hadron Collider (LHC) is evaluated for the data taken during LHC Run 3 in 2022. Included are results from the evolved standard trigger track reconstruction and from new unconventional tracking strategies used in the trigger for the first time in Run 3. From Run 3, the application of Inner Detector tracking in the trigger has been significantly expanded; in particular, full-detector tracking is utilised for hadronic signatures (such as jet and missing-transverse-energy triggers) for the first time. To meet computing resource limitations, several new features, including machine-learning-based track seeding, have been developed and are discussed, together with many additional improvements with respect to the trigger tracking used in LHC Run 2.

        The LHC, as the world's highest-energy particle accelerator, provides a unique opportunity to search directly for new physics beyond the Standard Model (BSM). Massive long-lived particles (LLPs), which are absent in the Standard Model, occur in many well-motivated BSM theories. These new massive LLPs can decay into other particles far from the LHC interaction region, resulting in unusual experimental signatures and hence requiring customised and complex experimental techniques for their identification. Prior to Run 3, the ATLAS trigger did not include dedicated tracking triggers for the explicit identification of massive LLPs decaying in the inner tracking detectors. To enhance the sensitivity of searches, a series of new triggers customised for various unconventional tracking signatures, such as "displaced" tracks and short tracks which "disappear" within the tracking detector, have been developed for Run 3 data taking, starting from 2022. The high performance of the Inner Detector trigger remains essential for the ATLAS physics programme in the Run 3 data, in particular for the many precision measurements of the Standard Model and now for the searches for new physics. For the first time, the development and performance of these new triggers for the 2022 data taking is presented, together with that from standard tracking.

        Speaker: Jonathan Long (Univ. Illinois at Urbana Champaign (US))
    • 10:30 11:00
      Coffee Break & Conference Photo 30m Place du village (Le Village)

      Place du village

      Le Village

    • 11:00 12:20
      YSF Plenary Auditorium (Le Village)

      Auditorium

      Le Village

      Convener: Giuseppe Cerati (Fermi National Accelerator Lab. (US))
      • 11:00
        On-the-fly measurement calibration with ACTS 15m

        Kalman Filter (KF)-based tracking algorithms are used by many collider experiments to reconstruct charged-particle trajectories with great performance. The input to such algorithms are usually point estimates of a particle's crossing on a detector's sensitive elements, known as measurements. For instance, in a pixel detector, connected component analysis is typically used to yield two-dimensional pixel clusters on which shape analysis is performed to obtain a position estimate. Such estimates can usually be made more precise if some information about the fitted track's direction is available. Kalman Filter-based pipelines can thus readily benefit from on-the-fly measurement calibration, since the KF always makes a prediction of the current track state, which includes track angles, before incorporating each measurement. Measurement calibration can also be used to correct for detector effects such as wire sagging or module deformation, and may also be used to improve convergence when performing track finding and fitting with misaligned detector geometries. All of these calibrations are well suited to machine learning applications.

        ACTS is an experiment-independent toolkit for charged-particle tracking which includes implementations of Kalman Filter track finding and fitting algorithms. This contribution will focus on the measurement calibration infrastructure implemented in ACTS and will present results from applications of realistic measurement calibration methods, from simple scale-and-offset schemes to sophisticated neural-network-based techniques.
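
        As an illustration of the simplest scheme mentioned above, a scale-and-offset calibrator that uses the KF-predicted track angle could look as follows (a minimal Python sketch; the function, constants and angle correction are hypothetical, not the ACTS API):

        import numpy as np

        def calibrate_cluster(local_pos, pred_theta, scale=1.02, offset=0.005):
            """Toy scale-and-offset calibration of a 1D cluster position.

            local_pos  : uncalibrated position estimate from the cluster (mm)
            pred_theta : track incidence angle predicted by the KF (rad)
            scale, offset : hypothetical constants, e.g. fitted on simulation
            """
            # Correct a bias that grows with the incidence angle: shallow
            # crossings spread charge over more pixels and shift the estimate.
            return scale * local_pos + offset * np.tan(pred_theta)

        # In a KF pipeline the predicted state is available before each update,
        # so the calibrated measurement feeds directly into the filter step:
        measurement = calibrate_cluster(local_pos=0.213, pred_theta=0.4)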

        Speaker: Louis-Guillaume Gagnon (University of California Berkeley (US))
      • 11:20
        Reconstruction performance with ACTS and the Open Data Detector 15m

        Over the last years, the ACTS software has matured in functionality and performance, while at the same time the Open Data Detector (ODD), a revision and evolution of the TrackML detector, has been established. Together they form a foundation for algorithmic research and performance evaluation, including for detectors with time measurements such as the ODD. In this contribution we present the performance for reference physics samples as a baseline for a reconstruction chain implemented with ACTS for silicon-based detectors. This serves as a validation of both the ODD geometry and the ACTS reconstruction algorithms, and at the same time as a reference for experiments looking into ACTS reconstruction performance. Additionally, we use it to validate the ACTS intrinsic fast track simulation (ActsFatras) and present a coherent continuous-integration testing suite to monitor performance changes over time.

        Speaker: Andreas Stefl (Technische Universitaet Wien (AT))
      • 11:40
        A Generalist Model for Particle Tracking 15m

        The application of deep learning models in particle tracking is pervasive: graph neural networks are applied in track finding, deep learning models in resolving merged tracks, Transformers in jet flavor tagging, and GravNet or its variations in one-shot track finding. The current practice is to design one deep learning model for one task. However, these tasks are so deeply intertwined that factorizing them inevitably loses information and hurts overall performance. We propose to design an intermediate generalist model that offers learned detector encodings for various particle tracking tasks.

        Inspired by BERT, which pre-trains deep bidirectional transformers for language understanding, we propose to train deep bidirectional transformers to encode the detector modules for particle tracking. Similarly, we define two surrogate tasks for the training: one is to predict masked hits in a particle track, and the other is to predict whether track A has higher momentum than track B. The goal is to obtain novel representations of detector modules and to use those representations for various downstream tasks, including outlier/hole detection and track generation.

        In this talk, we will present the preliminary results of training the BERT model for particle tracking and show the first application of the novel detector module representations for hole detection and track extrapolation. This study can be potentially extended to encode the whole particle detectors, including calorimeters and muon spectrometers, for more downstream particle reconstruction tasks.
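
        As a rough illustration of the masked-hit surrogate task, a toy PyTorch sketch (shapes, module counts and architecture are assumptions, not the speakers' model):

        import torch
        import torch.nn as nn

        class MaskedHitModel(nn.Module):
            """Toy bidirectional transformer for the masked-hit surrogate task."""
            def __init__(self, n_modules=1000, d_model=64):
                super().__init__()
                self.module_emb = nn.Embedding(n_modules, d_model)  # module encoding
                self.hit_proj = nn.Linear(3, d_model)               # (x, y, z) per hit
                layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, num_layers=2)
                self.head = nn.Linear(d_model, 3)       # regress masked hit position

            def forward(self, module_ids, hits, mask):
                x = self.module_emb(module_ids) + self.hit_proj(hits)
                x = x.masked_fill(mask.unsqueeze(-1), 0.0)          # hide masked hits
                return self.head(self.encoder(x))

        model = MaskedHitModel()
        module_ids = torch.randint(0, 1000, (8, 12))    # batch of 8 tracks, 12 hits
        hits = torch.randn(8, 12, 3)
        mask = torch.zeros(8, 12, dtype=torch.bool)
        mask[:, 5] = True                               # hide one hit per track
        pred = model(module_ids, hits, mask)
        loss = ((pred[mask] - hits[mask]) ** 2).mean()  # loss on masked positions only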

        Speaker: Xiangyang Ju (Lawrence Berkeley National Lab. (US))
      • 12:00
        Developing Novel Track Reconstruction Algorithms for the Mu2e Experiment 15m

        The Mu2e experiment plans to search for neutrinoless muon to electron conversion in the field of a nucleus. Such a process violates lepton flavor conservation. To perform this search, a muon beam is focused on an aluminum target, the muons are stopped in the field of the aluminum nucleus, and electrons emitted from subsequent muon decays in orbit are measured. The endpoint energy for this process is 104.97 MeV; an excess of measured electrons at this energy signifies neutrinoless muon to electron conversion has occurred. Currently under construction at the Fermilab Muon Campus, Mu2e will stop $10^{18}$ muons on target in 3 years of running, with the goal of reaching a single event sensitivity of $3 \times 10^{-17}$ on the branching ratio. In order to reach such a sensitivity, one must write software that efficiently reconstructs the tracks of conversion electrons that pass through the Mu2e tracker. This has been achieved by breaking the reconstruction process down into four successive steps: hit reconstruction, time clustering, helix finding, and a final track fitting. One shortcoming of the current code is that the time clustering and helix finding stages make various assumptions that make them highly tuned to conversion electrons at the endpoint energy. This limits the collaboration’s ability to constrain some backgrounds, and search for a larger range of physics. The work presented here details the development of novel time clustering and helix finding algorithms, and how they fit into the Mu2e trigger system.
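
        To give a flavour of the time-clustering step, a deliberately simple sketch in Python (the gap threshold and units are illustrative assumptions, not Mu2e parameters):

        import numpy as np

        def time_cluster(hit_times, max_gap=30.0):
            """Group hits into time clusters: sort the hit times and split
            wherever consecutive times differ by more than max_gap (ns).
            Returns a list of index arrays, one per cluster."""
            order = np.argsort(hit_times)
            t = hit_times[order]
            splits = np.where(np.diff(t) > max_gap)[0] + 1
            return [order[g] for g in np.split(np.arange(len(t)), splits)]

        hit_times = np.array([501.0, 512.0, 498.0, 880.0, 865.0, 873.0])  # ns
        clusters = time_cluster(hit_times)   # -> two clusters of three hits each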

        Speaker: Matthew Stortini
    • 12:20 14:00
      Lunch Break 1h 40m Place du village (Le Village)

      Place du village

      Le Village

    • 14:00 15:30
      Plenary Auditorium (Le Village)

      Auditorium

      Le Village

      Convener: Salvador Marti I Garcia (IFIC-Valencia (UV/EG-CSIC))
      • 14:00
        Seed finding in the Acts Software Package: Algorithms and Optimizations 25m

        Seed finding is an important and computationally expensive problem in the reconstruction of charged particle tracks; finding solutions to this problem involves forming triples (seeds) of discrete points at which particles were detected (spacepoints) in the detector volume. This combinatorial process scales cubically with the number of spacepoints, which in turn is expected to increase in future collision experiments as well as in upgrades to current experiments such as the HL-LHC (High-Luminosity Large Hadron Collider). The Acts (A Common Tracking Software) package provides a broad range of algorithms – including seeding – for the reconstruction of charged particle tracks in a wide variety of detectors. In order to provide competitive performance – in terms of computation as well as physics – for future experiments, the Acts software provides highly optimized seed finding algorithms which can be configured for different detector geometries. In this talk, we describe the seeding algorithms in traccc which reduce the combinatorial explosion problem through the use of structured grids and k-dimensional search trees. We compare the performance of these algorithms in CPU- and GPU-based environments. Finally, we discuss strategies for reducing the volume of output seeds – which impacts the performance of other algorithms such as combinatorial Kalman filtering – such as seed filtering and seed merging. In particular, we propose to combine a clustering algorithm – such as DBSCAN – and a neural network with Margin Ranking Loss for an efficient and performant seed selection.
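
        As a toy illustration of tree-accelerated triplet seeding, a Python sketch (the cuts, units and curvature criterion are invented for the example and much simpler than the optimized Acts implementation):

        import numpy as np
        from scipy.spatial import cKDTree

        def find_seeds(spacepoints, search_radius=60.0, max_curv=0.02):
            """Form triplets of spacepoints, limiting the cubic combinatorial
            search with a k-d tree and keeping only triplets compatible with a
            small transverse-plane curvature (i.e. a high-momentum helix)."""
            tree = cKDTree(spacepoints[:, :2])    # search in the transverse plane
            seeds = []
            for i, sp in enumerate(spacepoints):
                for j in tree.query_ball_point(sp[:2], search_radius):
                    if j <= i:
                        continue
                    for k in tree.query_ball_point(spacepoints[j, :2], search_radius):
                        if k <= j:
                            continue
                        a, b, c = (spacepoints[i, :2], spacepoints[j, :2],
                                   spacepoints[k, :2])
                        ab, ac = b - a, c - a
                        area2 = abs(ab[0] * ac[1] - ab[1] * ac[0])  # 2x triangle area
                        d = (np.linalg.norm(a - b) * np.linalg.norm(b - c)
                             * np.linalg.norm(a - c))
                        # curvature of the circumcircle through the three points
                        if d > 0 and 2.0 * area2 / d < max_curv:
                            seeds.append((i, j, k))
            return seeds

        spacepoints = np.random.default_rng(0).normal(scale=100.0, size=(50, 3))
        seeds = find_seeds(spacepoints)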

        Speakers: Luis Falda Coelho (CERN), Corentin Allaire (IJCLab, Université Paris-Saclay, CNRS/IN2P3)
      • 14:30
        Study of a new algorithm for tracker alignment using Machine Learning 25m

        For tracker systems such as those of the large LHC experiments, a track-based alignment with offline software is performed. The standard approach involves minimising the residuals between the measured and track-predicted hits using the $\chi^2$ method. However, this minimisation process involves solving a complex and computationally expensive linearised matrix equation. A new approach utilising modern Machine Learning frameworks such as TensorFlow and/or PyTorch is being studied. In this study, the problem is addressed by leveraging the stochastic gradient descent and backpropagation algorithms implemented in these frameworks to minimise the $\chi^2$ as the cost function. A proof-of-principle example with a generic detector setup is presented.
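
        The core idea fits in a few lines of PyTorch, shown here for a toy setup with one offset per sensor module and synthetic data (not the study's actual detector model):

        import torch

        torch.manual_seed(0)
        n_modules, n_hits = 10, 5000
        true_offsets = 0.1 * torch.randn(n_modules)           # unknown misalignments
        module_of_hit = torch.randint(0, n_modules, (n_hits,))
        track_pred = torch.randn(n_hits)                      # track-model prediction
        measured = (track_pred + true_offsets[module_of_hit]
                    + 0.02 * torch.randn(n_hits))             # hits with Gaussian noise

        offsets = torch.zeros(n_modules, requires_grad=True)  # alignment parameters
        opt = torch.optim.SGD([offsets], lr=1.0)

        for _ in range(200):
            opt.zero_grad()
            residuals = measured - (track_pred + offsets[module_of_hit])
            chi2 = (residuals ** 2).mean()   # chi^2 per hit as the cost function
            chi2.backward()                  # backpropagation
            opt.step()                       # stochastic gradient descent step

        print(torch.allclose(offsets, true_offsets, atol=0.01))  # True: recovered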

        Speaker: Mr David Brunner (Stockholm University (SE))
      • 15:00
        Simultaneous multi-vertex reconstruction with a minimum-cost lifted multicut graph partitioning algorithm 25m

        Particle physics experiments often require the simultaneous reconstruction of many interaction vertices. This task is complicated by track reconstruction errors, which are frequently larger than the typical vertex-vertex distances in physics problems. Usually, the vertex finding problem is solved by ad hoc heuristic algorithms. We propose a universal approach to address multiple vertex finding in a dense environment through a principled formulation as a minimum-cost lifted multicut problem. The suggested algorithm is tested in a typical LHC environment with multiple pileup vertices produced by proton–proton interactions. The number of these vertices and their significant density in the beam interaction region make this case a challenging testbed for vertex-finding algorithms. To assess the vertexing performance in a dense environment with significant track reconstruction errors, several dedicated metrics are proposed. We demonstrate that the minimum-cost lifted multicut approach outperforms heuristic algorithms and works well up to the highest pileup vertex multiplicity expected at the HL-LHC.
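
        For intuition only, a greedy stand-in for the (NP-hard) minimum-cost multicut optimisation, clustering tracks by pairwise compatibility of their longitudinal impact parameters (all numbers invented for the example; the actual algorithm solves the lifted multicut problem, not this heuristic):

        import numpy as np

        def greedy_vertex_clustering(z0, sigma, cut=3.0):
            """Start with one cluster per track and repeatedly merge the most
            compatible pair of clusters; pairs more than `cut` sigma apart
            (positive-cost edges in multicut language) never merge."""
            clusters = [[i] for i in range(len(z0))]
            def worst_pair(c1, c2):   # least compatible track pair across clusters
                return max(abs(z0[i] - z0[j]) / np.hypot(sigma[i], sigma[j])
                           for i in c1 for j in c2)
            while True:
                best = None
                for a in range(len(clusters)):
                    for b in range(a + 1, len(clusters)):
                        s = worst_pair(clusters[a], clusters[b])
                        if s < cut and (best is None or s < best[0]):
                            best = (s, a, b)
                if best is None:
                    return clusters
                _, a, b = best
                clusters[a] += clusters.pop(b)

        z0 = np.array([0.10, 0.12, 0.09, 5.00, 5.05])   # mm; two true vertices
        sigma = np.full(5, 0.05)
        print(greedy_vertex_clustering(z0, sigma))      # two clusters: {0,1,2}, {3,4}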

        Speaker: Vadim Kostyukhin (Universitaet Siegen (DE))
    • 15:30 16:00
      Coffee Break 30m Place du village (Le Village)

      Place du village

      Le Village

    • 16:00 17:50
      Plenary Auditorium (Le Village)

      Auditorium

      Le Village

      Convener: Markus Elsing (CERN)
      • 16:00
        Physics Performance of the ATLAS GNN4ITk Track Reconstruction Chain 25m

        Applying graph-based techniques, and graph neural networks (GNNs) in particular, has been shown to be a promising solution [1-3] to the high-occupancy track reconstruction problems posed by the upcoming HL-LHC era. Simulations of this environment present noisy, heterogeneous and ambiguous data, which previous GNN-based algorithms for ATLAS ITk track reconstruction could not handle natively. We present a range of upgrades to the GNN4ITk pipeline that allow detector regions to be handled heterogeneously, ambiguous and shared nodes to be reconstructed more rigorously, and tracks-of-interest to be treated with more importance in training [4]. With these improvements, we are able to present detailed direct comparisons with existing reconstruction algorithms on a range of physics metrics, including reconstruction efficiency across particle type and pileup condition, jet reconstruction performance in dense environments, displaced tracking, and track parameter resolutions. By integrating this solution within the offline ATLAS Athena framework, we also explore a range of reconstruction chain configurations, for example by using the GNN4ITk pipeline to build regions-of-interest while using traditional techniques for track cleaning and fitting.

        [1] EPJ Web Conf. 251 (2021) 03047
        [2] Eur. Phys. J. C 81 (2021) 876
        [3] ATL-ITK-PROC-2022-006
        [4] IDTR-2023-01
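
        For readers unfamiliar with edge-based pipelines, the final track-building step common to such approaches can be sketched as follows (illustrative Python, not the GNN4ITk code):

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.csgraph import connected_components

        def tracks_from_edge_scores(n_hits, edges, scores, threshold=0.8):
            """Keep edges whose GNN score passes a threshold and label the
            connected components of the remaining graph as track candidates.
            Returns one integer track label per hit."""
            keep = scores > threshold
            src, dst = edges[keep].T
            adj = csr_matrix((np.ones(keep.sum()), (src, dst)),
                             shape=(n_hits, n_hits))
            _, labels = connected_components(adj, directed=False)
            return labels

        edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])
        scores = np.array([0.95, 0.91, 0.12, 0.97])        # hypothetical GNN outputs
        print(tracks_from_edge_scores(5, edges, scores))   # -> [0 0 0 1 1]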

        Speaker: Heberth Torres (L2I Toulouse, CNRS/IN2P3, UT3)
      • 16:30
        Neural-Network-Based Event Reconstruction for the RadMap Telescope 25m

        Detailed knowledge of the radiation environment in space is an indispensable prerequisite of any space mission in low Earth orbit or beyond. The RadMap Telescope is a compact multi-purpose radiation detector that provides near real-time monitoring of the radiation aboard crewed and uncrewed spacecrafts. A first prototype is currently deployed on the International Space Station for an in-orbit demonstration of the instrument’s capabilities.
        RadMap’s main sensor consists of a stack of scintillating-plastic fibres coupled to silicon photomultipliers. The perpendicular alignment of fibres in the stack allows the three-dimensional tracking of charged particles as well as the identification of cosmic ray ions by reconstruction of their energy-loss profiles. We implemented artificial neural networks trained on simulated detector data in the instrument’s flight computer to reconstruct the tracks from raw detector data in real time and determine the particles’ types and energies without requiring the transmission of raw data to Earth for offline reconstruction.
        In this contribution, we will describe our neural-network-based reconstruction methods and the results achieved for the different reconstruction tasks. We show the performance of different network architectures composed of fully-connected and convolutional layers and present early results using transformer networks that further improve the reconstruction performance of RadMap.

        Speaker: Luise Meyer-Hetling (Technical University of Munich)
      • 17:00
        FASER tracking system and performance 25m

        FASER, the ForwArd Search ExpeRiment, is an LHC experiment located 480 m downstream of the ATLAS interaction point along the beam collision axis. FASER is designed to detect TeV-energy neutrinos and search for new light weakly-interacting particles produced in pp collisions at the LHC. FASER has been taking collision data since the start of LHC Run 3 in July 2022. The first physics results were presented in March 2023 [1,2], including the first direct observation of collider neutrinos. FASER includes four identical tracker stations constructed from silicon microstrip detectors, which play a key role in the physics analysis. Specifically, the tracker stations are designed to separately reconstruct the pair of charged particles arising from the new particle, as well as high-energy muons from the neutrino interactions. This talk will present the performance study for track reconstruction and detector alignment using the first collision data.

        [1] https://arxiv.org/abs/2303.14185
        [2] https://cds.cern.ch/record/2853210

        Speaker: Ke Li (University of Washington (US))
    • 19:30 00:00
      Social Dinner 4h 30m Flashback café

      Flashback café

      5 All. de Brienne, 31000 Toulouse

      https://maps.app.goo.gl/QUqqxbgH1y3rGKqR7

    • 09:00 10:30
      Plenary Auditorium (Le Village)

      Auditorium

      Le Village

      Convener: Alexis Vallier (L2I Toulouse, CNRS/IN2P3, UT3)
      • 09:00
        Belle II track finding and hit filtering using precise timing information 25m

        The SuperKEKB accelerator and the Belle II experiment constitute the second-generation asymmetric-energy B-factory. SuperKEKB has recently set a new world record in instantaneous luminosity, which is anticipated to increase further during the upcoming run periods, up to $6 \times 10^{35}\,\mathrm{cm^{-2}s^{-1}}$. An increase in luminosity is challenging for the track finding, as it comes at the cost of a significant increase in the number of background hits. The Belle II experiment aims at testing the Standard Model of particle physics and searching for new physics by performing precision measurements. To achieve these physics goals, including e.g. time-dependent measurements, the track finding and fitting has to deliver tracks with high precision and efficiency. As the track reconstruction is part of the online high-level trigger system of Belle II, there are also stringent requirements on resource usage.

        The Belle II tracking system consists of 2 layers of pixelated silicon detectors, 4 layers of double-sided silicon strip detectors (SVD), and the central drift chamber. We will present the general performance and workings of the Belle II track reconstruction algorithm. In particular, we will focus on the usage of hit-time information from the silicon strip detector. The SVD provides a very precise determination of the hit time, which will be used for the first time in the Belle II track finding in the next data-taking period. These hit times are used for hit filtering, estimation of the time of collision, and the determination of the time of individual tracks. All of these are important tools to help cope with the anticipated increase in background hits caused by the increase in luminosity.
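
        A cartoon version of the hit-time filtering idea (the window and the collision-time estimator are assumptions for illustration, not the actual Belle II procedure):

        import numpy as np

        def filter_hits(hit_times, window=20.0):
            """Estimate the collision time as the median of the hit times and
            keep only hits within a window around it, rejecting out-of-time
            background hits."""
            t0 = np.median(hit_times)
            return np.abs(hit_times - t0) < window

        times = np.array([-3.0, 1.0, 4.0, -120.0, 95.0, 2.0])  # ns; 2 background hits
        print(filter_hits(times))   # -> [ True  True  True False False  True]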

        Speaker: Christian Wessel (DESY)
      • 09:30
        Improving tracking algorithms with machine learning: a case for line-segment tracking at the High Luminosity LHC 25m

        In this work, we present a study on ways that tracking algorithms can be improved with machine learning (ML). We base this study on a line-segment-based tracking (LST) algorithm that we have designed to be naturally parallelized and vectorized in order to efficiently run on modern processors. LST has been developed specifically for the Compact Muon Solenoid (CMS) Experiment at the LHC, towards the High Luminosity LHC (HL-LHC) upgrade. Moreover, we have already shown excellent efficiency and performance results as we iteratively improve LST, leveraging a full simulation of the CMS detector. At the same time, promising deep-learning-based tracking algorithms, such as Graph Neural Networks (GNNs), are being pioneered on the simplified TrackML dataset. These results suggest that parts of LST could be improved or replaced by ML. Thus, a thorough, step-by-step investigation of exactly how and where ML can be utilized, while still meeting realistic HL-LHC performance and efficiency constraints, is implemented as follows. First, a lightweight neural network is used to replace and improve upon explicitly defined track quality selections. This neural network is shown to be highly efficient and robust to displaced tracks while having little-to-no impact on the runtime of LST. These results clearly establish that ML can be used to improve LST without penalty. Next, exploratory studies of GNN track-building algorithms are described. In particular, low-level track objects from LST are considered as nodes in a graph, where edges represent higher-level objects or even entire track candidates. Then, an edge-classifier GNN is trained, and the efficiency of the resultant edge scores is compared with that of the existing LST track quality selections. These GNN studies provide insights into the practicality and performance of using more ambitious and complex ML algorithms for HL-LHC tracking at the CMS Experiment.
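
        To make the first step concrete, a lightweight quality classifier of the kind described could look like this (a PyTorch sketch; the input features and architecture are assumptions, not the LST network):

        import torch
        import torch.nn as nn

        # Hypothetical candidate features: (pT estimate, chi2, n_hits, |eta|).
        quality_net = nn.Sequential(
            nn.Linear(4, 16), nn.ReLU(),
            nn.Linear(16, 16), nn.ReLU(),
            nn.Linear(16, 1), nn.Sigmoid(),   # probability of being a real track
        )

        candidates = torch.tensor([[2.1, 1.3, 11.0, 0.8],
                                   [0.9, 7.8,  6.0, 2.3]])
        # After training, thresholding the score replaces hand-tuned cuts:
        keep = quality_net(candidates).squeeze(-1) > 0.5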

        Speaker: Jonathan Guiang (Univ. of California San Diego (US))
      • 10:00
        Novel Approaches for ML-Assisted Particle Track Reconstruction and Hit Clustering 25m

        Track reconstruction is a vital aspect of High-Energy Physics (HEP) and plays a critical role in major experiments. In this study, we delve into unexplored avenues for particle track reconstruction and hit clustering. Firstly, we enhance the algorithmic design by utilizing a "simplified simulator" (REDVID) to generate training data that is specifically designed for simplicity. We demonstrate the effectiveness of this data in guiding the development of optimal network architectures.

        Additionally, we investigate the application of image segmentation networks for this task, exploring their potential for accurate track reconstruction. Moreover, we approach the task from a different perspective by treating it as a hit sequence to track sequence translation problem. Specifically, we explore the utilization of Transformer architectures for tracking purposes. By considering this novel approach, we aim to uncover new insights and potential advancements in track reconstruction.

        Through our comprehensive exploration, we present our findings and draw conclusions based on the outcomes of our investigations. This research sheds light on previously unexplored avenues and provides valuable insights for the field of particle track reconstruction and hit clustering in HEP.

        Speaker: Uraz Odyurt (Nikhef National institute for subatomic physics (NL))
    • 10:30 11:00
      Coffee Break 30m Place du village (Le Village)

      Place du village

      Le Village

    • 11:00 12:00
      YSF Plenary Auditorium (Le Village)

      Auditorium

      Le Village

      Convener: David Rousseau (IJCLab-Orsay)
      • 11:00
        Combined track finding with GNN & CKF 15m

        The application of graph neural networks (GNN) in track reconstruction is a promising approach to cope with the challenges that will come with the HL-LHC. They show both good track-finding performance in high pile-up scenarios and are naturally parallelizable on heterogeneous compute architectures.

        Typical HEP detectors have a high resolution in the innermost layers in order to support vertex reconstruction, but lower resolution in the outer parts. GNNs mainly rely on 3D space-point information, so this can cause reduced track-finding performance in these regions.

        In this contribution we present a novel combination of GNN-based track finding with the classical Combinatorial Kalman Filter (CKF) algorithm to circumvent this issue: The GNN resolves the track candidates in the inner pixel region, where 3D space points can represent measurements very well. These candidates are then seamlessly picked up by the CKF in the outer regions, which performs well even for 1D measurements, where a space point definition is not clearly given.

        With the help of the infrastructure of the ACTS project, we will show both a proof-of-concept based on truth tracking in the pixels, which allows us to estimate the achievable improvements in duplicate and fake rates, and a dedicated GNN pipeline trained on ttbar events with pileup 200 in the OpenDataDetector.

        Speaker: Benjamin Huth (CERN)
      • 11:20
        Evaluation of Graph Sampling and Partitioning for Edge Classification and Tracking 15m

        Graph Neural Network (GNN) models have proved to perform well on the particle track finding problem, where traditional algorithms become computationally complex as the number of particles increases, limiting the overall performance. GNNs can capture complex relationships in event data represented as graphs. However, training on large graphs is challenging due to computation and GPU memory requirements. The graph representation must fit into the GPU memory to fully utilize event data when training the GNN model; otherwise, the graphs must be divided into smaller batches. We evaluate generic sampling methods that modify conventional GNN training by using a mini-batch scheme to reduce the amount of required memory and facilitate parallel processing. Although splitting graphs may seem straightforward, striking a balance between computational efficiency and preserving the essential characteristics of the graph is challenging.
        Through empirical experiments, we aim to test and tune graph sampling and partitioning methods to improve the edge classification performance for track reconstruction. Node, edge, and subgraph sampling methods are explored to divide data into smaller mini-batches for training, as sketched below. Preliminary results on the TrackML dataset show performance similar to full-batch training. These results demonstrate the effectiveness of sampling methods in edge-level GNN classification tasks and the possibility of extending training to event graphs that exceed the memory of top-of-the-line GPUs, for improved performance.
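
        The simplest such scheme, random edge mini-batching, can be sketched in a few lines (illustrative Python; practical samplers preserve more of the local graph structure):

        import numpy as np

        def edge_minibatches(edges, batch_size, seed=0):
            """Shuffle the edge list and yield mini-batches of edges together
            with the hits they touch, so each training step only needs a
            subgraph, not the full event graph, in GPU memory."""
            perm = np.random.default_rng(seed).permutation(len(edges))
            for start in range(0, len(edges), batch_size):
                batch = edges[perm[start:start + batch_size]]
                yield np.unique(batch), batch   # (nodes in batch, edge list)

        edges = np.random.default_rng(1).integers(0, 100, size=(1000, 2))
        for nodes, batch in edge_minibatches(edges, batch_size=256):
            pass  # forward/backward pass on the subgraph induced by `batch`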

        Speaker: Alina Lazar
      • 11:40
        GNN Track Reconstruction of Non-helical BSM Signatures 15m

        Accurate track reconstruction is essential for high sensitivity to beyond Standard Model (BSM) signatures. However, many BSM particles undergo interactions that produce non-helical trajectories, which are difficult to incorporate into traditional tracking techniques. One such signature is produced by "quirks", pairs of particles bound by a new, long-range confining force with a confinement scale much less than the quirk mass, leading to a stable, macroscopic flux tube that generates large oscillations between the quirk pair. The length scale of these oscillations is dependent on the confinement scale, and in general can be shorter than a micron, or longer than a kilometer. We present a version of the ML-based GNN4ITk track reconstruction pipeline, applied to a custom detector environment for quirk simulation.

        We explore the ability of an SM-trained graph neural network (GNN) to handle BSM track reconstruction out-of-the-box. Further, we explore the extent to which a pre-trained SM GNN requires fine-tuning to specific BSM signatures. Finally, we compare GNN performance with traditional tracking algorithms in the simplified detector environment, for both helical SM and non-helical BSM cases.

        Speaker: Qiyu Sha (Chinese Academy of Sciences (CN))
    • 12:00 14:00
      Lunch Break 2h Place du village (Le Village)

      Place du village

      Le Village

    • 14:00 14:40
      YSF Plenary Auditorium (Le Village)

      Auditorium

      Le Village

      Convener: Alexis Vallier (L2I Toulouse, CNRS/IN2P3, UT3)
      • 14:00
        Kalman filter for muon reconstruction in the CMS Phase-2 endcap calorimeter 15m

        At the High-Luminosity phase of the LHC (HL-LHC), experiments will be exposed to numerous (approximately 140) simultaneous proton-proton collisions. To cope with such a harsh environment, the CMS Collaboration is designing a new endcap calorimeter, referred to as the High-Granularity Calorimeter (HGCAL).
        As part of the detector upgrade, a novel reconstruction framework (TICL: The Iterative CLustering) is being developed. The framework uses a hierarchical approach to build physics objects out of energy deposits and employs a wide range of both classical and machine learning algorithms for different tasks in the reconstruction chain. Even though TICL is under continuous development, it has already shown outstanding performance in particle shower reconstruction.
        In this contribution, the development of a dedicated muon reconstruction within TICL is discussed. Such dedicated reconstruction is crucial for HGCAL, especially for inter-cell calibration and for expanding the global muon reconstruction to regions with pseudorapidity > 2.4. The Kalman Filter (KF) algorithm is particularly suited to tackle this challenge, and it has already been tested and used extensively for track reconstruction in many particle physics experiments, including CMS. The performance of the KF algorithm for muon reconstruction in HGCAL under various conditions will be presented for the first time, as well as its capabilities and limitations as a tool for inter-cell calibration.
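
        The predict/update structure of the KF can be sketched with a minimal one-dimensional toy (illustrative only, far from the HGCAL implementation):

        import numpy as np

        def kalman_1d(measurements, sigma_meas=0.5, sigma_proc=0.05):
            """Propagate a (position, slope) state from layer to layer and
            update it with each measured position, as a muon track would be
            followed through successive calorimeter layers."""
            x = np.array([measurements[0], 0.0])             # state: (pos, slope)
            P = np.diag([sigma_meas**2, 1.0])                # state covariance
            F = np.array([[1.0, 1.0], [0.0, 1.0]])           # unit-spacing transport
            H = np.array([[1.0, 0.0]])                       # measure position only
            Q = np.diag([0.0, sigma_proc**2])                # process noise
            for z in measurements[1:]:
                x, P = F @ x, F @ P @ F.T + Q                # predict
                K = P @ H.T / (H @ P @ H.T + sigma_meas**2)  # Kalman gain
                x = x + (K * (z - H @ x)).ravel()            # update state
                P = (np.eye(2) - K @ H) @ P                  # update covariance
            return x, P

        x, P = kalman_1d(np.array([0.0, 1.1, 1.9, 3.2, 3.8]))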

        Speaker: Mark Nicholas Matthewman (HEPHY)
      • 14:20
        Studying a new Primary Vertex (PV) identification algorithm within ACTS framework 15m

        We present a project proposal aimed at improving the efficiency and accuracy of Primary Vertex (PV) identification within the ‘A Common Tracking Software’ (ACTS) framework using deep learning techniques. Our objective is to establish a primary vertex finding algorithm with enhanced performance for LHC-like environments. This work is focused on finding PVs in simulated ACTS data using a hybrid approach that starts with Kernel Density Estimators (KDEs), analytically derived from the ensemble of charged-track parameters, which are then fed to a UNet/UNet++ neural network along with truth PV information. The neural network is trained using a large training dataset and the performance is evaluated on an independent test dataset. By leveraging KDEs and neural networks, our aim is to enhance pattern recognition and feature detection in High Energy Physics (HEP) data. We also plan to conduct a comparative analysis to assess the performance of the newly implemented algorithm against established results from the ACTS Adaptive Multi-Vertex Finder (AMVF) algorithm. This work aims to contribute to the ongoing development of data analysis and machine learning techniques in the field of HEP.
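
        The KDE stage of such a hybrid approach can be illustrated with a toy example (Python sketch; the bandwidth, binning and peak threshold are assumptions, not the proposal's values):

        import numpy as np
        from scipy.signal import find_peaks
        from scipy.stats import gaussian_kde

        def pv_candidates(track_z0, bandwidth=0.1):
            """Build a kernel density estimate from the tracks' longitudinal
            impact points and take its peaks as primary-vertex candidates;
            the sampled density is what a UNet would consume downstream."""
            kde = gaussian_kde(track_z0, bw_method=bandwidth)
            z = np.linspace(track_z0.min() - 1, track_z0.max() + 1, 2000)
            density = kde(z)
            peaks, _ = find_peaks(density, height=0.05 * density.max())
            return z[peaks]

        rng = np.random.default_rng(0)
        z0 = np.concatenate([rng.normal(-12, 0.05, 40), rng.normal(3, 0.05, 25)])
        print(pv_candidates(z0))   # two candidates, near z = -12 and z = 3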

        Speaker: Ms Layan AlSarayra
    • 14:40 15:40
      Plenary Auditorium (Le Village)

      Auditorium

      Le Village

      Convener: Giuseppe Cerati (Fermi National Accelerator Lab. (US))
      • 14:40
        Reconstructing charged particle track segments with a quantum-enhanced support vector machine 25m

        Reconstructing the trajectories of charged particles from the collection of hits they leave in the detectors of collider experiments, like those at the Large Hadron Collider (LHC), is a challenging and computationally intensive combinatorics problem. The ten-fold increase in the delivered luminosity at the upgraded High-Luminosity LHC will result in a very densely populated detector environment. The time taken by conventional techniques for reconstructing particle tracks scales worse than quadratically with track density. Accurately and efficiently assigning the collection of hits left in the tracking detector to the correct particles will be a computational bottleneck and has motivated the study of possible alternative approaches. This contribution presents a quantum-enhanced machine learning algorithm that uses a support vector machine (SVM) with a quantum-estimated kernel to classify sets of three hits (triplets) as either belonging or not belonging to the same particle track. The performance of the algorithm is then compared to a fully classical SVM. The quantum algorithm shows an improvement in accuracy over the classical algorithm for the innermost layers of the detector, which are expected to be important for the initial seeding step of track reconstruction. (arXiv:2212.07279)
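
        The division of labour between the quantum and classical parts can be sketched with scikit-learn's precomputed-kernel SVM; here a classical RBF kernel stands in where the quantum-estimated Gram matrix would enter (features and labels are synthetic):

        import numpy as np
        from sklearn import svm

        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(100, 9))   # 3 hits x (r, phi, z) per triplet
        y_train = (X_train.sum(axis=1) > 0).astype(int)   # stand-in labels

        def kernel(A, B):
            """Classical RBF stand-in for the quantum-estimated kernel: the
            quantum device would supply this Gram matrix of state fidelities."""
            d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
            return np.exp(-0.5 * d2)

        clf = svm.SVC(kernel="precomputed").fit(kernel(X_train, X_train), y_train)
        X_test = rng.normal(size=(10, 9))
        pred = clf.predict(kernel(X_test, X_train))   # rows: test, cols: train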

        Speaker: Marcin Jastrzebski (University College London)
      • 15:10
        Application of Quantum Annealing with Graph Neural Network Preselection in Particle Tracking at LHC 25m

        Quantum computing techniques have recently gained significant attention in the field. Compared to traditional computing techniques, quantum computing could offer potential advantages for high-energy physics experiments. Particularly in the era of the HL-LHC, effectively handling large amounts of data with modest resources is a primary concern. Particle tracking is one of the tasks predicted to be challenging for classical computers at the HL-LHC. Previous studies have demonstrated that quantum annealing (QA), an optimization technique using a quantum computer, can achieve particle tracking with an efficiency of over 90%, even in dense environments. To execute the QA process, a Quadratic Unconstrained Binary Optimization (QUBO) object is required: in order to apply the QA technique to particle tracking, hits are paired up to form a QUBO object. Recent research has implemented and tested a graph neural network (GNN) using simplified samples in the preselection stage of the QA-based tracking algorithm. The current study aims to generalize the dataset and construct a GNN to classify hit pairs within a dense environment. Furthermore, the tracking performance of the standard QA-based tracking algorithm will be compared with that of the GNN-QA tracking algorithm.
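
        A toy QUBO and a brute-force stand-in for the annealer convey the formulation (all couplings invented for the example):

        import itertools
        import numpy as np

        def solve_qubo(Q):
            """Minimise x^T Q x over binary x by enumeration -- a stand-in for
            the quantum annealer, feasible only for tiny toy problems."""
            return np.array(min(itertools.product([0, 1], repeat=Q.shape[0]),
                                key=lambda x: np.array(x) @ Q @ np.array(x)))

        # Three candidate hit-pair "segments": diagonal terms reward selecting
        # a segment; off-diagonal terms are negative for segments that align
        # into a smooth track and positive (a penalty) for conflicting ones.
        Q = np.array([[-1.0, -0.8,  0.9],
                      [-0.8, -1.0,  0.9],
                      [ 0.9,  0.9, -0.2]])
        print(solve_qubo(Q))   # -> [1 1 0]: keeps the two aligned segments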

        Speaker: Dr Wai Yuen Chan (University of Tokyo (JP))
    • 15:40 16:10
      Summary & Wrap-up Auditorium (Le Village)

      Auditorium

      Le Village

    • 09:00 11:00
      Co-located Real-time Tracking mini-workshop (13 October): see https://indico.cern.ch/e/CTD2023_RealTimeTracking Auditorium (Le Village)

      Auditorium

      Le Village

      https://indico.cern.ch/event/1290426/

    • 11:00 11:30
      Coffee Break 30m Place du village (Le Village)

      Place du village

      Le Village

    • 11:30 13:00
      Co-located Real-time Tracking mini-workshop (13 October): see https://indico.cern.ch/e/CTD2023_RealTimeTracking Auditorium (Le Village)

      Auditorium

      Le Village

      https://indico.cern.ch/event/1290426/

    • 13:00 14:00
      Lunch 1h Place du village (Le Village)

      Place du village

      Le Village