Connecting The Dots 2020

Europe/Paris
Virtual/Digital only workshop

Description

CTD2020 went virtual

Our Virtual Group photo

(Updated June 11)

The proceedings deadline has been extended to June 26, 2020.

(Updated May 2) 
Information for submitting proceedings is now available. To help with a timely review process, please submit them by June 12, 2020.

(Updated May 1)
Question/Answer sessions are done. Their recordings are now up on the website (go to the timetable, select April 28, 29, or 30, click detailed view, and finally click the paperclip to get a link to each one).

(Updated April 27) 
Discussion sessions start Tuesday. Check the timetable for those as well as the slides/recordings from last week.

(Updated April 22)
Last day of recording sessions. The first two days' worth of excellent talks can be found in the timetable. Today's talks will be up by tomorrow. Watch the Mattermost channel for more information.

(Updated April 18)
Zoom connection, participant/presenter information, and Mattermost links added.

(Updated April 7)
An initial timetable for recording sessions is now available. 

We have opened a new registration form. Please register to receive emails with connection information for the workshop (this helps us know who has participated and minimizes Zoom-bombing and related issues).

(Updated March 19)
We have decided to organize CTD2020 so as to provide sufficient time to discuss everyone's results while not having lengthy video conference meetings. The conference will proceed in 3 stages:

  1. We will organize “recording sessions” for all talks, in timeslots convenient for speakers and mostly outside the “primetime” GVA afternoon / US morning. These will be open for anyone to attend. We plan that these sessions will happen the week of April 20-24, so as not to change the original schedule of material preparation for the workshop. Anyone with an oral presentation has the same length of time for their presentation as originally scheduled (either 15 or 25 minutes - check your contribution to confirm). Poster contributions have ten minutes.
  2. All contribution recordings are made available via the CTD2020 Indico site for viewing by April 24. A platform such as Mattermost will be used to facilitate interactions as recorded presentations are viewed.  
  3. Finally, there will be 3 discussion sessions for Q/A, intended to engage contributors and attendees who have already watched the recorded presentations of interest. These are planned for April 28-30, 16h-18h GVA time.

More information about scheduling your presentation will follow. If you are not already getting email from us and would like to, please contact the local organizers.

 

CTD2020 to be virtual 

In light of the COVID-19 outbreak, and given the recent evolution in both international travel policies and local Princeton University guidance on hosting events, we have decided to cancel the in-person Connecting The Dots 2020 workshop that was planned for April 20-22. 
 
To capitalize on the interesting program of presentations proposed for this workshop, we will organize a remote/virtual conference through which we can all nonetheless share our ideas and results, ask questions, discuss all contributions to the workshop, and document our results. We will provide more guidance and information about this new format as soon as we have worked through the details with the organizing committee (1-2 weeks from now). Registration fees will be refunded via the Eventbrite service.
 
We will post updates here on the website. We apologize for the inconvenience, and we appreciate your understanding. If you have questions, please email ctd2020-loc@iris-hep.org.
 

CTD2020

The Connecting The Dots workshop series brings together experts on track reconstruction and other problems involving pattern recognition in sparsely sampled data. While the main focus will be on High Energy Physics (HEP) detectors, the Connecting The Dots workshop is intended to be inclusive across other scientific disciplines wherever similar problems or solutions arise. 

Princeton University (Princeton, New Jersey, USA) will host the 2020 edition of Connecting The Dots. It is the 6th in the series, after Berkeley 2015, Vienna 2016, LAL-Orsay 2017, Seattle 2018, and Valencia 2019.

Registration: CTD2020 will be 2.5 days of plenary sessions, starting Wednesday morning April 22 and finishing around lunch time Friday April 24. The workshop is open to everyone. More information is available on the Scientific Program and Timetable pages; however, the call for abstracts is now closed.

Workshop registration is open (link will take you to our registration page based on Eventbrite). The regular registration fee is 270 USD and 185 USD for students (either graduate or undergraduate student as of January 1, 2020). After March 6, the registration fee will increase by 25 USD (for non-students only). This fee covers local support, morning and afternoon coffee breaks, two lunches, the welcome reception and workshop dinner. 

   This workshop is partially supported by National Science Foundation grant OAC-1836650 (IRIS-HEP), the Princeton Physics Department and the Princeton Institute for Computational Science and Engineering (PICSciE).

            

Follow us @ #CTD2020

                           

Registration
Virtual conference reg form
Participants
  • Abdellah Tnourji
  • Abhijith Gandrakota
  • Adeel Akram
  • Afaf Wasili
  • Aiham Al Musalhi
  • Ajay Kumar
  • Alan Taylor
  • Alberto Annovi
  • Alexander Leopold
  • Alexander Morton
  • Andreas Salzburger
  • Andrew Groves
  • Anthony Morley
  • Aravind T S
  • Ashley Marie Parker
  • Ashutosh Kotwal
  • Ashwin Samudre
  • Avi Yagil
  • Bastian Schlag
  • Bei Wang
  • Bernadette Kolbinger
  • Carlos Chavez Barajas
  • Caterina Doglioni
  • Catherine Biscarat
  • Cenk Tuysuz
  • Chang-Seong Moon
  • Christian Wessel
  • Christoph Hasse
  • Claire Antel
  • Claudia Gemme
  • Cristina Fernandez Bedoya
  • Daniel Murnane
  • David Brown
  • David Chamont
  • David Lange
  • David Rohr
  • David Rousseau
  • Dejan Golubovic
  • Dimitri Bourilkov
  • Dinyar Rabady
  • Dmitry Emeliyanov
  • Dongsung Bae
  • Duc Hoang
  • Edson Carquin Lopez
  • Elizabeth Sexton-Kennedy
  • Ema Puljak
  • Emery Nibigira
  • Emilio Meschi
  • Eric Ballabene
  • Erica Brondolin
  • Fabian Klimpel
  • Fabrizio Palla
  • Fahmi Ibrahim
  • Federico Lazzari
  • Florian Reiss
  • Frank Winklmeier
  • Frank-Dieter Gaede
  • François Drielsma
  • Gage DeZoort
  • Georgiana Mania
  • Gianantonio Pezzullo
  • Gianluca Lamanna
  • Giovanni Punzi
  • Giulia Tuci
  • Giuseppe Bagliesi
  • Giuseppe Cerati
  • Gordon Watts
  • Gregory Vereshchagin
  • Hannes Sakulin
  • Heather Gray
  • Helene Guerin
  • Hevjin Yarar
  • Huilin Qu
  • Imahn Shekhzadeh
  • Isobel Ojalvo
  • Ivan Amos Cali
  • Ivan Vila Alvarez
  • Jahred Adelman
  • Jaime Leon Holgado
  • James Kowalkowski
  • Jan Henrik Müller
  • Jan Stark
  • Jan-Frederik Schulte
  • Javier Mauricio Duarte
  • Jean-Roch Vlimant
  • Jenny Regina
  • Jeremy Andrea
  • Jeremy Hewes
  • Jessica Leveque
  • Jiri Masik
  • Joe Osborn
  • John Baines
  • John Haggerty
  • Jonathan Shlomi
  • Junghwan Goh
  • Kajari Mazumdar
  • Karla Pena
  • Karolos Potamianos
  • Katherine Pachal
  • Kazuhiro Terao
  • Kevin Frank Einsweiler
  • Kim Albertsson
  • Koji Terashi
  • Kristiane Novotny
  • Lauren Melissa Osojnak
  • Laurent Basara
  • Leonardo Cristella
  • Liliana Teodorescu
  • Lothar A.T. Bauerdick
  • Louis Henry
  • Louise Skinnari
  • Loukas Gouskos
  • Luca Federici
  • Luca Pontisso
  • Luiza Adelina Ciucu
  • Mandakini Patil
  • Manuel Guth
  • Marcin Wolter
  • Marco Petruzzo
  • Marian Stahl
  • Mark Neubauer
  • Markus Elsing
  • Matthew Basso
  • Matthias Danninger
  • Maurice Garcia-Sciveres
  • Maximilian Emanuel Goblirsch-Kolb
  • Mia Tosi
  • Miaoyuan Liu
  • Michael David Sokoloff
  • Michel De Cian
  • Mohsen Ghazi
  • Moritz Kiehn
  • Muhammad Ibrahim Abdulhamid
  • Murtaza Safdari
  • Mykola Khandoga
  • Nan Lu
  • Nathan Simpson
  • Nicholas Choma
  • Nick Manganelli
  • Nicola Neri
  • Nicole Michelle Hartman
  • Nikos Konstantinidis
  • Nisha Lad
  • Noemi Calace
  • Nora Emilia Pettersson
  • Panchali Nag
  • Pantelis Kontaxakis
  • Paolo Calafiura
  • Paul Gessinger-Befurt
  • Pawel Bruckman de Renstrom
  • Peter Elmer
  • Peter Hristov
  • Petr Balek
  • Phillip Marshall
  • Qing Lin
  • Rachel Bartek
  • Rachid Mazini
  • Rafael Coelho Lopes De Sa
  • Rafael Teixeira De Lima
  • Rahmat Rahmat
  • Ran Itay
  • Rebeca Gonzalez Suarez
  • Ricardo Wölker
  • Riccardo Fantechi
  • Robin Newhouse
  • Rocky Bala Garg
  • Rui Zhang
  • Ryunosuke O'Neil
  • Sagar Addepalli
  • Salvador Marti Garcia
  • Sang Eon Park
  • Sanmay Ganguly
  • Saumya Chaturvedi
  • Savannah Jennifer Thais
  • Sean Hughes
  • Sebastian Skambraks
  • Sebastien Rettie
  • Shaun Roe
  • Simon Kurz
  • Sitong An
  • Siyuan Yan
  • Slava Krutelyov
  • Stamatis Poulios
  • Stephen Nicholas Swatman
  • Steven Farrell
  • Thomas Boettcher
  • Thomas Pöschl
  • Thomas Strebler
  • Tim Adye
  • Tobias Stockmanns
  • Todd Adams
  • Torre Wenaus
  • Valentin Volkl
  • Valentina Cairo
  • Valerio Bertacchi
  • Viktor Rodin
  • Vladimir Gligorov
  • Waleed Esmail
  • William Kalderon
  • William Patrick Mccormack
  • Wolfram Erdmann
  • Xiangyang Ju
  • Xiaocong Ai
  • Yutaro Iiyama
  • Zhenbin Wu
  • Zijun Xu
Local organizers
    • 13:00 - 16:30
      Recording sessions: Recording session 1
      Conveners: Michel De Cian (EPFL - Ecole Polytechnique Federale Lausanne (CH)), Vladimir Gligorov (Centre National de la Recherche Scientifique (FR))
      • 13:00
        Hashing and similarity learning for track reconstruction for the HL-LHC ATLAS detector 15m

        At the High Luminosity Large Hadron Collider (HL-LHC), up to 200 proton-proton collisions happen during a single bunch crossing, leading on average to tens of thousands of particles emerging from the interaction region. The CPU time of traditional approaches that construct hit combinations will grow exponentially as the number of simultaneous collisions increases at the HL-LHC, posing a major challenge. A framework for similarity hashing and learning for track reconstruction will be described, in which multiple small regions of the detector, referred to as buckets, are reconstructed in parallel within the ATLAS simulation framework. New developments based on metric learning for the hashing optimisation will be introduced, and new results obtained with both the TrackML dataset [1] and ATLAS simulation will be presented.

        [1] Rousseau, D., et al. "The TrackML challenge." 2018.
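        As a schematic illustration of the bucketing idea (hypothetical binning, plain Python rather than the ATLAS metric-learning hashing), hits could be grouped by a coarse hash of their angular coordinates and each bucket handed to an independent, parallel reconstruction step:

        import numpy as np
        from collections import defaultdict

        def bucket_hits(hits, n_eta=50, n_phi=64):
            """Group hits into coarse (eta, phi) buckets so that each small
            detector region can be reconstructed in parallel."""
            buckets = defaultdict(list)
            for i, (eta, phi) in enumerate(hits):
                key = (int((eta + 4.0) / 8.0 * n_eta),                    # eta bin
                       int((phi % (2 * np.pi)) / (2 * np.pi) * n_phi))    # phi bin
                buckets[key].append(i)
            return buckets

        # toy usage: 10,000 random hits; each bucket is then reconstructed independently
        hits = np.column_stack([np.random.uniform(-4, 4, 10000),
                                np.random.uniform(0, 2 * np.pi, 10000)])
        for key, indices in bucket_hits(hits).items():
            pass  # run local pattern recognition on hits[indices]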

        Speaker: Moritz Kiehn (Universite de Geneve (CH))
      • 13:20
        The Hybrid Seeding at LHCb 10m

        Scintillating-fibre detectors are high-efficiency, fast-readout tracking devices employed throughout high-energy particle physics, for instance the SciFi tracker in the LHCb upgrade. The hybrid seeding is a stand-alone track reconstruction algorithm for the SciFi. The algorithm is designed iteratively, with higher-momentum tracks, which are easier to reconstruct, treated first. With the addition of topological information and knowledge of an effective track model, the algorithm is able to deal with hit inefficiency and the tight computing constraints of the upgrade, while delivering consistently high reconstruction efficiencies across the large spectrum of tracks corresponding to the diverse physics programme of the experiment. This programme can be extended in intriguing ways by the study of very displaced decay vertices, corresponding to dark-matter candidates or weakly decaying particles, which can only be studied using such a stand-alone algorithm.

        Speaker: Louis Henry (Instituto de Física Corpuscular (IFIC))
      • 13:35
        Fast parallel Primary Vertex reconstruction for the LHCb Upgrade 15m

        The physics program of the LHCb experiment depends on an efficient and precise reconstruction of the primary vertices produced by proton-proton collisions.
        The LHCb Upgrade detector, starting to take data in 2021 with a fully software-based trigger, requires an online reconstruction at a rate of 30 MHz, necessitating fast vertex finding algorithms.
        We present a new approach to vertex reconstruction and its parallelized implementation on x86 and GPU architectures.

        Speaker: Florian Reiss (Centre National de la Recherche Scientifique (FR))
      • 13:55
        A 30 MHz software trigger and reconstruction for the LHCb upgrade 25m

        The first LHCb upgrade will take data at an instantaneous luminosity of $2\times10^{33}~\mathrm{cm}^{-2}\mathrm{s}^{-1}$ starting in 2021. Due to the high rate of beauty and charm signals, LHCb has chosen as its baseline to read out the entire detector into a software trigger running on commodity x86 hardware at the LHC collision frequency of 30 MHz, where a full offline-quality reconstruction will be performed. In this talk we present the challenges of triggering in the MHz signal era. We pay particular attention to the need for flexibility in the selection and reconstruction of events without sacrificing performance.

        Speaker: Louis Henry (Instituto de Física Corpuscular (IFIC))
      • 14:25
        Using an Optical Processing Unit for tracking and calorimetry at the LHC 25m

        The High Luminosity Large Hadron Collider is expected to have a readout rate roughly 10 times higher than at present, significantly increasing the required computational load. It is therefore essential to explore new hardware paradigms. In this work we consider the Optical Processing Unit (OPU) from LightOn, which computes random matrix multiplications on large datasets in an analog, fast and economical way, enabling faster machine learning on data of reduced dimension. We consider two case studies.

        1) “Event classification”: high-energy proton collisions at the Large Hadron Collider have been simulated, each collision being recorded as an image representing the energy flux in the detector. Two classes of events have been simulated: “signal” events are created by a hypothetical supersymmetric particle, and “background” events by known processes. The task is to train a classifier to separate the signal from the background. Several techniques using the OPU will be presented and compared with more classical particle physics approaches.

        2) “Tracking”: high-energy proton collisions at the LHC yield billions of records, each with typically 100,000 3D points corresponding to the trajectories of 10,000 particles. Using two datasets from previous tracking challenges, we investigate the OPU's potential to solve similar or related problems in high-energy physics, in terms of dimensionality reduction and data representation, and present preliminary results.
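        The core OPU operation referred to above is a large fixed random projection followed by a non-linearity. A minimal CPU emulation of that dimensionality-reduction step (hypothetical sizes, plain NumPy rather than the LightOn hardware interface) is:

        import numpy as np

        rng = np.random.default_rng(0)

        def opu_like_features(images, n_random_features=2000):
            """Emulate the OPU mapping y = |R x|^2: flatten each image, multiply by a
            fixed complex random matrix and take the squared modulus, producing a
            fixed-size random-feature vector per event."""
            x = images.reshape(len(images), -1)
            R = (rng.standard_normal((x.shape[1], n_random_features))
                 + 1j * rng.standard_normal((x.shape[1], n_random_features)))
            return np.abs(x @ R) ** 2

        # toy usage: 100 "energy-flux images" of 64x64 pixels reduced to 2000 features each,
        # which can then be passed to a simple (e.g. linear) signal/background classifier
        features = opu_like_features(rng.random((100, 64, 64)))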

        Speaker: Dr Laurent Basara (LAL/LRI, Université Paris Saclay)
      • 14:55
        A Quantum Graph Network Approach to Particle Track Reconstruction 15m

        An unprecedented increase in the complexity and scale of data is expected in the computation necessary for the tracking detectors of the High Luminosity Large Hadron Collider (HL-LHC) experiments. While the currently used Kalman-filter-based algorithms are reaching their limits, in terms of ambiguities from the increasing number of simultaneous collisions, occupancy, and scalability (worse than quadratic), a variety of machine learning approaches to particle track reconstruction are being explored. It has been demonstrated previously by HEP.TrkX using TrackML datasets that graph neural networks, which process events as a graph connecting track measurements, are a promising solution: they can reduce the combinatorial background to a manageable amount and scale to a computationally reasonable size. In previous work, we presented a first application of quantum computing to graph neural networks for track reconstruction of particles. We aim to leverage the capability of quantum computing to evaluate a very large number of states simultaneously and thus to effectively search a large parameter space. As the next step, in this paper we present an improved model with an iterative approach to overcome the low-accuracy convergence of the initial, oversimplified Tree Tensor Network (TTN) model.

        Speaker: Mr Cenk Tuysuz (Middle East Technical University (TR))
      • 15:15
        An updated hybrid deep learning algorithm for identifying and locating primary vertices 10m

        In the transition to Run 3 in 2021, LHCb will undergo a major luminosity upgrade, going from 1.1 to 5.6 expected visible Primary Vertices (PVs) per event, and it will adopt a purely software trigger. We present an improved hybrid algorithm for vertexing in the upgrade conditions. We use a custom kernel to transform the sparse 3D space of hits and tracks into a dense 1D dataset, and then apply Deep Learning techniques to find PV locations, using proxy distributions to encode the truth in training data. Last year we reported that training networks on our kernels using several Convolutional Neural Network layers yielded better than 90% efficiency with no more than 0.2 False Positives (FPs) per event. Modifying several elements of the algorithm, we now achieve better than 94% efficiency with a significantly lower FP rate. Whereas our studies to date have been made using toy Monte Carlo (MC), we are now beginning to study KDEs produced from complete LHCb Run 3 MC data, including full tracking in the vertex locator rather than proto-tracking. We anticipate showing results from these studies as well.
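        A rough sketch of the kernel idea described above, assuming a toy Gaussian kernel along the beam axis and an illustrative 1D CNN (not the LHCb KDE or network configuration):

        import numpy as np
        import torch
        import torch.nn as nn

        def z_kde(track_z, track_sigma_z, z_min=-100.0, z_max=100.0, n_bins=4000):
            """Fill a dense 1D histogram along z: each track contributes a Gaussian
            centred on its z estimate, with a width given by its uncertainty."""
            centres = np.linspace(z_min, z_max, n_bins)
            kde = np.zeros(n_bins)
            for z, sigma in zip(track_z, track_sigma_z):
                kde += np.exp(-0.5 * ((centres - z) / sigma) ** 2) / sigma
            return kde

        # A small 1D CNN mapping the KDE to a per-bin primary-vertex probability.
        model = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=25, padding=12), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=15, padding=7), nn.Sigmoid(),
        )

        kde = z_kde(np.random.normal(0.0, 50.0, 200), np.full(200, 0.5))
        prob = model(torch.tensor(kde, dtype=torch.float32).view(1, 1, -1))  # peaks ~ PVs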

        Speaker: Marian Stahl (University of Cincinnati (US))
      • 15:30
        Allen: A high level trigger on GPUs for LHCb 25m

        The upgraded LHCb detector will begin taking data in 2021 with a triggerless readout system. As a result the full 30 MHz inelastic collision rate will be processed using a software-only High Level Trigger (HLT). This will allow for dramatic improvements in LHCb's ability to study beauty and charm hadron decays, but also presents an extraordinary technical challenge and has prompted the study of alternative hardware technologies. In this talk I will discuss the Allen project, a framework for implementing LHCb's first stage HLT (HLT1) on GPUs. I will focus on the development and performance of the full HLT1 reconstruction sequence executed on GPUs, including reconstruction algorithms developed and optimized specifically for many-core architectures.

        Speaker: Thomas Julian Boettcher (Massachusetts Inst. of Technology (US))
    • 18:59 - 19:00
      Workshop introduction 1m
    • 19:00 - 22:00
      Recording sessions: Recording session 2
      Convener: Matthias Danninger (Simon Fraser University (CA))
      • 19:00
        The Track finder algorithm for the Trigger System of the Mu2e experiment at Fermilab 25m

        The Mu2e experiment at Fermilab searches for the charged-lepton flavor violating conversion of a negative $\mu$ into an $e^-$ in the field of an Al nucleus. The Mu2e goal is to improve by four orders of magnitude the current best limit on the search sensitivity. The main detector consists of a 3.2 m long straw-tube tracker and a crystal calorimeter housed in a 1 T superconducting solenoid.

        Although the topology of the signal from the $\mu$-conversion, represented by a $\sim105$ MeV/c $e^-$, is extremely clean, efficient reconstruction and identification of these $e^-$ tracks is difficult due to the presence of spurious hits, low-energy delta $e^-$ and other lower-momentum $e^-$ tracks ($p\in [40,60]$ MeV/c) generated in $\mu$ Decay-In-Orbit processes, $N+ \mu^- \rightarrow N+ e^- + \nu_{\mu} + \bar{\nu}_{e}$, which happen in all parts of the apparatus where $\mu^-$ get stopped.

        The data acquisition (DAQ) system consists of continuous streaming of the data from the readout controller boards to the DAQ server, where we perform the online reconstruction. The trigger system is required to provide:

        • signal efficiency larger than 90%;

        • trigger rate of a few kHz - equivalent to $\sim 7$ PB/year;

        • processing time of no more than 4 ms/event.

        We present the "heart" of the trigger system, which is based on a multi-staged online track reconstruction algorithm. We use two different pattern recognition algorithms, followed by a $\chi^2$-based track fit performed on the hit positions without resolving the left-right ambiguity or applying any energy loss correction (no Kalman filter). We perform the event selection by applying a series of filters at each stage of the track reconstruction.

        Preliminary studies show that the online track reconstruction will deliver a trigger rate of a few hundred Hz with a rate of fake tracks below $\sim 10$ Hz, while keeping the signal efficiency larger than 96%. We also discuss the expected timing performance, which has been measured using a prototype of our DAQ system at Fermilab.

        Speaker: Gianantonio Pezzullo (Yale University)
      • 19:30
        Track Clustering with a Quantum Annealer for Primary Vertexing 25m

        Clustering of charged particle tracks along the beam axis is the first step in reconstructing the positions of hadronic interactions, also known as primary vertices, at hadron collider experiments. We demonstrate the use of a 2036 qubit D-Wave quantum annealer to perform track clustering in a limited capacity on artificial events where the positions of primary vertices and tracks resemble those measured by the CMS experiment at the LHC. The algorithm is not a classical-quantum hybrid but relies entirely on quantum annealing, thus allowing us to benchmark the performance of state-of-the-art quantum annealers against simulated annealing on a commercial classical processor. An intriguing quantum advantage is noted for low numbers of primary vertices. Accelerating the execution of the algorithm by modifying annealing schedules and setting inter-qubit entanglements by heuristic methods are discussed. We discuss extensions of this clustering algorithm to multi-dimensional problems commonly encountered in high energy physics and other fields. Implementations of the algorithm on the 5000+ qubit Advantage processor are anticipated.
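        The clustering can be viewed as a QUBO over binary variables x[i, k] = 1 if track i is assigned to vertex candidate k. A toy version of such an energy function, minimised here with plain simulated annealing on a CPU rather than a D-Wave device (couplings and penalty weights are illustrative):

        import numpy as np

        rng = np.random.default_rng(1)
        z = np.sort(rng.normal(0.0, 5.0, 16))          # toy track z0 positions [mm]
        n_tracks, n_vertices = len(z), 3

        def energy(x):
            """QUBO-style cost: penalise assigning distant tracks to the same vertex
            and penalise tracks not assigned to exactly one vertex."""
            e = 0.0
            for k in range(n_vertices):
                idx = np.where(x[:, k] == 1)[0]
                for a in range(len(idx)):
                    for b in range(a + 1, len(idx)):
                        e += abs(z[idx[a]] - z[idx[b]])      # same-vertex spread
            e += 10.0 * np.sum((x.sum(axis=1) - 1) ** 2)     # one-hot assignment constraint
            return e

        # single-bit-flip simulated annealing with a linear temperature schedule
        x = rng.integers(0, 2, (n_tracks, n_vertices))
        e_cur = energy(x)
        for step in range(5000):
            i, k = rng.integers(n_tracks), rng.integers(n_vertices)
            trial = x.copy()
            trial[i, k] ^= 1
            e_new = energy(trial)
            t = 5.0 * (1.0 - step / 5000) + 1e-3
            if e_new < e_cur or rng.random() < np.exp((e_cur - e_new) / t):
                x, e_cur = trial, e_new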

        Speaker: Dr Souvik Das (Purdue University (US))
      • 20:00
        A muon tracking algorithm for Level 1 trigger in the CMS barrel muon chambers during HL-LHC 10m

        The electronics of the CMS (Compact Muon Solenoid) DT (Drift Tubes) chambers will need to be replaced for HL-LHC (High Luminosity Large Hadron Collider) operation due to the increase of occupancy and trigger rates in the detector, which cannot be sustained by the present system. A system is being designed that will asynchronously forward the totality of the chamber signals to the control room, at full resolution. A new backend system will be in charge of building the trigger primitives of each chamber out of this asynchronous information, aiming at resolutions comparable to those the offline High Level Trigger can obtain today. In this way, the new system will provide improved functionality with respect to the present system and improve resilience to potential aging. An algorithm for trigger primitive generation that will run in this new backend system has been developed and implemented in firmware. The performance of this algorithm has been validated through different methods, from a software emulation approach to hardware implementation tests. The performance obtained is very good, with timing and position resolutions close to the ultimate performance of the DT chamber system. One important validation step has included the implementation of this algorithm in a prototype chain of the HL-LHC electronics, which has been operated with real DT chambers during cosmic data-taking campaigns. The new trigger primitive generation has been implemented in the so-called AB7, spare uTCA boards from the present DT system which host Xilinx Virtex 7 FPGAs. The performance of this prototyping system has been verified and will be presented in this contribution, demonstrating the suitability of the design for the expected functionality during HL-LHC.

        Speaker: Jaime Leon Holgado (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas)
      • 20:15
        Track Reconstruction on Free Streaming Data at PANDA 25m

        High event rates of up to 20 MHz and continuous detector readout make event filtering at PANDA a challenging task. In addition, no hardware-based event selection will be possible due to the similarities between signal and background. PANDA is among a new generation of experiments utilizing fully software-based event filtering. Currently, detector hits are often pre-sorted into events by a hardware filter before being passed to the software-based event filter with track reconstruction and event building. Track reconstruction will play a key part in the online filtering at PANDA, where it will be used together with the event building.

        To ensure the quality of the track reconstruction, the existing quality assurance task has been modified to cope with free-streaming data. This talk will address the candidates for online track reconstruction algorithms for free-streaming data, based e.g. on the Cellular Automaton. The quality assurance procedure and the results from the tracking at different event rates and levels of event mixing are presented.

        Speaker: Jenny Regina (Uppsala University)
      • 20:45
        Minimum Pt Track Reconstruction in ATLAS 15m

        In the most recent year of data-taking with the ATLAS detector at the Large Hadron Collider (LHC), the minimum $p_T$ of reconstructed tracks was 500 MeV. This bound was set to reduce the amount of combinatorial problem solving required and to save disk space, which is a challenge in high-pileup environments. However, most proton-proton collisions at the LHC result in a large number of soft particles. While ATLAS does have two frameworks in place for performing low-$p_T$ tracking in low-pileup runs, for some analyses the reconstruction of these soft particles in high pileup can provide important information. This talk will explain a method of tracking in high pileup where low-$p_T$ tracks are reconstructed in a second tracking pass after the default tracking, and will elaborate on problems such as seed optimization, hit selection, and offline track selection requirements. Additionally, in order to prevent a large increase in the per-event reconstruction time, tracks are only reconstructed within a "region of interest", which is defined event-by-event. This method of tracking has been developed and tested by a team searching for photon-induced WW production at the LHC. Other analyses should be able to use this tracking method too; for example, charm tagging can be improved by reconstructing low-$p_T$ particles.

        Speaker: William Patrick Mccormack (Lawrence Berkeley National Lab. (US))
      • 21:05
        Parallelizable Track Pattern Recognition in High-Luminosity LHC 25m

        The high instantaneous luminosity conditions in the High Luminosity Large Hadron Collider (HL-LHC) pose major computational challenges for the collider experiments. One of the most computationally challenging components is the reconstruction of charged-particle tracks. In order to operate efficiently under these conditions, it is crucial that we explore methods or implementations of charged-particle track reconstruction that are newer and faster than what is used today. The Kalman-filter-based track pattern recognition methods currently used in the LHC experiments are inherently sequential and iterative, and therefore cannot easily be accelerated through parallelization or vectorization on modern processors, such as graphics processing units (GPUs) or multicore processors. There have been successful, though labor-intensive, efforts to vectorize Kalman-filter-based track pattern recognition on modern processors. In this work, we instead start with a segment-linking-based algorithm that can be naturally parallelized and vectorized and is expected to run efficiently on modern processors. We have established a preliminary segment-linking-based track pattern recognition for the CMS experiment using the Phase-II outer tracker, and our findings and their implications are presented here. This work builds on experience gained from a prototype of a similar approach, studied in a different tracker layout geometry based on ideal detector simulation and previously presented at CHEP2016.

        Speaker: Philip Chang (Univ. of California San Diego (US))
    • 13:00 - 16:40
      Recording sessions: Recording session 3
      Convener: David Rousseau (LAL-Orsay, FR)
      • 13:00
        Level-1 Track Finding at CMS for the HL-LHC 25m

        The success of the CMS physics program at the HL-LHC requires maintaining sufficiently low trigger thresholds to select processes at the electroweak scale. With an expected average of 200 pileup interactions, the inclusion of tracking in the L1 trigger is critical to achieving this goal while maintaining manageable trigger rates. A 40 MHz silicon-based track trigger on the scale of the CMS detector has never before been built; it is a novel handle which, in addition to controlling trigger rates, can enable entirely new physics studies.

        The main challenges of reconstructing tracks in the L1 trigger are the large data throughput at 40 MHz and the need for a trigger decision within 12.5 µs. To address these challenges, the CMS outer tracker for HL-LHC uses modules with closely-spaced silicon sensors to read out only hits compatible with charged particles above 2-3 GeV ("stubs"). These are used in the back-end L1 track finding system, implemented based on commercially available FPGA technology. The ever-increasing capability of modern FPGAs combined with their programming flexibility is ideal for a fast track finding algorithm. The proposed algorithm forms track seeds ("tracklets") from pairs of stubs in adjacent layers of the outer tracker. These seeds provide roads where consistent stubs are included to form track candidates. Track candidates sharing multiple stubs are combined prior to being fitted. A Kalman Filter track fitting algorithm is employed to identify the final track candidates and determine the track parameters. The system is divided into nine sectors in the r-phi plane, where the processing for each sector is performed by a dedicated track finding board.

        This presentation will discuss the CMS L1 track finding algorithm and its implementation, present simulation studies of the estimated performance, and discuss the development of hardware demonstrators.
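        A toy illustration of the tracklet seeding step described above: two stubs in adjacent layers fix a curvature (hence a transverse-momentum estimate) and an initial azimuth, defining the road in which further stubs are collected (simplified circular-track geometry, illustrative constants):

        import math

        B_FIELD_T = 3.8              # CMS solenoid field [T]
        K = 0.3 * B_FIELD_T          # pT [GeV] ~ K * R_curvature [m]

        def tracklet_seed(stub_inner, stub_outer):
            """Form a seed from two stubs given as (r [m], phi [rad]) in adjacent layers,
            estimating 1/(2R), pT and the azimuth phi0 at the beamline."""
            r1, phi1 = stub_inner
            r2, phi2 = stub_outer
            inv_2r = (phi2 - phi1) / (r2 - r1)       # small-angle: phi(r) ~ phi0 + r/(2R)
            pt = K / abs(2.0 * inv_2r) if inv_2r != 0 else math.inf
            phi0 = phi1 - r1 * inv_2r
            return {"pt": pt, "phi0": phi0, "inv_2r": inv_2r}

        # toy usage: the road phi(r) = phi0 + inv_2r * r is then searched in the other layers
        print(tracklet_seed((0.25, 0.100), (0.35, 0.104)))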

        Speaker: Louise Skinnari (Northeastern University (US))
      • 13:30
        Displaced Event Classification Using Graph Networks 15m

        A highly interesting, but difficult to trigger on, signature for Beyond Standard Model searches is massive long-lived particles decaying inside the detector volume. Current detectors and detection methods are optimised for detecting prompt decays and rely on indirect, additional energetic signatures for online selection of displaced events during data-taking. Improving the trigger-level detection efficiency for displaced events would strongly increase the reach of Beyond Standard Model searches.

        In this work the problem of detecting the presence of displaced vertices in a $\chi^+\chi^- \rightarrow W^+W^-\chi^0\chi^0$ process is studied both with and without realistic pileup in an ATLAS-like detector setting. Two implementations working on hit-level data are discussed: a baseline deep neural network is compared to a Graph Network implementation based on the message-passing framework. Particular focus is put on the latter due to its capability to handle variable-length and relational data.

        Tentative results indicate an excellent performance of the Graph Network under no-pileup conditions. The abstract will be updated to reflect continuing progress.

        Speaker: Kim Albertsson (Lulea University of Technology (SE))
      • 13:50
        Tracking performance with the HL-LHC ATLAS detector 10m

        During the High-Luminosity Phase 2 of LHC, scheduled to start in 2026, the ATLAS detector is expected to collect more than 3 ab$^{-1}$ of data at an instantaneous luminosity reaching up to $7.5\times10^{34}~\mathrm{cm}^{-2}\mathrm{s}^{-1}$, corresponding to about 200 inelastic proton-proton collisions per bunch crossing. In order to cope with the large radiation doses and to maintain the physics performance reached during Phase 1, the current ATLAS Inner Detector will be replaced with a new all-silicon Inner Tracker (ITk) and completed with a new High-Granularity Timing Detector (HGTD) in the forward region. In this talk, the latest results on the expected ITk tracking performance and HGTD timing reconstruction will be presented, including their impact on physics object reconstruction.

        Speaker: Zachary Michael Schillaci (Brandeis University (US))
      • 14:05
        Performance of Belle II tracking on collision data 25m

        The tracking system of Belle II consists of a silicon vertex detector (VXD) and a cylindrical drift chamber (CDC), both operating in a magnetic field created by the main solenoid of 1.5 T and the final focusing magnets. The tracking algorithms employed at Belle II are based on standalone reconstruction in the SVD and CDC as well as on a combination of the two approaches; they employ a number of machine learning methods for optimal performance. The track reconstruction is tested on the collision data collected in 2018 and 2019. The first experience with data introduced additional challenges, which are mitigated by the introduction of new algorithms such as track finding seeded by calorimeter clusters and CDC cross-talk filtering.

        Speaker: Simon Thomas Kurz
      • 14:35
        Progress towards a 4D fast tracking pixel detector 25m

        We present recent results of the R&D for a novel 4D fast tracking system based on rad-hard pixel sensors and front-end electronics capable of reconstructing four-dimensional particle trajectories in real time. Particularly relevant results are: i) a timing resolution of 30 ps for 55-micron-pitch 3D silicon pixel sensors, measured in a recent beam test; ii) the design and production of a front-end electronics prototype chip; iii) a stub-based fast tracking algorithm implemented and tested in a commercial FPGA using a pipelined architecture. Tracking performance for a 4D pixel detector for a future upgrade of the LHCb experiment will also be discussed.

        Speaker: Marco Petruzzo (Università degli Studi e INFN Milano (IT))
      • 15:05
        Fast tracking for the HL-LHC ATLAS detector 25m

        During the High-Luminosity Phase 2 of LHC, up to 200 simultaneous inelastic proton-proton collisions per bunch crossing are expected. This poses a significant challenge for the track reconstruction and its associated computing requirements due to the unprecedented number of particle hits in the tracker system. In order to tackle this issue, dedicated algorithms have been developed to speed up the track reconstruction and to further optimise the default algorithms. The performance of this Fast Track Reconstruction will be presented and compared to that of the default Phase 2 track reconstruction.

        Speaker: Fabian Klimpel (Max-Planck-Institut fur Physik (DE))
      • 15:35
        Exploring (Quantum) Track Reconstruction Algorithms for non-HEP applications 20m

        The expected increase in simultaneous collisions creates a challenge for accurate particle track reconstruction in High Luminosity LHC experiments. Similar challenges arise in non-HEP trajectory reconstruction use-cases, where tracking and track evaluation algorithms are used. High occupancy, track density, complexity and fast growth therefore exponentially increase the demands on algorithms in terms of time, memory and computing resources.
        While traditionally Kalman filters (or even simpler algorithms) are used, they are expected to scale worse than quadratically, strongly increasing the total processing time. Graph Neural Networks (GNN) are currently being explored for HEP as well as non-HEP trajectory reconstruction applications. Quantum computers, with their ability to evaluate a very large number of states simultaneously, are therefore good candidates for such complex searches in large parameter and graph spaces.
        In this paper we present our work on implementing a quantum-based graph tracking machine learning algorithm to evaluate Traffic collision avoidance system (TCAS) probabilities for commercial flights.

        Speaker: Kristiane Novotny (GluoNNet)
    • 19:00 - 22:00
      Recording sessions: Recording session 4
      Convener: Maurice Garcia-Sciveres (Lawrence Berkeley National Lab. (US))
      • 19:00
        Application of the Deep Sets architecture to track-based flavour tagging with the ATLAS detector 25m

        Flavour Tagging is a major client for tracking in particle physics experiments at high energy colliders, where it is used to identify the experimental signatures of heavy flavour production. Among other features, charm and beauty hadron decays produce jets containing several tracks with large impact parameter. This work introduces a new architecture for Flavour Tagging, based on Deep Sets, which models the jet as a set of tracks. Such an approach is an evolution with respect to the Recurrent Neural Network (RNN) currently adopted in the ATLAS experiment, which treats track collections as a sequence. The Deep Sets algorithm uses track impact parameters and kinematics within a permutation-invariant architecture, leading to a significant decrease in training and evaluation time and thus allowing for much faster turn-around times for optimisation. We compare the Deep Sets algorithm with current ATLAS Flavour Tagging benchmarks and provide an outlook on novel methods to explore and interpret the information the network has actually learnt in the training process.
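        A minimal permutation-invariant Deep Sets jet tagger of the kind described above (a per-track network phi, a sum over tracks, then a jet-level network rho); layer sizes and inputs are illustrative, not the ATLAS configuration:

        import torch
        import torch.nn as nn

        class DeepSetsTagger(nn.Module):
            """phi acts on each track independently; summing over tracks makes the jet
            representation invariant to track ordering; rho classifies the jet."""
            def __init__(self, n_track_feats=6, hidden=64, n_classes=3):
                super().__init__()
                self.phi = nn.Sequential(nn.Linear(n_track_feats, hidden), nn.ReLU(),
                                         nn.Linear(hidden, hidden), nn.ReLU())
                self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                         nn.Linear(hidden, n_classes))

            def forward(self, tracks, mask):
                # tracks: (batch, max_tracks, n_track_feats); mask flags real (non-padded) tracks
                per_track = self.phi(tracks) * mask.unsqueeze(-1)
                return self.rho(per_track.sum(dim=1))

        # toy usage: 8 jets with up to 20 tracks each (impact parameters + kinematics)
        tracks = torch.randn(8, 20, 6)
        mask = (torch.rand(8, 20) > 0.3).float()
        logits = DeepSetsTagger()(tracks, mask)      # (8, 3): light / charm / beauty scores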

        Speaker: Nicole Michelle Hartman (SLAC National Accelerator Laboratory (US))
      • 19:30
        Graph Neural Networks for Track Finding 25m

        To address the unprecedented scale of HL-LHC data, the Exa.TrkX (previously HEP.TrkX) project has been investigating a variety of machine learning approaches to particle track reconstruction. The most promising of these solutions, graph neural networks (GNN), process the event as a graph that connects track measurements (detector hits corresponding to nodes) with candidate line segments between the hits (corresponding to edges). Detector information can be associated with nodes and edges, enabling a GNN to propagate the embedded parameters around the graph and predict node-, edge- and graph-level observables.

        Previously, message-passing GNNs have shown success in predicting doublet likelihood, and here we report updates on the state-of-the-art architectures for this task. In addition, the Exa.TrkX project has investigated innovations in both graph construction and embedded representations, in an effort to achieve fully learned end-to-end track finding.

        Hence, we present a suite of extensions to the original model, with encouraging results for graph construction, classification, and track parameter regression. We explore increased performance from trainable graph construction and the inclusion of detector-level data. These feed into a high-accuracy N-plet classifier, a track parameter regression GNN, or can be used as an end-to-end track classifier by clustering in an embedded space. A set of post-processing methods improves performance with knowledge of the detector physics. Finally, we present a platform for efficient exploration of the plethora of GNN architectures, many of which were applied to this problem.
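        A compressed sketch of the edge-classifying GNN idea (hits as nodes, candidate doublets as edges, a few message-passing iterations, then an edge score), written in plain PyTorch with hypothetical feature sizes rather than the Exa.TrkX code:

        import torch
        import torch.nn as nn

        class EdgeClassifierGNN(nn.Module):
            def __init__(self, node_dim=3, hidden=32, n_iters=3):
                super().__init__()
                self.encode = nn.Linear(node_dim, hidden)
                self.edge_net = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                              nn.Linear(hidden, 1), nn.Sigmoid())
                self.node_net = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
                self.n_iters = n_iters

            def forward(self, x, edge_index):
                # x: (n_hits, node_dim) hit coordinates; edge_index: (2, n_edges) doublets
                h = torch.relu(self.encode(x))
                src, dst = edge_index
                for _ in range(self.n_iters):
                    w = self.edge_net(torch.cat([h[src], h[dst]], dim=1))     # edge weights
                    msg = torch.zeros_like(h).index_add_(0, dst, w * h[src])  # weighted messages
                    h = self.node_net(torch.cat([h, msg], dim=1))             # node update
                return self.edge_net(torch.cat([h[src], h[dst]], dim=1)).squeeze(-1)

        # toy usage: 100 hits and 300 candidate doublet edges; keep edges above a score cut
        scores = EdgeClassifierGNN()(torch.randn(100, 3), torch.randint(0, 100, (2, 300)))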

        Speaker: Daniel Murnane (Lawrence Berkeley National Laboratory)
      • 20:00
        Kinematic Kalman Filter Track Fit 25m

        We will present the implementation of a kinematic Kalman filter-based track fit. The kinematic fit uses time as the free parametric variable to describe the charged particle's path through space, and as an explicit fit parameter (t0). The fit coherently integrates measurements from sensors where position is encoded as time (i.e. drift cells) with pure time sensors and geometric (solid-state) position sensors, including time-domain correlations. The kinematic formulation implicitly defines the particle mass and propagation direction, and provides a natural relativistic interface to both particle momentum and position. We will show results from testing the fit using a toy MC, and compare its performance to a conventional geometric Kalman filter fit when both are applied to simulations of the straw tracker design of the Mu2e experiment.
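        As a reminder of the basic predict/update machinery such a fit builds on (not the Mu2e kinematic parameterisation itself), a single Kalman step in one dimension, with time as the independent variable, can be sketched as:

        import numpy as np

        def kalman_step(x, P, z, dt, q=1e-3, r=0.25):
            """One predict/update step for a constant-velocity state x = (position,
            velocity), propagated in time by dt and updated with a position
            measurement z of variance r."""
            F = np.array([[1.0, dt], [0.0, 1.0]])      # propagate state to the hit time
            H = np.array([[1.0, 0.0]])                 # only position is measured
            x = F @ x
            P = F @ P @ F.T + q * np.eye(2)            # add process noise
            residual = z - (H @ x)[0]
            S = (H @ P @ H.T)[0, 0] + r                # residual variance
            K = (P @ H.T)[:, 0] / S                    # Kalman gain
            x = x + K * residual
            P = P - np.outer(K, H @ P)
            return x, P

        # toy usage: position hits at unit time spacing fitted to a line in time
        x, P = np.array([0.0, 1.0]), np.eye(2)
        for z in [1.1, 1.9, 3.2, 3.9]:
            x, P = kalman_step(x, P, z, dt=1.0)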

        Speaker: David Brown (Lawrence Berkeley Lab)
      • 20:30
        Learned Representations from Lower-Order Interactions for Efficient Clustering 15m

        Efficient agglomerative clustering is reliant on the ability to exploit useful lower-order information contained within data, yet many real-world datasets do not consist of features which are naturally amenable to metric functions as required by these algorithms. In this work, we present a framework for learning representations which contain such metric structure, allowing for efficient clustering and neighborhood queries of data points. We demonstrate how this framework fits in with both traditional clustering pipelines, and more advanced approaches such as graph neural networks. Finally, we present numerical results on the TrackML particle tracking challenge dataset, where our framework shows favorable results in both physics-based tracking methods, and new end-to-end deep learning approaches with graph neural networks developed in the context of the Exa.TrkX project.

        Speaker: Mr Nicholas Choma (Lawrence Berkeley National Laboratory)
      • 20:50
        Performance of the Z Trigger under Luminosity Conditions: First Experience 10m

        Since April 2019, the completed Belle II detector at SuperKEKB has been taking data. The high beam currents and the nanobeam scheme of SuperKEKB demand an efficient first-level trigger system to reject the dominant background from outside the interaction region before events are filtered further by accurate, but more time-consuming, software algorithms. The Neural z vertex trigger, implemented in hardware, uses a three-layer MLP neural network to estimate the z vertices of the tracks in an event within the latency of the first trigger level. It is the first of its kind and allows relaxing the track trigger conditions for interesting physics events with low track multiplicity. After the concept was successfully tested in software simulations, it was deployed in the trigger system of the detector before the winter shutdown in 2019. We will give an insight into extensive efficiency studies of our trigger subsystem and our efforts to optimize the performance, resulting in an increased trigger efficiency for low-multiplicity events.

        Speaker: Felix Meggendorfer (Max-Planck-Institute for Physics)
      • 21:05
        Requirements, Status and plans for track reconstruction of the sPHENIX experiment 25m

        sPHENIX is a new experiment being constructed at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. The primary physics goals of sPHENIX are to measure jets, their substructure, and the upsilon resonances in both p+p and Au+Au collisions. To realize these goals, a tracking system composed of a time projection chamber and several silicon detectors will be used to identify tracks corresponding to jets and upsilon decays. However, the sPHENIX experiment will collect approximately 200 PB of data utilizing a finite size computing center, and thus, due to the large occupancy of heavy-ion collisions, track reconstruction in a timely manner remains a challenge. This talk will discuss the sPHENIX experiment, its track reconstruction, and the need for the implementation of faster track fitting algorithms, such as that provided by the ACTS package, into the sPHENIX software stack.

        Speaker: Joseph Osborn (Oak Ridge National Laboratory)
      • 21:35
        Rescuing VBF Higgs Invisible Events with Novel Vertex Selection 10m

        ATLAS event reconstruction requires the identification and selection of a hard-scatter (HS) primary vertex among the multiple interaction vertices reconstructed in each event. In Run 3, the HS vertex candidate is selected based on the largest sum of squared transverse momenta over the associated tracks. While this method works very well in events containing hard physics objects within the tracker acceptance, it suffers from low efficiency in final states containing forward and/or low-$p_T$ jets, such as VBF Higgs to invisible (VBF H$\rightarrow$ZZ(*)$\rightarrow$4$\nu$), where the correct primary vertex is chosen only 80% of the time.

        In order to overcome this challenge and improve the signal acceptance for VBF Higgs invisible events, we introduce two novel ideas. First, we propose a new vertex selection algorithm that combines tracking with calorimeter jet information to overcome the challenge of events with low-$p_T$ jets. This new algorithm improves the vertex selection efficiency by 9%. Second, to address the case of events containing forward jets outside the tracking acceptance, we introduce the concept of vertex confidence. We classify events as high/low confidence based on the amount of track $p_T$ associated to hard jets. Events in which the majority of the total jet $p_T$ is outside the $\eta$ acceptance of the tracker are classified as low vertex confidence events and no vertex is chosen for this category. For VBF Higgs invisible events, we find that 80% of the events are classified as high confidence, and the new algorithm selects the correct primary vertex 97% of the time for these events. The remaining 20% of the events are classified as low confidence VBF events, for which no attempt is made to assign a HS vertex even though there is still a VBF jet signature.

        In LHC Run-4, where the upgraded ATLAS Inner Tracker (ITk) provides an extended acceptance of $|\eta|<4$, the new vertex selection algorithm improves upon the current selection technique by 9% by addressing the challenges of vertex selection under HL-LHC conditions.
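        For reference, the baseline selection mentioned above (choose the vertex with the largest sum of squared track transverse momenta) amounts to a one-line score per vertex; the new algorithm augments this score with calorimeter jet information. A minimal illustration:

        def select_hs_vertex(vertices):
            """Baseline hard-scatter vertex choice: return the index of the vertex whose
            associated tracks have the largest sum of pT^2 (pT in GeV).
            `vertices` is a list of per-vertex lists of track pT values."""
            return max(range(len(vertices)),
                       key=lambda i: sum(pt ** 2 for pt in vertices[i]))

        # toy usage: three reconstructed vertices with their associated track pTs
        print(select_hs_vertex([[0.7, 1.2, 0.9], [45.0, 30.2, 2.1], [5.0, 0.6]]))  # -> 1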

        Speaker: Murtaza Safdari (SLAC National Accelerator Laboratory (US))
    • 13:00 - 16:45
      Recording sessions: Recording session 5
      Convener: Markus Elsing (CERN)
      • 13:00
        Overview of online and offline reconstruction in ALICE for LHC Run 3 25m

        In LHC Run 3, ALICE will increase the data taking rate significantly to 50 kHz continuous readout of minimum bias Pb-Pb collisions.
        The reconstruction strategy of the online-offline computing upgrade foresees a first, synchronous online reconstruction stage during data taking that enables detector calibration, and a subsequent, calibrated asynchronous reconstruction stage.
        The main challenges include processing and compression of 100 times more events per second than in Run 2, identification of removable TPC tracks and clusters not used for physics, tracking of TPC data in continuous readout, the TPC space charge distortion calibrations, and in general running more reconstruction steps online compared to Run 2.
        ALICE will leverage GPUs to facilitate the synchronous processing with the available resources.
        For the best GPU resource utilization, we plan to offload also several steps of the asynchronous reconstruction to the GPU.
        In order to be vendor independent, we support CUDA, OpenCL, and HIP, and we maintain a common C++ source code that also runs on the CPU.
        We will give an overview of the global reconstruction and tracking strategy, a comparison of the performance on CPU and different GPU models, the scaling of the reconstruction with the input data size, as well as estimates of the required resources in terms of memory and processing power.

        Speaker: David Rohr (CERN)
      • 13:30
        40 MHz Scouting with Deep Learning in CMS 15m

        A 40 MHz scouting system at CMS would provide fast and virtually unlimited statistics for detector diagnostics, alternative luminosity measurements and, in some cases, calibrations, and it has the potential to enable the study of otherwise inaccessible signatures, either too common to fit in the L1 accept budget, or with requirements which are orthogonal to “mainstream” physics, such as long-lived particles.

        Deep learning is a class of machine learning algorithms that uses multiple layers to progressively extract higher-level features from the raw inputs. A series of studies on different aspects of LHC data processing have demonstrated the potential of deep learning for CERN applications. The usage of deep learning aims at improving physics performance and reducing execution time.

        This talk will present a deep learning approach to muon scouting in the Level-1 Trigger of the CMS detector. The idea is to utilize multilayered perceptrons to 're-fit' the Level-1 muon tracks, using fully reconstructed offline tracking parameters as the ground truth for neural network training. The network produces corrected helix parameters (transverse momentum, eta and phi), with a precision that is greatly improved over the standard Level 1 reconstruction. The network is executed on an FPGA-based PCIe board produced by Micron Technology, the SB-852. It is implemented using the Micron Deep Learning Accelerator inference engine. The methodology for developing deep learning models will be presented, alongside the process of compiling the models for fast inference hardware. The metrics for evaluating performance and the achieved results will be discussed.

        Speaker: Dejan Golubovic (University of Belgrade (RS))
      • 13:50
        A novel reconstruction framework for an imaging calorimeter for HL-LHC 15m

        To sustain the harsher conditions of the high-luminosity LHC in 2026, the CMS experiment has designed a novel endcap calorimeter that uses silicon sensors to achieve radiation tolerance, with the additional benefit of a very high readout granularity. In regions characterised by lower radiation levels, small scintillator tiles with individual SiPM readout are employed. A novel reconstruction approach is being developed to fully exploit the granularity and other significant features of the detector, such as precision timing, with a view to deployment in the high-pileup environment of the HL-LHC. An iterative reconstruction framework (TICL) has been put in place and is being actively developed. The inputs to the framework are clusters of energy deposited in individual calorimeter layers, delivered by a density-based algorithm which has recently been developed and tuned. In view of the expected pressure on computing capacity in the HL-LHC era, the algorithms and their data structures are being designed with GPUs in mind. Preliminary results show that a significant speed-up can be obtained by running the clustering algorithm on GPUs. Moreover, machine learning techniques are being investigated and integrated into the reconstruction framework. This talk will describe the approaches being considered and show first results.

        Speakers: Loukas Gouskos (CERN), Marco Rovere (CERN), Arabella Martelli (Imperial College (GB)), Dr Leonardo Cristella (CERN), Felice Pantaleo (CERN)
      • 14:10
        Global least squares alignment with Kalman Filter fitted tracks 25m

        The Kalman Filter approach to fitting charged particle trajectories is widespread in modern complex tracking systems. At the same time, the global fit of the detector geometry using Newton-Raphson fitted tracks remains the baseline method to achieve efficient and reliable track-based alignment, free from the weak-mode biases that affect physics measurements. We will briefly review the global least squares formalism for track-based alignment, show how Kalman Filter fitted tracks can be used equivalently for the global fit, and discuss potential computational benefits and the use of additional constraints.
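        As a reminder of the formalism referred to above (a sketch in standard notation, not specific to any experiment): with track residuals $r_j$ linearised in the local track parameters $\alpha_j$ of each track $j$ and the common alignment parameters $a$, one minimises $\chi^2 = \sum_j r_j^T V_j^{-1} r_j$ with $r_j \simeq r_j^0 + (\partial r_j/\partial\alpha_j)\,\delta\alpha_j + (\partial r_j/\partial a)\,\delta a$. The resulting normal equations have a bordered block-diagonal structure, so the local parameters can be eliminated track by track, leaving a reduced system for $\delta a$ alone.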

        Speaker: Pawel Bruckman De Renstrom (Polish Academy of Sciences (PL))
      • 14:40
        A Machine Learning based 3D Track Trigger for Belle II 25m

        The Belle II experiment at the B-Factory SuperKEKB in Tsukuba, Japan performs precision tests of the standard model and searches for new physics. Due to the high luminosity and beam currents, Belle II faces severe rates of background tracks displaced from the collision region, which have to be rejected within the tight timing constraints of the first level trigger. To this end, a novel neural network z vertex trigger has been implemented on parallel FPGA hardware and integrated into the track trigger pipeline. Presently the results of a 2D Hough finder, which uses only the axial wire hits from the central drift chamber, are combined with stereo wire hits and drift-times to form the input for the robust neural network 3D track reconstruction. In this contribution a machine learning based 3D track finder is proposed, which improves the track finding efficiency by using the additional stereo wire hits in the preprocessing step for the neural network input. An experimentally optimized configuration of its parameters is presented and the benefits and impact on the neural network performance are evaluated on early Belle II data and simulated low multiplicity events.

        Speaker: Sebastian Skambraks (Max-Planck-Institut für Physik)
      • 15:10
        An optical network for accelerating real-time tracking with FPGAs 15m

        The “Artificial Retina” is a highly parallelized tracking architecture that promises high throughput, low latency, and low power consumption when implemented in state-of-the-art FPGA devices.

        Working on events before event building, the “Artificial Retina” needs a dedicated distribution network of large bandwidth and low latency, delivering to every FPGA the hits required to perform track reconstruction. This is a technologically challenging part of the system that has not yet been tested in a full-scale application.

        The upgraded LHCb DAQ is an ideal environment for a first test of this methodology. In Run 3, the Level-0 hardware trigger will be removed; LHCb will therefore read out events at the full LHC collision rate (30 MHz), build them, and deliver them to the HLT system. The present work is part of a wider Real Time Analysis project, which was formed to organize data processing within this novel environment, and represents R&D towards future fast technologies for real-time tracking.

        We used as a benchmark the reconstruction of tracks within the new VELO-pixel detector, which is composed of a limited number of readout units. We present the design of the fast optical network, carrying a total bandwidth of ~15 Tb/s, devised for the purpose of performing VELO tracking in real time, and the results of our tests of an actual prototype assembled and integrated into a vertical slice of the upgraded LHCb DAQ.

        Speaker: Federico Lazzari (Universita & INFN Pisa (IT))
      • 15:30
        ACTS Vertexing and Deep Learning Vertex Finding 15m

        The reconstruction of particle trajectories and their associated vertices is an essential task in the event reconstruction of most high energy physics experiments.
        In order to maintain or even improve upon the current performance of tracking and vertexing algorithms under the upcoming challenges of increasing energies and ever-increasing luminosities, major software upgrades are required.
        Based on the well-tested ATLAS tracking and vertexing software, ACTS (A Common Tracking Software) aims to provide a modern, experiment-independent set of track- and vertex reconstruction software, specifically designed for parallel execution.
        Exploiting modern software concepts, thread-safe implementations of iterative and multi-adaptive primary vertex finding algorithms, as well as a full Billoir vertex fitter, Z-Scan- and Gaussian track density seed finder, are available in ACTS and being deployed in the multi-threaded version of the ATLAS software framework AthenaMT.
        In addition to these computationally optimized reimplementations of classical primary vertexing algorithms, all of which have been validated against the original ATLAS implementations, ACTS provides a solid code base for evaluating new approaches to primary vertex finding, such as applications of sophisticated deep learning methods.
        Associating tracks to the correct vertex candidate is a crucial step in vertexing and will become even more important in the high-pileup environments expected for HL-LHC or FCC-hh in order not to merge close-by vertices.
        Learning a track representation in an embedding space in such a way that tracks emerging from a common vertex are close together while tracks from neighboring vertices are further separated from one another allows for the determination of a similarity score between a pair of tracks.
        Constructing undirected, edge-weighted graphs from these results allows the subsequent usage of classical graph algorithms or graph neural networks for clustering tracks to vertex candidates.
        The current status of the ACTS vertexing as well as new results on deep learning approaches to vertex finding will be presented in this talk.
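
        As a toy illustration of the embedding-based approach outlined above (not the ACTS implementation), the sketch below scores pairs of tracks by the distance between their embeddings, keeps edges above a similarity threshold, and clusters the resulting graph into vertex candidates via connected components. Here the "embedding" is simply each track's longitudinal impact parameter so that the clustering structure is easy to follow; in practice the embedding would be learned.

        ```python
        import numpy as np

        def similarity(emb_a, emb_b, scale=1.0):
            """Similarity score for a pair of tracks from their embedding distance."""
            return np.exp(-np.abs(emb_a - emb_b) / scale)

        def cluster_tracks(embeddings, threshold=0.5):
            """Build an undirected, edge-weighted graph from pairwise similarities and
            cluster tracks into vertex candidates via connected components, a toy
            stand-in for the graph algorithms / GNNs mentioned in the abstract."""
            n = len(embeddings)
            parent = list(range(n))
            def find(a):
                while parent[a] != a:
                    parent[a] = parent[parent[a]]
                    a = parent[a]
                return a
            for i in range(n):
                for j in range(i + 1, n):
                    if similarity(embeddings[i], embeddings[j]) > threshold:  # keep strong edges
                        parent[find(i)] = find(j)                             # merge the clusters
            return [find(i) for i in range(n)]

        # Toy 'embedding': longitudinal impact parameters of tracks from two vertices.
        embeddings = np.array([0.01, -0.02, 0.03, 5.1, 5.05, 4.98])
        print(cluster_tracks(embeddings))   # two groups of three tracks each
        ```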

        Speaker: Bastian Schlag (CERN / JGU Mainz)
      • 15:50
        Fast pattern recognition for ATLAS track triggers in HL-LHC 25m

        A fast hardware-based track trigger is being developed in ATLAS for the High Luminosity upgrade of the Large Hadron Collider (HL-LHC). The goal is to provide the high-level trigger with full-scan tracking at 100 kHz and regional tracking at 1 MHz, in the high pile-up conditions of the HL-LHC. A method for fast pattern recognition using the Hough transform is investigated. In this method, detector hits are mapped onto a 2D parameter space with one parameter related to the transverse momentum and one to the initial track direction. The performance of the Hough transform is studied at different pile-up values and compared, using full event simulation of events with an average pile-up of 200, with a method based on matching detector hits to pattern banks of simulated tracks stored in custom-made Associative Memory ASICs. As a possible way to reduce the number of hit clusters that need to be considered by this system, and taking advantage of the new ATLAS Inner Tracker, the use of track stub finding and extrapolation is investigated. A preliminary discussion of the resulting hit reduction and associated speedup, and any associated performance loss, will be presented.
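
        For readers unfamiliar with the technique, a minimal Hough-transform sketch is given below under toy assumptions: hits parameterised by (r, phi), a linearised relation phi0 = phi - A*(q/pT)*r with an illustrative constant A, and a simple 2D accumulator whose bin counts peak at the parameters of a real track. This is not the ATLAS implementation.

        ```python
        import numpy as np

        A = 3e-4  # toy constant relating q/pT [1/GeV], radius [mm] and azimuth (assumption)

        def hough_accumulate(hits, qpt_bins, phi0_bins):
            """Fill a 2D accumulator in (q/pT, phi0): each hit (r, phi) votes for all
            track hypotheses consistent with phi0 = phi - A * (q/pT) * r."""
            acc = np.zeros((len(qpt_bins), len(phi0_bins)))
            for r, phi in hits:
                for i, qpt in enumerate(qpt_bins):
                    phi0 = phi - A * qpt * r
                    j = np.argmin(np.abs(phi0_bins - phi0))   # nearest phi0 bin
                    acc[i, j] += 1
            return acc

        # Toy hits from a single track with q/pT = 1.0 GeV^-1 and phi0 = 0.3 rad.
        radii = np.linspace(100, 1000, 10)          # layer radii in mm (illustrative)
        hits = [(r, 0.3 + A * 1.0 * r) for r in radii]

        qpt_bins = np.linspace(-2, 2, 81)
        phi0_bins = np.linspace(0, 1, 201)
        acc = hough_accumulate(hits, qpt_bins, phi0_bins)
        i, j = np.unravel_index(np.argmax(acc), acc.shape)
        print(f"peak at q/pT={qpt_bins[i]:.2f} GeV^-1, phi0={phi0_bins[j]:.3f} rad")
        ```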

        Speaker: William Kalderon (Brookhaven National Laboratory (US))
    • 19:00 22:20
      Recording sessions: Recording session 6
      Convener: David Lange (Princeton University (US))
      • 19:00
        Graph Neural Networks for Reconstruction in Liquid Argon Time Projection Chambers 15m

        This talk will discuss work carried out by the Exa.TrkX collaboration to explore the application of Graph Neural Network (GNN)-based techniques for reconstructing particle interactions in wire-based Liquid Argon Time Projection Chambers (LArTPCs). LArTPC detector technology is utilised by many neutrino experiments, including the future flagship US neutrino experiment DUNE, and techniques for fully automated event reconstruction are still in active development. With reference to previous applications of such GNN approaches in HEP, this talk will discuss the unique challenges posed when reconstructing in LArTPC detectors and how those challenges might be overcome. It will describe the application of different GNN-based techniques to reconstruction tasks such as the formation of 3D spacepoints from 2D hits, the determination of spacepoint directionality, and the clustering of spacepoints.
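
        As a concrete (and much simplified) example of one building block mentioned above, the sketch below constructs a k-nearest-neighbour graph over 3D spacepoints, the kind of input graph on which a GNN could then score edges in order to cluster spacepoints into particles. The neighbour count and the toy spacepoints are assumptions for illustration only.

        ```python
        import numpy as np

        def knn_edges(points, k=3):
            """Build the edges of a k-nearest-neighbour graph over 3D spacepoints.
            Returns a list of (i, j) index pairs; a GNN would then score each edge
            (e.g. 'same particle' vs 'different particle') to cluster the points."""
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            np.fill_diagonal(d, np.inf)               # no self-edges
            edges = []
            for i in range(len(points)):
                for j in np.argsort(d[i])[:k]:        # k closest neighbours of point i
                    edges.append((i, int(j)))
            return edges

        # Toy spacepoints: two short, well-separated segments (stand-ins for particles).
        pts = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0],
                        [10, 5, 0], [11, 5, 0], [12, 5, 0]], dtype=float)
        print(knn_edges(pts, k=2))
        ```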

        Speaker: Jeremy Edmund Hewes (University of Cincinnati (US))
      • 19:20
        Data Reconstruction Using Deep Neural Networks for Particle Imaging Neutrino Detectors 25m

        A Liquid Argon Time Projection Chamber (LArTPC) is a type of particle imaging detector that can record an image of charged particle trajectories with high (~mm/pixel) spatial resolution and calorimetric information. In the intensity frontier of high energy physics, the LArTPC is a detector technology of choice for a number of experiments, including the Short Baseline Neutrino program and the Deep Underground Neutrino Experiment, for high precision neutrino oscillation measurements to answer fundamental questions of the universe. However, the analysis of detailed particle images can be difficult, and a high quality data reconstruction chain for a large scale (over 100 tonne) LArTPC detector remains challenging. The research team at SLAC leads the R&D of a Machine Learning (ML) based full data reconstruction chain for LArTPC detectors. Our chain is a multi-task network cascade that performs pixel feature extraction (semantic segmentation using a Sparse U-Net with ResNet modules), particle start/end point prediction (Point Proposal Network), pixel clustering for particle instance identification (custom convolution and instance attention layers), and particle flow analysis using Graph Neural Networks (GNNs). The result of the chain is fully reconstructed event information that can be used by physicists to infer neutrino oscillation physics. This R&D takes a significant step forward from the current state of the art in experimental neutrino physics. In this talk, we present our reconstruction chain development using an open dataset. Our software is made publicly available to improve the reproducibility and transparency of our research work.

        Speaker: François Drielsma (Universite de Geneve (CH))
      • 19:50
        Graph neural networks for FPGAs 25m

        Graph neural networks have been shown to achieve excellent performance for several crucial tasks in particle physics, such as charged particle tracking, jet tagging, and clustering. An important domain for the application of these networks is the Level-1, FPGA-based trigger, which has strict latency and resource constraints. We discuss how to design distance-weighted graph networks that can be executed with less than 1 $\mu$s latency on an FPGA. To do so, we consider representative tasks associated with particle reconstruction and identification in a next-generation calorimeter operating at a particle collider. We use graph architectures developed for these purposes and simplified in order to match the computing constraints of real-time event processing at the CERN Large Hadron Collider. The trained models are compressed using pruning and quantization. Using the hls4ml library, we convert the compressed models into firmware to be implemented on an FPGA. We show results both in terms of model accuracy and computing performance.
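
        In very rough terms, the compress-then-convert workflow can be sketched as follows, assuming the standard Keras interfaces of TensorFlow Model Optimization (for pruning) and hls4ml (for conversion). A dense model stands in for the distance-weighted graph network studied in the talk, quantization is omitted, and the FPGA part number is a placeholder.

        ```python
        import tensorflow as tf
        import tensorflow_model_optimization as tfmot
        import hls4ml

        # Dense stand-in model (the talk uses distance-weighted graph layers instead).
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(32, activation="relu", input_shape=(16,)),
            tf.keras.layers.Dense(8, activation="softmax"),
        ])

        # Compression: magnitude-based pruning to a fixed target sparsity.
        pruned = tfmot.sparsity.keras.prune_low_magnitude(
            model,
            pruning_schedule=tfmot.sparsity.keras.ConstantSparsity(0.75, begin_step=0))
        # ... compile and (re)train `pruned` with the pruning callback here ...
        pruned = tfmot.sparsity.keras.strip_pruning(pruned)   # remove pruning wrappers

        # Conversion: generate an HLS project with hls4ml.
        config = hls4ml.utils.config_from_keras_model(pruned, granularity="model")
        hls_model = hls4ml.converters.convert_from_keras_model(
            pruned, hls_config=config, output_dir="hls_prj",
            part="xcvu9p-flga2104-2-e")   # placeholder FPGA part
        hls_model.compile()               # C simulation / bit-exact emulation
        # hls_model.build()               # full HLS synthesis (long-running)
        ```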

        Speaker: Yutaro Iiyama (CERN)
      • 20:20
        Particle Clustering and Flow Reconstruction for Particle Imaging Neutrino Detectors Using Graph Neural Networks 10m

        Machine learning (ML) techniques, in particular deep neural networks (DNNs) developed in the field of Computer Vision, have shown promising results in addressing the challenge of analyzing data from large, high resolution particle imaging detectors such as Liquid Argon Time Projection Chambers (LArTPCs), employed in accelerator-based neutrino experiments including the Short Baseline Neutrino (SBN) program and the Deep Underground Neutrino Experiment (DUNE). Convolutional neural networks (CNNs) have been the de-facto choice for image feature extraction tasks, and they are particularly powerful for identifying locally dense features. On the other hand, Graph Neural Networks (GNNs) have been studied actively for analyzing correlation features between distant objects. Example applications for LArTPC detectors include signal correlations between two independent detectors (optical detectors and TPCs), reconstruction of particle hierarchy (e.g. a primary particle vs. secondary radiation), and clustering of particles per primary interaction in a busy "neutrino pile-up" environment, expected at the DUNE near detector under high neutrino beam intensity. In this talk, we present our work on utilizing GNNs in the ML-based full data reconstruction chain for LArTPC detectors.

        Speaker: Dr François Drielsma (Universite de Geneve (CH))
      • 20:35
        Applying Submanifold Sparse CNN in MicroBooNE 25m

        The MicroBooNE experiment employs a Liquid Argon Time Projection Chamber (LArTPC) detector to measure sub-GeV neutrino interactions from the muon neutrino beam produced by the Booster Neutrino Beamline at Fermilab. The detector consists of roughly 90 tonnes of liquid argon in which the 3D trajectories of charged particles are recorded by combining timing with information from 3 wire planes, each producing a 2-dimensional projected image. Neutrino oscillation measurements, such as those performed in MicroBooNE, rely on the capability to distinguish between different flavors of neutrino interactions leading to different types of final state particles. Deep Convolutional Neural Networks (CNNs) have shown great success for these tasks; however, due to the large sparsity of the data (fewer than 5% of pixels are non-zero), a naive approach of applying CNNs with a standard linear algebra package becomes highly inefficient in both computation time and memory resources. Recently, Submanifold Sparse Convolutional Networks (SSCNs) have been proposed to address this challenge and have been successfully applied to analyze large LArTPC images in MicroBooNE with orders of magnitude improvement in computing resource usage. In this talk, I will present the performance of SSCNs on the task of Semantic Segmentation applied in the analysis of simulated MicroBooNE data.
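
        To illustrate what the submanifold restriction means in principle, the plain-numpy sketch below applies a 3x3 convolution only at the non-zero (active) pixels of a 2D image and writes output only at those same sites, so the active set does not grow from layer to layer. Real applications use dedicated sparse-convolution libraries rather than this toy loop.

        ```python
        import numpy as np

        def submanifold_conv2d(image, kernel):
            """Toy submanifold sparse 3x3 convolution on a dense 2D array:
            the output is computed only at the active (non-zero) input sites,
            so the set of active sites never grows layer after layer."""
            out = np.zeros_like(image, dtype=float)
            pad = np.pad(image, 1)
            active = np.argwhere(image != 0)                  # sparse set of hit pixels
            for r, c in active:
                patch = pad[r:r + 3, c:c + 3]                 # 3x3 neighbourhood
                out[r, c] = float(np.sum(patch * kernel))     # output only at active sites
            return out

        # Sparse toy 'wire-plane image': a short diagonal track in a mostly empty array.
        img = np.zeros((8, 8))
        for i in range(5):
            img[i, i] = 1.0

        kernel = np.ones((3, 3)) / 9.0
        print(submanifold_conv2d(img, kernel))
        ```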

        Speaker: Dr Ran Itay (SLAC)
      • 21:25
        Parallelizing the unpacking and clustering of detector data for reconstruction of charged particle tracks on multi-core CPUs and many-core GPUs 10m

        Charged particle tracking is the most computationally intensive step of event reconstruction at the LHC. Due to the computational cost, the current CMS online High Level Trigger only performs track reconstruction in detector regions of interest identified by the hardware trigger or other detector elements. We have made significant progress towards developing a parallelized and vectorized implementation of the combinatoric Kalman filter algorithm for track building that would allow efficient global reconstruction of the entire event within the projected online CPU budget. Global reconstruction necessarily entails the unpacking and clustering of the hit information from all the silicon strip tracker modules; however, currently only modules selected by the regional reconstruction are processed. Therefore, we have recently begun to investigate improving the efficiency of the unpacking and clustering steps.

        In this talk, we report recent progress in the integration of the Kalman filter track builder mkFit with the CMS data processing framework, and improvements in its track building physics performance. We present results from parallelizing the unpacking and clustering steps of the raw data from the silicon strip modules so that all the required hit information for global reconstruction can be produced efficiently. We include performance evaluations of the new unpacking and clustering algorithms on Intel Xeon and NVIDIA GPU architectures, along with updated track building performance results on Intel Xeon.
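
        As a much-simplified picture of the clustering step, the sketch below groups adjacent fired strips of a single silicon strip module into clusters and computes a charge-weighted position for each. Thresholds, the module layout, and the parallelisation strategy (for example one module per thread or GPU block) are illustrative assumptions, not the CMS implementation.

        ```python
        def cluster_strips(fired):
            """Group adjacent fired strips of one silicon strip module into clusters.

            `fired` is a list of (strip_index, adc) pairs; consecutive strip indices
            belong to the same cluster. Returns (charge-weighted position, total
            charge) per cluster. In a parallel implementation, each module (or group
            of modules) would be processed by an independent thread or GPU block.
            """
            clusters, current = [], []
            for strip, adc in sorted(fired):
                if current and strip != current[-1][0] + 1:   # gap: close the cluster
                    clusters.append(current)
                    current = []
                current.append((strip, adc))
            if current:
                clusters.append(current)

            out = []
            for cl in clusters:
                charge = sum(adc for _, adc in cl)
                pos = sum(s * adc for s, adc in cl) / charge  # charge-weighted centroid
                out.append((pos, charge))
            return out

        # One toy module: two clusters of fired strips.
        print(cluster_strips([(10, 30), (11, 60), (12, 20), (40, 80), (41, 75)]))
        ```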

        Speaker: Bei Wang (Princeton University (US))
      • 21:40
        Tracking performance with ACTS 25m

        The reconstruction of charged particles’ trajectories is one of the most complex and CPU-consuming parts of event processing in high energy experiments, in particular at future hadron colliders such as the High-Luminosity Large Hadron Collider (HL-LHC). Highly performant tracking software, exploiting both innovative tracking algorithms and modern computing architectures with many cores and accelerators, is necessary to maintain and improve the tracking performance.

        Based on the tracking experience at the LHC, the ACTS project encapsulates the current ATLAS software into an experiment-independent and framework-independent software package designed for modern computing architectures. It provides a set of high-level track reconstruction tools which are agnostic to the details of the detection technologies and magnetic field configuration and are tested for strict thread-safety to support multi-threaded event processing. It supports contextual detector conditions, which can include having multiple detector alignments or calibrations in memory with a minimal memory footprint. Tracking infrastructure such as the tracking geometry, Event Data Model, and propagator is well developed and validated in ACTS. Prototypes of tracking algorithms for track fitting, track seeding, and vertex reconstruction are available, with their performance currently under validation.

        In this talk, I will introduce the available tracking features in the ACTS software and focus on the implemented track fitting, which uses a full-resolution Kalman filter, and track finding, which is based on sequential Kalman filtering. The tracking performance will be highlighted with prototype detectors. An early study of using ACTS for the Belle II experiment will be shown as well. I will also discuss ideas for achieving possible speed-ups of these algorithms by implementing them on accelerators.
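
        For orientation, a single predict/update step of a Kalman filter is sketched below for a toy one-dimensional track model (state = position and slope, measurement = position layer by layer). The ACTS fitter propagates full multi-dimensional track states through the magnetic field and material, but the gain/update structure is the same.

        ```python
        import numpy as np

        def kalman_step(x, P, z, F, Q, H, R):
            """One Kalman filter step: predict the state, then update it with a hit.
            x, P : state estimate and its covariance
            z    : new measurement (e.g. a hit position)
            F, Q : state transition and process-noise covariance (material effects)
            H, R : measurement model and measurement-noise covariance
            """
            # Prediction (propagation to the next measurement surface)
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            # Update (combine the prediction with the measurement)
            S = H @ P_pred @ H.T + R                 # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
            x_new = x_pred + K @ (z - H @ x_pred)
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new

        # Toy 1D track: state = (position, slope), measurement = position only.
        F = np.array([[1.0, 1.0], [0.0, 1.0]])       # propagate by one unit step
        Q = 1e-4 * np.eye(2)
        H = np.array([[1.0, 0.0]])
        R = np.array([[0.01]])

        x, P = np.array([0.0, 0.0]), np.eye(2)
        for z in [0.11, 0.19, 0.32, 0.41]:           # measured positions layer by layer
            x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
        print("fitted position, slope:", x)
        ```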

        Speaker: Xiaocong Ai (UC Berkeley)
      • 22:10
        Manifold reconstruction using linear approximations 10m

        We study a method to reconstruct a nonlinear manifold embedded in Euclidean space from point cloud data using only linear approximations. Such an approximation is possible by warping the submanifold, via an embedding, into a higher dimensional Euclidean space. The subsequent reduction in curvature can be justified using techniques from geometry. The immediate use of this formalism is in denoising submanifolds (with bounded and zero-mean noise), and we will use the linear version of the manifold moving least squares method after choosing an appropriate map. We will show preliminary results from three different noisy datasets: reconstruction of noisy spectra of a very high dimensional matrix; track reconstruction and parameter estimation from the tracker hit datasets used for top quark identification; and finally, in order to illustrate the advantage of the linear approximation, an overfitting problem often encountered when a complex model is used to fit the shape of the parton distribution function, using one of the NNPDF3.1 datasets.
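
        To make the "linear approximation" idea tangible, here is a toy denoising step in the spirit of a moving-least-squares projection: each noisy 2D point is projected onto a local line fitted, with Gaussian weights, to its neighbourhood. This is a low-dimensional, purely illustrative stand-in for the manifold moving least squares procedure described above; the bandwidth and the toy data are assumptions.

        ```python
        import numpy as np

        def local_linear_denoise(points, bandwidth=1.0):
            """Project each 2D point onto a locally fitted line (toy MLS-style step).

            For every point, fit a weighted local line via PCA of its neighbourhood
            (Gaussian weights in the ambient distance) and replace the point by its
            projection onto that line. Bounded, zero-mean noise shrinks towards the
            underlying one-dimensional manifold.
            """
            denoised = np.empty_like(points)
            for i, p in enumerate(points):
                w = np.exp(-np.sum((points - p) ** 2, axis=1) / (2 * bandwidth ** 2))
                mu = np.average(points, axis=0, weights=w)          # weighted local mean
                cov = (w[:, None] * (points - mu)).T @ (points - mu) / w.sum()
                _, vecs = np.linalg.eigh(cov)
                direction = vecs[:, -1]                             # dominant local direction
                denoised[i] = mu + np.dot(p - mu, direction) * direction
            return denoised

        # Noisy samples of a gentle parabola embedded in the plane.
        rng = np.random.default_rng(2)
        t = np.linspace(-2, 2, 100)
        curve = np.c_[t, 0.3 * t ** 2]
        noisy = curve + rng.normal(scale=0.05, size=curve.shape)
        print(np.mean(np.linalg.norm(local_linear_denoise(noisy) - curve, axis=1)))
        ```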

        Speaker: Ms Panchali Nag (Duke University)
    • 08:00 08:20
      Detect New Physics with Deep Learning Trigger at the LHC 20m

      The Large Hadron Collider has an enormous potential for discovering physics beyond the Standard Model, given the unprecedented collision energy and the large variety of production mechanisms that proton-proton collisions can probe. Unfortunately, only a small fraction of the produced events can be studied, while the majority of the events are rejected by the online filtering system. One is then forced to decide upfront what to search for, and risks missing new physics that might hide in unexplored "corners" of the search region. We propose a model-independent anomaly detection technique, based on deep autoencoders, to identify new physics events as outliers of the standard event distribution in some latent space. We discuss how this algorithm could be designed, trained, and operated within the tight latency of the first trigger level of a typical general-purpose LHC experiment.
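
      A minimal sketch of the anomaly-detection idea follows, using the reconstruction-error variant with a small dense Keras autoencoder trained only on "standard" events: events reconstructed poorly are flagged as potential new-physics candidates. The feature dimensions, architecture, and threshold are placeholders, and the actual trigger-level design must additionally satisfy the Level-1 latency and resource budget.

      ```python
      import numpy as np
      import tensorflow as tf

      # Dense autoencoder over a fixed-size vector of trigger-level event features.
      n_features, latent_dim = 56, 8               # placeholder dimensions
      autoencoder = tf.keras.Sequential([
          tf.keras.layers.Dense(32, activation="relu", input_shape=(n_features,)),
          tf.keras.layers.Dense(latent_dim, activation="relu"),   # latent space
          tf.keras.layers.Dense(32, activation="relu"),
          tf.keras.layers.Dense(n_features),
      ])
      autoencoder.compile(optimizer="adam", loss="mse")

      # Train only on Standard-Model-like events (toy data here).
      x_sm = np.random.default_rng(3).normal(size=(10000, n_features)).astype("float32")
      autoencoder.fit(x_sm, x_sm, epochs=2, batch_size=256, verbose=0)

      def anomaly_score(x):
          """Per-event reconstruction error: large values are outliers of the SM distribution."""
          return np.mean((x - autoencoder.predict(x, verbose=0)) ** 2, axis=1)

      threshold = np.quantile(anomaly_score(x_sm), 0.999)   # accept-rate working point
      print("trigger threshold on the anomaly score:", threshold)
      ```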

      Speaker: Zhenbin Wu (University of Illinois at Chicago (US))