Connecting The Dots 2022

US/Eastern
Princeton University

Description

CTD 2022 is complete. Thank you to everyone for participating.

CTD 2022 Workshop photo 

(Credit: Rick Soden)

May 13: An updated timetable of talks is now available, as is some logistical information on getting to campus, getting around, and finding the venue. We will continue adding information, but please reach out with questions on topics we haven't covered yet.

Registration deadlines: The in-person registration deadline has passed. Please contact the organizers with questions. Virtual/remote registration is still open.

The Connecting The Dots workshop series brings together experts on track reconstruction and other problems involving pattern recognition in sparsely sampled data. While the main focus will be on High Energy Physics (HEP) detectors, the Connecting The Dots workshop is intended to be inclusive across other scientific disciplines wherever similar problems or solutions arise. 

Princeton University (Princeton, New Jersey, USA) will host the 2022 edition of Connecting The Dots. It is the 7th in the series after Berkeley 2015, Vienna 2016, LAL-Orsay 2017, Seattle 2018, Valencia 2019, and the virtual edition in 2020. The workshop will be followed by a topical mini-workshop on graph neural networks for tracking on June 3.

Registration: CTD2022 will consist of 3 days of plenary sessions, starting late Tuesday morning, May 31, and finishing June 2. The workshop is open to everyone and will use a hybrid format to facilitate remote participation. More information on the scientific topics covered by the workshop is available on the Scientific Program page. The call for abstracts is open, with a deadline of March 19.

Workshop registration is open (the link will take you to our registration page based on Eventbrite). The regular registration fee is 270 USD, or 185 USD for students (graduate or undergraduate). After May 1, the registration fee will increase by 25 USD (for non-students only). This fee covers local support, morning and afternoon coffee breaks, lunches, the welcome reception and the workshop dinner.

We encourage everyone interested to join us in person; however, CTD2022 will be a hybrid event to facilitate remote participation for those who cannot. The fee for remote participants will be 25 USD (free for students) to support the hybrid aspects of the workshop.

   This workshop is partially supported by National Science Foundation grant OAC-1836650 (IRIS-HEP), the Princeton Physics Department and the Princeton Institute for Computational Science and Engineering (PICSciE).

            

Follow us @ #CTD2022

                           

Participants
  • Abdelrahman Elabd
  • Adeel Akram
  • Adriano Di Florio
  • Aheesh Chandrakant Hegde
  • Alejandro Maza Villalpando
  • Alessandro Scarabotto
  • Alexander Leopold
  • Alexis Vallier
  • Alina Lazar
  • Aman Desai
  • Andreas Salzburger
  • Andrii Tykhonov
  • André Günther
  • Anna Alicke
  • Annabel Kropf
  • Apurva Narde
  • Arijit Sengupta
  • Balaji Venkat Sathia Narayanan
  • Balasubramaniam K. M.
  • Bartosz Sobol
  • Benjamin Michael Wynne
  • Benji Gayther
  • Beom Ki Yeo
  • Bernadette Kolbinger
  • Brij Kishor Jashal
  • Carlos Francisco Erice Cid
  • Charline Rougier
  • Christopher Edward Brown
  • Daniel Thomas Murnane
  • Daniele Dal Santo
  • Dantong Yu
  • David Lange
  • David Spataro
  • Dhanush Anil Hangal
  • Donal Joseph Mclaughlin
  • Florian Reiss
  • Gage DeZoort
  • Ganapati Dash
  • Georgiana Mania
  • Giuseppe Cerati
  • Haider Abidi
  • Heather Gray
  • Imene Ouadah
  • Irina Ene
  • Isobel Ojalvo
  • Izaac Sanderswood
  • Jackson Carl Burzynski
  • Jahred Adelman
  • Javier Mauricio Duarte
  • Jeremy Couthures
  • Jessica Leveque
  • Jiri Masik
  • Joachim Zinsser
  • Joe Osborn
  • Jonathan Long
  • José Luis Carrasco Huillca
  • Kaushal Gumpula
  • Konrad Aleksander Kusiak
  • Liv Helen Vage
  • Louis-Guillaume Gagnon
  • Makayla Vessella
  • Markus Elsing
  • Mary Touranakou
  • Matthias Danninger
  • Maurice Garcia-Sciveres
  • Michel De Cian
  • Nilotpal Kakati
  • Nisha Lad
  • Omar Adel Alterkait
  • Pallabi Das
  • Paolo Calafiura
  • Paolo Sabatini
  • Peter Elmer
  • Pierfrancesco Butti
  • Rahmat Rahmat
  • Rajeev Singh
  • Rocky Bala Garg
  • Sabine Elles
  • Salvador Marti I Garcia
  • Savannah Jennifer Thais
  • Sebastian Dittmeier
  • Sebastien Rettie
  • Sevda Esen
  • Shih-Chieh Hsu
  • Shujie Li
  • Sridhar Tripathy
  • Stephanie Kwan
  • Stephanie Majewski
  • Subhendu Das
  • Sylvain Caillou
  • Thomas Boettcher
  • Waleed Esmail
  • Xiangyang Ju
  • Yara Mohamed Shousha
  • Yee Chinn Yap
Local organizers
    • 09:00 10:30
      Plenary
      Convener: Salvador Marti I Garcia (IFIC-Valencia (UV/EG-CSIC))
      • 09:00
        Welcome and Introduction 25m
        Speaker: David Lange (Princeton University (US))
      • 09:30
        Allen in the First Days of Run 3 25m

        The LHCb collaboration recently began commissioning an upgraded detector that will be read out at the full 30 MHz LHC event rate. Events will be reconstructed and selected in real time using a GPU-based software trigger called Allen. In this talk, I'll present the status of Allen and discuss how it is evolving as Run 3 begins. In particular, I'll focus on the process of preparing Allen to confront real data and discuss our experiences operating a GPU-based trigger in the early days of Run 3.

        Speaker: Thomas Boettcher (University of Cincinnati (US))
      • 10:00
        Deep Learning Track Reconstruction for TeV-PeV Cosmic Rays in Space 25m

        Astroparticle physics is experiencing a new era of direct precision measurements in space at the highest energies. The DArk Matter Particle Explorer (DAMPE), launched in 2015, has recently published the first results on cosmic-ray proton and helium spectra up to 100 TeV and 80 TeV kinetic energy, respectively. The successor mission, the High Energy Radiation Detector (HERD), to be launched in the near future, targets the PeV scale.

        Track pattern recognition in DAMPE and HERD is a key factor limiting the measurement accuracy. Due to the vast multiplicity of secondary particles produced in high-energy interactions in the detector, the primary signal is heavily obscured and track reconstruction becomes a needle-in-a-haystack problem. New tracking techniques are critically needed in order to fully uncover the science potential of the DAMPE and HERD instruments.

        In this talk, we present the first findings of the PeVSPACE project, which employs state-of-the-art Deep Learning methods to fundamentally improve the quality of particle tracking for space astroparticle missions at the highest energies. We also give a brief overview of the DAMPE experiment and demonstrate the first application of the developed techniques to the analysis of the DAMPE data.

        Speaker: Andrii Tykhonov (Universite de Geneve (CH))
    • 10:30 10:33
      Poster session
      • 10:30
        Improving the performance of dealing with non-ideal tracks within ATLAS reconstruction 3m

        Within high transverse momentum jet cores, the separation between charged particles is reduced to the order of the sensor granularity in the ATLAS tracking detectors, resulting in overlapping charged-particle measurements in the detector. This can degrade the efficiency of reconstructing charged-particle trajectories. This presentation identifies the issues within the current reconstruction algorithms that cause the reduction in reconstruction efficiency and explores an enhanced selection to recover some of the lost efficiency. The presentation will also discuss machine learning techniques to aid in recovering efficiency and removing bad-quality track candidates.

        Speaker: Donal Joseph Mc Laughlin (UCL)
    • 11:00 12:30
      Plenary PCTS conference room 4th floor (Jadwin Hall, Princeton University)

      Convener: Salvador Marti I Garcia (IFIC-Valencia (UV/EG-CSIC))
      • 11:00
        Track Finding and Neural Network-Based Primary Vertex Reconstruction with FPGAs for the Upgrade of the CMS Level-1 Trigger System 25m

        The CMS experiment will be upgraded to take advantage of the rich and ambitious physics opportunities provided by the High Luminosity LHC. Part of this upgrade will see the first level (Level-1) trigger use charged particle tracks from the full outer silicon tracker as an input for the first time. The reconstruction of these tracks begins with on-detector hit suppression, identifying hits (stubs) from charged particles with transverse momentum ($p_T$) > 2 GeV within the tracker modules themselves. This reduces the hit rate by one order of magnitude with 15,000 stubs being produced per bunch crossing. Dedicated off-detector electronics using high performance FPGAs are used to find track candidates at 40 MHz using a road-search based pattern matching step. These are then passed to a combinatorial Kalman filter that performs the track fit for $\mathcal{O}$(200) tracks. This overall track finding algorithm is described, as is the ongoing work developing the track fitting firmware and a boosted decision tree approach to track quality estimation.

        The tracks are used in a variety of ways in downstream algorithms, in particular primary vertex finding in order to identify the hard scatter in an event and separate the primary interaction from an additional 200 simultaneous interactions. A novel approach to regress the primary vertex position and to reject tracks from additional soft interactions uses a lightweight 1000-parameter end-to-end neural network. This neural network possesses simultaneous knowledge of all stages in the reconstruction chain, which allows for end-to-end optimisation. The improved performance of this network versus a baseline approach in the primary vertex regression and track-to-vertex classification is shown. A quantised and pruned version of the neural network has been deployed on an FPGA to match the stringent timing and computing requirements of the Level-1 Trigger. Finally, integration tests between the track finder and vertexing firmware are shown using prototype hardware.
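
        As a purely illustrative sketch of the kind of lightweight end-to-end network described above, a toy model covering only the vertex-regression part is given below in PyTorch; the input features, layer sizes and pooling choice are assumptions, not the CMS design:

        # Minimal sketch (not the CMS network): regress a primary-vertex z position
        # from per-track features, assuming inputs of shape (n_tracks, 2) = (z0, pT).
        import torch
        import torch.nn as nn

        class TinyVertexNet(nn.Module):
            def __init__(self, hidden=16):
                super().__init__()
                # Per-track encoder, permutation-invariant sum pooling, small head.
                self.encoder = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                             nn.Linear(hidden, hidden), nn.ReLU())
                self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                          nn.Linear(hidden, 1))

            def forward(self, tracks):            # tracks: (n_tracks, 2)
                per_track = self.encoder(tracks)  # (n_tracks, hidden)
                pooled = per_track.sum(dim=0)     # aggregate over all tracks
                return self.head(pooled)          # predicted vertex z, shape (1,)

        net = TinyVertexNet()
        z_pred = net(torch.randn(50, 2))          # 50 toy tracks with (z0, pT)
        print(z_pred.shape)                       # torch.Size([1])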

        Speaker: Christopher Edward Brown (Imperial College (GB))
      • 11:30
        The Tracking at Belle II 25m

        The tracking system of Belle II consists of a silicon vertex detector (VXD) and a cylindrical drift chamber (CDC), both operating in a magnetic field created by the main solenoid of 1.5 T and the final focusing magnets. The experiment has been taking data since 2019 with high-quality and stable tracking performance. We present the tracking-based calibration of the beam characteristics and their interaction region, which are critical for the many high-precision measurements in the physics program. We also discuss possible improvements in track finding for the silicon strip vertex detector, exploiting a Hough transformation instead of the current approach.
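
        For readers less familiar with Hough-based track finding, a schematic 2D straight-line Hough transform over hit positions is sketched below (generic toy code, not the Belle II SVD implementation):

        # Accumulate Hough votes for straight lines rho = x*cos(theta) + y*sin(theta).
        import numpy as np

        def hough_accumulate(hits, n_theta=180, n_rho=100, rho_max=10.0):
            """hits: array of shape (n_hits, 2) with (x, y) positions."""
            thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
            rho_edges = np.linspace(-rho_max, rho_max, n_rho + 1)
            acc = np.zeros((n_theta, n_rho), dtype=int)
            for x, y in hits:
                rho = x * np.cos(thetas) + y * np.sin(thetas)   # one rho per theta
                idx = np.digitize(rho, rho_edges) - 1
                ok = (idx >= 0) & (idx < n_rho)
                acc[np.arange(n_theta)[ok], idx[ok]] += 1       # vote
            return acc, thetas, rho_edges

        # Toy example: 20 hits roughly on the line y = 0.5 * x.
        rng = np.random.default_rng(0)
        xs = rng.uniform(0, 8, 20)
        hits = np.column_stack([xs, 0.5 * xs + rng.normal(0, 0.05, 20)])
        acc, thetas, rho_edges = hough_accumulate(hits)
        i, j = np.unravel_index(acc.argmax(), acc.shape)
        print("most-voted (theta, rho) bin:", thetas[i], rho_edges[j])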

        Speakers: Radek Zlebcik (Charles University), Radek Zlebcik (Deutsches Elektronen-Synchrotron (DE))
      • 12:00
        Learning to Discover workshops 25m

        This talk summarizes the recently concluded Learning to Discover workshop series on Artificial Intelligence and High Energy Physics.

        Speaker: Andreas Salzburger (CERN)
    • 12:30 12:33
      Poster session
      • 12:30
        ExaTrkX as a Service 3m

        Particle tracking plays a pivotal role in almost all physics analyses at the Large Hadron Collider. Yet, it is also one of the most time-consuming parts of the particle reconstruction chain. In recent years, the Exa.TrkX group has developed a promising machine learning-based pipeline that performs the most computationally expensive part of particle tracking, the track finding. As the pipeline obtains competitive physics performance on realistic data, accelerating the pipeline to meet the computational demands becomes an important research direction, which can be categorized as either software-based or hardware-based. Software-based inference acceleration includes model pruning, tensor operation fusion, reduced precision, quantization, etc. Hardware-based acceleration explores the usage of different coprocessors, such as GPUs, TPUs, and FPGAs.

        In this talk, we describe the Exa.TrkX pipeline implementation as a Triton Inference Server for particle tracking. Clients will send track-finding requests to the server and the server will return track candidates to the client after processing. The pipeline contains three discrete deep learning models and two CUDA-based algorithms. Because of the heterogeneity and dependency chain of the pipeline, we will explore different server settings to maximize the throughput of the pipeline, and we will study the scalability of the inference server and time reduction of the client.
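
        To illustrate the client side of such an as-a-service setup, a hedged sketch using the standard Triton HTTP client is given below; the model name and tensor names are hypothetical placeholders, not the actual Exa.TrkX server configuration:

        # Hypothetical Triton client call: send spacepoints, receive per-hit track labels.
        import numpy as np
        import tritonclient.http as httpclient

        client = httpclient.InferenceServerClient(url="localhost:8000")

        spacepoints = np.random.rand(5000, 3).astype(np.float32)      # toy (x, y, z) hits

        inp = httpclient.InferInput("SPACEPOINTS", list(spacepoints.shape), "FP32")
        inp.set_data_from_numpy(spacepoints)
        out = httpclient.InferRequestedOutput("TRACK_LABELS")

        result = client.infer(model_name="exatrkx_pipeline",          # hypothetical name
                              inputs=[inp], outputs=[out])
        labels = result.as_numpy("TRACK_LABELS")                      # one track id per hit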

        Speaker: Xiangyang Ju (Lawrence Berkeley National Lab. (US))
    • 14:00 15:30
      Plenary PCTS Conference room, 4th floor (Jadwin Hall, Princeton University)

      Convener: Markus Elsing (CERN)
      • 14:00
        Implementation of ACTS into LDMX track reconstruction 25m

        The Light Dark Matter eXperiment (LDMX) is a planned electron-beam fixed-target missing-momentum experiment that has unique sensitivity to light Dark Matter in the sub-GeV range.
        The tracker is a low-mass, fast, silicon-based detector divided into two sub-detectors: a tagger tracker upstream of the target used to accurately measure the incoming electron and a recoil tracker downstream optimized to maximize the recoil electron acceptance. Both of the trackers are installed within a dipole providing a 1.5 T field with the recoil tracker immersed within the fringe field.
        LDMX also features a high-granularity Si-W sampling electromagnetic calorimeter (ECAL) for accurate shower discrimination and minimum-ionizing-particle tracking, and a scintillator-based sampling hadronic calorimeter used primarily as a background veto detector. Tracking is of paramount importance for LDMX to reach its physics goals, as it has to efficiently and precisely reconstruct particle trajectories between 50 MeV and 8 GeV in the trackers, with an accurate treatment of material effects and magnetic field non-uniformity.
        Tracking will also be used to reconstruct hadron tracklets in the ECal which are important in rejecting rare photo-nuclear reactions.
        For these reasons, the LDMX experiment has implemented a reconstruction framework based on the A Common Tracking Software (ACTS) suite. This talk aims to report the performance status of ACTS in this tracking environment, including comparisons of different track fitting algorithms provided, as well as results linked to the usage of Machine Learning techniques for seed finding and tracking in dense materials.

        Speaker: Pierfrancesco Butti (SLAC National Accelerator Laboratory (US))
      • 14:30
        Line Segment Tracking in the HL-LHC 25m

        The major challenge posed by the high instantaneous luminosity of the High Luminosity LHC (HL-LHC) motivates efficient and fast reconstruction of charged particle tracks in a high pile-up environment. While there have been efforts to use modern techniques like vectorization to improve the existing classic Kalman Filter based reconstruction algorithms, we take a fundamentally different approach by doing a bottom-up reconstruction of tracks. Our algorithm, called Line Segment Tracking, constructs small track stubs from adjoining detector regions, and then successively links these track stubs that are consistent with typical track trajectories. Since the production of these track stubs is localized, they can be made in parallel, which lends itself to architectures like GPUs and multi-core CPUs that can exploit this parallelism. We establish an implementation of our algorithm in the context of the CMS Phase-2 Tracker which runs on NVIDIA Tesla V100 GPUs, and measure the physics performance and the computing time.
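
        A toy illustration of the segment-then-link idea (not the CMS Phase-2 GPU implementation) is sketched below, pairing hits in adjacent layers into segments and then linking segments that share a hit:

        # Schematic bottom-up linking of short track segments into hit triplets.
        import numpy as np

        def make_segments(hits_inner, hits_outer, max_dphi=0.05):
            """Pair hits in adjacent layers whose azimuthal angles are compatible."""
            segments = []
            for i, hi in enumerate(hits_inner):
                for j, ho in enumerate(hits_outer):
                    dphi = np.arctan2(ho[1], ho[0]) - np.arctan2(hi[1], hi[0])
                    if abs(dphi) < max_dphi:
                        segments.append((i, j))
            return segments

        def link_segments(seg_ab, seg_bc):
            """Link segments that share their middle hit, forming hit triplets."""
            return [(a, b1, c) for (a, b1) in seg_ab for (b2, c) in seg_bc if b1 == b2]

        layer1 = np.array([[1.0, 0.00], [1.0, 0.50]])
        layer2 = np.array([[2.0, 0.01], [2.0, 1.10]])
        layer3 = np.array([[3.0, 0.02]])
        triplets = link_segments(make_segments(layer1, layer2),
                                 make_segments(layer2, layer3))
        print(triplets)   # [(0, 0, 0)]: the roughly straight hit combination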

        Speaker: Balaji Venkat Sathia Narayanan (Univ. of California San Diego (US))
      • 15:00
        traccc - GPU Track reconstruction demonstrator for HEP 25m

        In future HEP experiments, there will be a significant increase in the computing power required for track reconstruction due to the large data size. As track reconstruction is inherently parallelizable, heterogeneous computing with GPU hardware is expected to outperform conventional CPUs. To achieve better maintainability and high quality of track reconstruction, a host-device compatible event data model and tracking geometry are necessary. However, such a flexible design can be challenging because many GPU APIs restrict the usage of modern C++ features and also have a complicated user interface. To overcome those issues, the ACTS community has launched several R&D projects: traccc as a GPU track reconstruction demonstrator, detray as a GPU geometry builder, and vecmem as a GPU memory management tool. The event data model of traccc is designed using the vecmem library, which provides an easy user interface to host and device memory allocation through C++ standard containers. For a realistic detector design, traccc utilizes the detray library, which applies compile-time polymorphism in its detector description. A detray detector can be shared between the host and the device, as the detector subcomponents are serialized in a vecmem-based container. Within traccc, tracking algorithms including hit clusterization and seed finding have been ported to multiple GPU APIs. In this presentation, we highlight the recent progress in traccc and present benchmarking results of the tracking algorithms.

        Speaker: Beomki Yeo (Lawrence Berkeley National Lab. (US))
    • 15:30 15:36
      Poster session
      • 15:30
        Linearized Track-Fitting on an FPGA 3m

        For the ATLAS experiment at the High-Luminosity LHC, a hardware-based track-trigger was originally envisioned, which performs pattern recognition via AM ASICs and track fitting on an FPGA.
        A linearized track fitting algorithm is implemented in the Track-Fitter, which receives track candidates as well as the corresponding fit constants from a database, performs the $\chi^2$ test of the track, and calculates the helix parameters.
        A prototype of the Track-Fitter has been set up on an Intel Stratix 10 FPGA.
        Its firmware was tested in simulation studies and verified on the hardware.
        The performance of the Track-Fitter has been evaluated in extensive simulation studies and these results will be presented in this talk.
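
        As a numerical illustration of the linearized-fit idea (precomputed constant matrices acting on the hit-coordinate vector), a small sketch is given below; the matrices are random placeholders, not real fit constants:

        # Schematic linearized track fit: helix parameters and chi2 components are
        # linear functions of the hit coordinates, with constants precomputed offline.
        import numpy as np

        rng = np.random.default_rng(1)
        n_coords, n_params = 8, 5        # e.g. 8 hit coordinates, 5 helix parameters

        A = rng.normal(size=(n_params, n_coords))              # helix-parameter constants
        b = rng.normal(size=n_params)
        S = rng.normal(size=(n_coords - n_params, n_coords))   # chi2 constants
        s = rng.normal(size=n_coords - n_params)

        x = rng.normal(size=n_coords)    # hit coordinates of one track candidate

        helix = A @ x + b                # e.g. (d0, z0, phi, cot(theta), q/pT)
        chi2 = float(np.sum((S @ x + s) ** 2))
        print(helix, chi2)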

        Speaker: Joachim Zinsser (Ruprecht Karls Universitaet Heidelberg (DE))
      • 15:33
        Optimizing the Exa.TrkX Inference Pipeline for Manycore CPUs 3m

        The reconstruction of charged particle trajectories is an essential component of high energy physics experiments. Recently proposed pipelines for track finding, built on Graph Neural Networks (GNNs), provide high reconstruction accuracy, but need to be optimized in terms of speed, especially for online event filtering. Like other deep learning implementations, both the training and inference of particle tracking methods can be optimized to fully benefit from the GPU’s parallelism. However, the inference of particle reconstruction could also benefit from multicore parallel processing on CPUs. In this context, it is imperative to explore the impact of the number of CPU cores on the inference speed. Using the multiple-CPU-thread capability of PyTorch and the Facebook AI Similarity Search (Faiss) library, multiprocessing for the filtering inference loop, and the weakly connected components algorithm for labeling results in lower latency for the inference pipeline. This tracking pipeline based on Graph Neural Networks (GNNs) is evaluated on multi-core Intel Xeon Gold 6148s Skylake and Intel Xeon 8268s Cascade Lake CPUs. Computational time is measured and compared using different numbers of cores per task. The experiments show that the multi-core parallel execution outperforms the sequential one.
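
        Two of the CPU-side ingredients mentioned above can be sketched with standard tools, as below (an illustration on toy data, not the Exa.TrkX code): setting the number of PyTorch intra-op threads, and labeling track candidates with a weakly-connected-components pass via scipy.

        import numpy as np
        import torch
        from scipy.sparse import coo_matrix
        from scipy.sparse.csgraph import connected_components

        torch.set_num_threads(16)            # use 16 intra-op CPU threads for inference

        # Toy directed edge list surviving the GNN filter, over 6 hits.
        src = np.array([0, 1, 3, 4])
        dst = np.array([1, 2, 4, 5])
        n_hits = 6
        adj = coo_matrix((np.ones_like(src), (src, dst)), shape=(n_hits, n_hits))

        n_tracks, labels = connected_components(adj, directed=True, connection='weak')
        print(n_tracks, labels)              # 2 track candidates: hits {0,1,2} and {3,4,5}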

        Speaker: Alina Lazar
    • 16:00 18:00
      Plenary PCTS conference room (4th floor) (Jadwin Hall, Princeton University)

      Convener: Markus Elsing (CERN)
      • 16:00
        ATLAS ITk Track Reconstruction with a GNN-based Pipeline 25m

        Graph Neural Networks (GNNs) have been shown to produce high accuracy performance on track reconstruction in the TrackML challenge. However, GNNs are less explored in applications with noisy, heterogeneous or ambiguous data. These elements are expected from HL-LHC ATLAS Inner Tracker (ITk) detector data, when it is reformulated as a graph. We present the first comprehensive studies of a GNN-based track reconstruction pipeline on ATLAS-generated ITk data. Significant challenges exist in translating graph methods to this dataset, including processing shared spacepoints, and cluster-to-spacepoint mapping. We analyze several approaches to low-latency and high-efficiency graph construction from this processed data. Further, innovations in GNN training are required for ITk, for example memory management for the very large ITk point clouds, and novel constructions of loss for noisy spacepoints and background tracks.

        Following these upgrades to the earlier Exa.TrkX pipeline, we are able to present the shared state-of-the-art physics performance on full-detector, full-pileup ITk events. We show that a GNN-based pipeline maintains tracking efficiency that is robust to the significant backgrounds and volume-to-volume variation within the detector, across a wide range of pseudorapidity and transverse momenta. We present a set of reconstruction cuts adapted to the GNN pipeline, and see competitive performance compared to the current ATLAS reconstruction chain. Several methods for post-processing GNN output are explored, for either very fast triplet seeding on GPU, or for recovering efficiency with learned embeddings of tracklets and with Kalman Filters. Finally, the performance of different configurations of the GNN architecture is considered, for several possible hardware and latency configurations.

        Speaker: Charline Rougier (Laboratoire des 2 Infinis - Toulouse, CNRS / Univ. Paul Sabatier (FR))
      • 16:30
        Graph Neural Networks for Pattern Recognition & Fast Track Finding 25m

        Particle track reconstruction is a challenging problem in modern high-energy physics detectors where existing algorithms do not scale well with a growing data stream. The development of novel methods incorporating machine learning techniques is a vibrant and ongoing area of research. In the past two years, algorithms for track pattern recognition based on graph neural networks (GNNs) have emerged as a particularly promising technique. Previous research has included edge & node classification via training multi-layered perceptrons. Here we present a novel and unique approach to track finding utilising a GNN-based architecture in an unsupervised manner, allowing the network to learn patterns as it evolves.
        The development of the GNN-based framework leverages information aggregation to iteratively improve the precision of track parameters and extract compatible track candidates. To efficiently exploit a priori knowledge about charged particle dynamics, Gaussian mixture techniques and Kalman filters are embedded within the track following network. Gaussian mixtures are used to approximate the densities of track states and Kalman filtering is used as a mechanism for information propagation across the neighbourhood, as well as track extraction. The excitation/inhibition rules of individual edge connections are designed to facilitate the “simple-to-complex” approach for “hits-to-tracks” association, such that the network starts with low hit density regions of an event and gradually progresses towards more complex areas.
        We discuss preliminary results from the application of the GNN-based architecture on the TrackML dataset, a simulation of an LHC-inspired tracking detector. Track reconstruction efficiency and track purity metrics are also presented. This work aims at implementing a realistic GNN-based algorithm for fast track finding that can be deployed in the ATLAS detector at the LHC.

        Speaker: Nisha Lad (UCL)
      • 17:00
        Graph Neural Networks for Charged Particle Tracking on FPGAs 25m

        The determination of charged particle trajectories in collisions at the CERN Large Hadron Collider (LHC) is an important but challenging problem, especially in the high interaction density conditions expected during the future high-luminosity phase of the LHC (HL-LHC). Graph neural networks (GNNs) are a type of geometric deep learning algorithm that has successfully been applied to this task by embedding tracker data as a graph---nodes represent hits, while edges represent possible track segments---and classifying the edges as true or fake track segments. However, their study in hardware- or software-based trigger applications has been limited due to their large computational cost. In this talk, we introduce an automated translation workflow, integrated into a broader tool called hls4ml, for converting GNNs into firmware for field-programmable gate arrays (FPGAs). We use this translation tool to implement GNNs for charged particle tracking, trained using the TrackML challenge dataset, on FPGAs with designs targeting different graph sizes, task complexities, and latency/throughput requirements. This work could enable the inclusion of charged particle tracking GNNs at the trigger level for HL-LHC experiments.

        Speaker: Abdelrahman Elabd (IRIS-HEP)
    • 18:00 18:03
      Poster session
      • 18:00
        Software Performance of the ATLAS Track Reconstruction for LHC Run 3 3m

        This poster summarizes the main changes to the ATLAS experiment’s Inner Detector track reconstruction software chain in preparation for LHC Run 3 (2022-2024). The work was carried out to ensure that the expected high-activity collisions (with on average 50 simultaneous proton-proton interactions per bunch crossing, pile-up) can be reconstructed promptly using the available computing resources while maintaining good physics performance. Performance figures in terms of CPU consumption for the key components of the reconstruction algorithm chain and their dependence on the pile-up are shown, as well as physics performance figures of preliminary data from Run-3 commissioning. For the design pile-up value of 60 the updated track reconstruction is a factor of 2 faster than the previous version.

        Speaker: Makayla Vessella (University of Massachusetts (US))
    • 18:30 20:00
      Poster Session / Reception: Frick Laboratory patio

    • 09:00 10:30
      Plenary PCTS conference room (4th floor) (Jadwin Hall, Princeton University)

      Convener: Andreas Salzburger (CERN)
      • 09:00
        Vecpar - a portable parallelization library 25m

        As High Energy Physics collider experiments continue to push the boundaries of instantaneous luminosity, the corresponding increase in particle multiplicities poses significant computing challenges. Although most of today’s supercomputers provide shared-memory nodes and accelerators to boost the performance of scientific applications, the latest hardware has little impact unless the code exposes parallelism. Therefore, several frameworks have been developed to support parallel execution on different architectures, but the performance gain is usually obtained by either compromising on portability or requiring significant effort to adopt, both of which can be issues for experiments’ developer communities.
        In this talk, we introduce a new portability framework which aims to reduce the programming effort in porting and maintaining code that targets heterogeneous architectures, while ensuring improved wall-clock run times over the initial sequential implementation. Different parallelization strategies and ways of combining them are also discussed together with the known limitations. Additionally, we show preliminary performance measurements on a 4th order Runge-Kutta-Nyström stepper implementation from the ACTS GPU R&D project detray, which demonstrate the library’s potential even though it is still in the early development stage.

        Speaker: Georgiana Mania (Deutsches Elektronen-Synchrotron (DE))
      • 09:30
        Present and future of online tracking in CMS 25m

        We describe the expected evolution of tracking algorithms in the CMS high-level trigger including Run 3 and HL-LHC. Results will include those from the recent CMS DAQ technical design report and how the adoption of heterogeneous architectures enables novel tracking approaches.

        Speaker: Adriano Di Florio (Politecnico e INFN, Bari)
      • 10:00
        Accelerated graph building for particle tracking graph neural nets 25m

        The CMS experiment is undergoing upgrades that will increase the average pileup from 50 to 140, and eventually 200. The high level trigger at CMS will experience an increase in data size by a factor of five. With current algorithms, this means that almost 50% of the high level trigger time budget is spent on particle track reconstruction. Graph neural nets have shown promise as an alternative algorithm for particle tracking. They are still subject to several constraints, e.g. momentum cuts, or not allowing for missing hits in a track. The graphs also have several orders of magnitude more fake edges than real, causing slow graph building. Alternative ways of building the graphs are explored to address these limitations. Reinforcement learning and seeded graph building are both introduced as potential alternatives. Some preliminary results suggest that reinforcement learning can result in quicker graph building with fewer physics restrictions.

        Speaker: Liv Helen Vage (Imperial College (GB))
    • 10:30 10:40
      Workshop photo 10m
    • 11:10 12:25
      YSF Plenary PCTS conference room (4th floor) (Jadwin Hall, Princeton University)

      Convener: Andreas Salzburger (CERN)
      • 11:10
        Jet Flavor Tagging Using Graph Neural Networks 15m

        Flavor tagging, the identification of jets originating from b and c quarks, is a critical component of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC). Current flavor tagging algorithms rely on the outputs of "low level" taggers, which focus on particular approaches for identifying the experimental signature of heavy flavor jets. These low level taggers are a mixture of handcrafted algorithms and trained machine learning models.

        A new approach currently under development aims to replace this process with a single machine learning model which can be trained end-to-end and does not require inputs from any existing low level taggers, leading to a reduced overall complexity of the model. The model uses a custom Graph Neural Network (GNN) architecture to combine information from a variable number of tracks within a jet in order to simultaneously predict the flavor of the jet, the partitioning of tracks in the jet into vertices, and information about the decay chain which produced tracks in the jet. These auxiliary training tasks are shown to improve performance, but importantly also increase the explainability of the model.

        This approach compares favourably with existing state of the art methods, and has the potential to significantly improve performance in cases where the current low level tagging algorithms are not optimally tuned, for example c-jet identification, and flavor tagging at high transverse momentum. Due to significantly reduced need for manual optimisation, application of this method could lead to improved performance with significantly reduced person power. The model is also being investigated for uses extending beyond standard b- and c-tagging applications, for example Xbb tagging, and the reconstruction of displaced decays for LLP searches.
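
        As a hedged sketch of the multi-task training objective this kind of model uses (illustrative only; the task set, weights and shapes below are assumptions, not the ATLAS implementation):

        # Combine a main jet-flavor loss with auxiliary per-track and track-pair losses.
        import torch
        import torch.nn.functional as F

        def combined_loss(jet_logits, jet_label,
                          track_origin_logits, track_origin_labels,
                          vertex_pair_logits, vertex_pair_labels, w_aux=0.5):
            # Main task: jet flavor (e.g. b / c / light) classification.
            loss_jet = F.cross_entropy(jet_logits.unsqueeze(0), jet_label.unsqueeze(0))
            # Auxiliary task 1: classify the origin of each track in the jet.
            loss_origin = F.cross_entropy(track_origin_logits, track_origin_labels)
            # Auxiliary task 2: do two tracks come from the same vertex? (pairwise)
            loss_vertex = F.binary_cross_entropy_with_logits(vertex_pair_logits,
                                                             vertex_pair_labels)
            return loss_jet + w_aux * (loss_origin + loss_vertex)

        # Toy shapes: 3 jet classes, 10 tracks with 4 origin classes, 45 track pairs.
        loss = combined_loss(torch.randn(3), torch.tensor(0),
                             torch.randn(10, 4), torch.randint(0, 4, (10,)),
                             torch.randn(45), torch.rand(45))
        print(float(loss))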

        Speaker: Nilotpal Kakati (Weizmann Institute of Science (IL))
      • 11:30
        One person’s trash is another person’s treasure: expanding physics reach with unused tracks 15m

        The physics reach of the LHCb detector can be extended by reconstructing particles with a long lifetime that decay downstream of the dipole magnet, using only hits in the furthest tracker from the interaction point. This allows for electromagnetic dipole moment measurements, and increases the reach of beyond the Standard Model long-lived particle searches. However, using tracks to reconstruct particles decaying in this region is challenging, particularly due to the increased combinatorics and reduced momentum and vertex resolutions, which is why it has not been done until now. New approaches have been developed to meet the challenges and obtain worthwhile physics from these previously unused tracks. This talk presents the feasibility demonstration studies performed using Run-2 data, as well as new developments that expand these techniques for further gains in Run-3.

        Speaker: Izaac Sanderswood (Univ. of Valencia and CSIC (ES))
      • 11:50
        LHCb's Forward Tracking algorithm for the Run 3 CPU-based online track reconstruction sequence 15m

        In Run 3 of the LHC the LHCb experiment faces very high data rates containing beauty and charm hadron decays. Thus the task of the trigger is not to select any beauty and charm events, but to select those containing decays interesting for the LHCb physics programme. LHCb has therefore implemented a real-time data processing strategy to trigger directly on fully reconstructed events. The first stage of the purely software-based trigger is implemented on GPUs performing a partial event reconstruction. In the second stage of the software trigger, the full, offline-quality event reconstruction is performed on CPUs, with a crucial part being track reconstruction, balancing track finding efficiency, fake track rate and event throughput. In this talk, LHCb's CPU-based track reconstruction sequence for Run 3 is presented, highlighting the "Forward Tracking", which is the algorithm that reconstructs charged particle trajectories traversing all of LHCb's tracking detectors. To meet event throughput requirements, the "Forward Tracking" uses SIMD instructions in several core parts of the algorithm, such as the Hough transform and cluster search. These changes led to an event throughput improvement of the algorithm of 60%.

        Speaker: André Günther (Ruprecht Karls Universitaet Heidelberg (DE))
      • 12:10
        Improved Track Reconstruction Performance for Long-lived Particles in ATLAS 15m

        Searches for long-lived particles (LLPs) are among the most promising avenues for discovering physics beyond the Standard Model at the LHC. However, displaced signatures are notoriously difficult to identify due to their ability to evade standard object reconstruction strategies. In particular, the default ATLAS track reconstruction applies strict pointing requirements which limit sensitivity to charged particles originating far from the primary interaction point. To recover efficiency for LLPs decaying within the tracking detector volume, ATLAS employs a dedicated large-radius tracking (LRT) pass with loosened pointing requirements, taking as input the hits left over from the primary track reconstruction. During Run 2 of the LHC, the LRT implementation was highly efficient but suffered from a large number of incorrectly reconstructed track candidates ("fakes") which prohibited it from being run in the standard reconstruction chain. Instead, a small subset of the data was preselected for LRT reconstruction using information from the standard object reconstruction. In preparation for LHC Run 3, ATLAS has completed a major effort to improve both standard and large-radius track reconstruction performance which allows for LRT to run in all events, expanding the potential phase-space of LLP searches and streamlining their analysis workflow. This talk will highlight the above achievement and report on the readiness of the ATLAS detector for track-based LLP searches in Run 3.

        Speaker: Jackson Carl Burzynski (Simon Fraser University (CA))
    • 14:00 15:30
      Plenary PCTS conference room (4th floor) (Jadwin Hall, Princeton University)

      Convener: Isobel Ojalvo (Princeton University (US))
      • 14:00
        Event Filter Tracking for the Upgrade of the ATLAS Trigger and Data Acquisition System 25m

        This submission describes revised plans for Event Filter Tracking in the upgrade of the ATLAS Trigger and Data Acquisition System for the high pileup environment of the High-Luminosity Large Hadron Collider (HL-LHC). The new Event Filter Tracking system is a flexible, heterogeneous commercial system consisting of CPU cores and possibly accelerators (e.g., FPGAs or GPUs) to perform the compute-intensive Inner Tracker charged-particle reconstruction. Demonstrators based on commodity components have been developed to support the proposed architecture: a software-based fast-tracking demonstrator, an FPGA-based demonstrator, and a GPU-based demonstrator. Areas of study are highlighted in view of a final system for HL-LHC running.

        Speaker: Jiri Masik (University of Manchester (GB))
      • 14:30
        4D Track Reconstruction at the sPHENIX Experiment 25m

        The sPHENIX detector is a next generation QCD experiment being constructed for operation at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. sPHENIX will collect high statistics $p+p$, $p$+Au, and Au+Au data starting in 2023. The high luminosities that RHIC will deliver create a complex track reconstruction environment that is comparable to the High Luminosity LHC. Tens of thousands of hits need to be reconstructed into tracks associated with the primary collision and pile up interactions. To further complicate data taking, sPHENIX will operate a streaming read-out data acquisition system where data will be recorded without an explicit association to an event designated by a hardware trigger. To meet its physics requirements, sPHENIX has developed track reconstruction software using the A Common Tracking Software (ACTS) package that reconstructs tracks utilizing 3D measurements and timing information from the tracking detectors. In this talk, the sPHENIX 4D track reconstruction will be discussed in the context of triggered and streaming data taking modes.

        Speaker: Joe Osborn (Oak Ridge National Laboratory)
      • 14:55
        The ATLAS Inner Detector tracking trigger at 13 TeV in LHC Run-2 and new developments on standard and unconventional tracking signatures for the upcoming LHC Run-3 25m

        The performance of the Inner Detector tracking trigger of the ATLAS experiment at the LHC is evaluated for the data-taking period of Run-2 (2015-2018). The Inner Detector tracking was used for the muon, electron, tau, and b-jet triggers, and its high performance is essential for a wide variety of ATLAS physics programs such as many precision measurements of the Standard Model and searches for new physics. The detailed efficiencies and resolutions of the trigger in a wide range of physics signatures are presented for the Run 2 data.
        From the upcoming Run-3, starting in 2022, the application of Inner Detector tracking in the trigger is planned to be significantly expanded, in particular, full-detector tracking will be utilized for hadronic signatures (such as jets and missing transverse energy triggers) for the first time. To meet computing resource limitations, various improvements, including machine-learning-based track seeding, have been developed.

        Speaker: Jonathan Long (Univ. Illinois at Urbana Champaign (US))
    • 16:00 17:45
      Plenary PCTS conference room (4th floor) (Jadwin Hall, Princeton University)

      Convener: Isobel Ojalvo (Princeton University (US))
      • 16:00
        Towards a Track Condensation Network 15m

        Tracker data is naturally represented as a graph by embedding tracker hits as nodes and hypothesized particle trajectories as edges. Edge-classifying graph neural networks (GNNs) have demonstrated powerful performance in rejecting unphysical edges from such graphs, yielding a set of disjoint subgraphs that ideally correspond to individual tracks. Post-processing modules, for example clustering algorithms like DBSCAN, are typically applied to the edge-weighted graphs to find tracks. In this work, we consider a learned approach to track clustering in a GNN pipeline. Our results are based on object condensation, a multi-objective learning framework designed to perform clustering and property prediction in one shot. Key results will be shown at each stage of this pipeline, including graph construction, edge classification, track hit clustering, noise rejection, and track property prediction.
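
        As a point of reference for the classical post-processing mentioned above, a minimal DBSCAN pass over toy learned hit embeddings is sketched below (sklearn; the embeddings are random and purely illustrative):

        # Cluster hit embeddings with DBSCAN: each cluster is a track candidate,
        # label -1 marks noise hits.
        import numpy as np
        from sklearn.cluster import DBSCAN

        rng = np.random.default_rng(2)
        track_a = rng.normal(loc=[0.0, 0.0], scale=0.05, size=(20, 2))   # tight cluster
        track_b = rng.normal(loc=[1.0, 1.0], scale=0.05, size=(20, 2))   # tight cluster
        noise = rng.uniform(-2, 3, size=(10, 2))                         # scattered hits
        embeddings = np.vstack([track_a, track_b, noise])

        labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(embeddings)
        print(set(labels))    # e.g. {0, 1, -1}: two track candidates plus noise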

        Speaker: Gage DeZoort (Princeton University (US))
      • 16:20
        Graph Neural Network for Three Dimensional Object Reconstruction in Liquid Argon Time Projection Chambers 25m

        The Exa.TrkX project presents a graph neural network (GNN) technique for low-level reconstruction of neutrino interactions in a Liquid Argon Time Projection Chamber (LArTPC). GNNs are still a relatively novel technique, and have shown great promise for similar reconstruction tasks at the LHC. Graphs describing particle interactions are formed by treating each detector hit as a node, with edges describing the relationships between hits. We utilise a multihead attention message passing network which performs graph convolutions in order to label each node with a particle type.

        We previously demonstrated promising performance for our network. Using Delaunay triangulation as an edge-forming technique, we tested different truth labeling schemes (a “full” version where each particle type is its own category, and a “simple” one that merges some of the categories by similar topologies), as well as different versions with or without edges connecting hits across planes (referred to as 3D and 2D networks, respectively). We found that the best accuracy is achieved by the 2D network, although the 3D network provides higher label consistency across planes.

        In this presentation, in addition to reviewing the network properties and goals, we will demonstrate further improvements in various aspects of the work. These include changes to the merging of categories for the simple truth labeling, the addition of an attention mechanism for 3D edges, and optimization of hyperparameters. After these additions, we will discuss further steps and assess the readiness of the network for usage in real-life neutrino experiments.
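
        A minimal sketch of Delaunay-based edge forming over 2D hit coordinates is given below (scipy, toy positions; the actual plane handling and hit features are omitted):

        # Build an undirected edge list from the Delaunay triangulation of the hits.
        import numpy as np
        from scipy.spatial import Delaunay

        hits = np.random.rand(200, 2)           # toy 2D hit coordinates in one plane
        tri = Delaunay(hits)

        edges = set()
        for simplex in tri.simplices:           # each simplex is a triangle of hit indices
            for i in range(3):
                a, b = simplex[i], simplex[(i + 1) % 3]
                edges.add((min(a, b), max(a, b)))

        print(len(edges), "undirected edges among", len(hits), "hits")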

        Speaker: Kaushal Gumpula (UChicago)
      • 16:50
        Fast and flexible data structures for the LHCb run 3 software trigger 25m

        Starting this year, the upgraded LHCb detector will collect data with a pure software trigger. In its first stage, reducing the rate from 30 MHz to about 1 MHz, GPUs are used to reconstruct and trigger on B and D meson topologies and high-pT objects in the event. In its second stage, a CPU farm is used to reconstruct the full event and perform candidate selections, which are persisted for offline use with an output rate of about 10 GB/s. Fast data processing, flexible and custom-designed data structures tailored for SIMD architectures, and efficient storage of the intermediate data at the various steps of the processing pipeline onto persistent media, e.g. tape, are essential to guarantee the full physics program of LHCb. In this talk, we will present the event model and data persistency developments for the trigger of LHCb in Run 3. Particular emphasis will be given to the novel software-design aspects with respect to the Run 1+2 data taking, the performance improvements which can be achieved, and the experience of restructuring a major part of the reconstruction software in a large HEP experiment.

        Speaker: Sevda Esen (Universitaet Zuerich (CH))
    • 19:00 21:00
      Dinner - Mistral restaurant Princeton downtown

      Location: Mistral Restaurant (downtown, 1 block from Nassau Inn)

      Time: 7pm

    • 09:00 10:00
      Plenary PCTS conference room (4th floor) (Jadwin Hall, Princeton University)

      Convener: Paolo Calafiura (Lawrence Berkeley National Lab. (US))
      • 09:00
        Tracking on GPU at LHCb’s fully software trigger 25m

        The LHCb experiment will use a fully software-based trigger to collect data from 2022 at an event rate of 30 MHz. During the first stage of the High-Level Trigger (HLT1), a partial track reconstruction is performed on charged particles to select interesting events using efficient parallelisation techniques on GPU cards. This stage alone will reduce the event rate by at least a factor of 30. Reconstructing tracks at 30 MHz represents a challenge which requires very efficient tracking algorithms and high parallelisation. The talk will particularly focus on tracking algorithms specialised in reconstructing particles traversing the whole LHCb detector.

        Speaker: Alessandro Scarabotto (Centre National de la Recherche Scientifique (FR))
      • 09:30
        Track reconstruction in the LUXE experiment using quantum algorithms 25m

        LUXE (Laser Und XFEL Experiment) is a proposed experiment at DESY which will study Quantum Electrodynamics (QED) in the strong-field regime, where QED becomes non-perturbative. The measurement of the rate of electron-positron pair creation, an essential ingredient for studying this regime, is enabled by the use of a silicon tracking detector. Precision tracking of positrons traversing the four layers of the tracking detector becomes very challenging at high laser intensities due to the high rates, which can be computationally expensive for classical computers. In this talk, a preliminary study of the potential of quantum computers to reconstruct positron tracks will be presented. The reconstruction problem is formulated in terms of a Quadratic Unconstrained Binary Optimisation (QUBO), allowing it to be solved using quantum computers and hybrid quantum-classical algorithms such as the Variational Quantum Eigensolver (VQE) and the Quantum Approximate Optimisation Algorithm (QAOA). Different ansatz circuits and optimisers are studied. The results are discussed and compared with classical track reconstruction algorithms based on a Graph Neural Network and a Combinatorial Kalman Filter.
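
        To make the QUBO formulation concrete, a toy example is sketched below: binary variables represent candidate track triplets, linear terms reward good triplets and quadratic terms penalize pairs that share a hit. The coefficients are invented for illustration, and the toy problem is solved by brute force rather than by VQE/QAOA:

        # Toy QUBO: minimize a @ x + 0.5 * x @ b @ x over binary x.
        import itertools
        import numpy as np

        a = np.array([-1.0, -0.8, -0.9, -0.2])        # linear terms: negative = good triplet
        b = np.zeros((4, 4))
        b[0, 1] = b[1, 0] = 2.0                       # triplets 0 and 1 share a hit
        b[2, 3] = b[3, 2] = 2.0                       # triplets 2 and 3 share a hit

        def qubo_energy(x):
            x = np.asarray(x, dtype=float)
            return float(a @ x + x @ b @ x / 2.0)

        best = min(itertools.product([0, 1], repeat=4), key=qubo_energy)
        print(best, qubo_energy(best))                # (1, 0, 1, 0): keep triplets 0 and 2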

        Speaker: David Spataro
    • 10:00 11:00
      YSF Plenary PCTS conference room (4th floor) (Jadwin Hall, Princeton University)

      Convener: Paolo Calafiura (Lawrence Berkeley National Lab. (US))
      • 10:00
        GPU-based algorithms for the CMS track clustering and primary vertex reconstruction for the Run 3 and Phase II of the LHC 15m

        The high luminosity expected from the LHC during Run 3 and, especially, Phase II of data taking introduces significant challenges in the CMS event reconstruction chain. The additional computational resources needed to treat this increased quantity of data surpass the expected increase in processing power in the coming years.

        As a possible solution to this problem, CMS is investigating the usage of heterogeneous architectures, including both CPUs and GPUs, which can fulfill the processing needs of the Phase II online and offline reconstruction. A prototype system using this machinery has already been deployed and will be operated in the HLT reconstruction during Run 3 data taking, both to prove the feasibility of the system and to gain additional experience in its usage towards the more challenging Phase II scenarios.

        Track clustering and primary vertex reconstruction take a significant fraction of the reconstruction chain and involve similar computations over hundreds to thousands of reconstructed tracks. As a consequence, they are natural candidates for the development of a GPU-based algorithm that parallelizes these computations. We will discuss the status of such an algorithm, and the challenges introduced by the need to reproduce the high performance already provided by the CPU-based version.

        Speaker: Carlos Francisco Erice Cid (Boston University (US))
      • 10:20
        Track Finding for the PANDA Experiment 15m

        The PANDA experiment at FAIR (Facility for Antiproton and Ion Research) in Darmstadt is a fixed-target experiment currently under construction. The accelerator will be operated at beam momenta from 1.5 GeV/c to 15 GeV/c to perform hadron spectroscopy and nuclear structure studies. In this context, the production and decay of heavy baryons containing strange quarks, so-called hyperons, is of particular interest.
        Track reconstruction is essential for hyperon detection, and this task is even more challenging because hyperons typically fly several centimeters before they decay. Therefore, secondary track finding plays a key role for PANDA. One of the most challenging parts of track finding in PANDA is the complex data topology. Usually, tracking algorithms use two-dimensional or three-dimensional hit points to perform a circle or a helix fit. PANDA additionally features time information on top of the 2D measurement, which results in cylindrical measurements that are tangent to the tracks.
        Different tracking algorithms dealing with these challenges will be presented. Furthermore, the reconstruction efficiency for a typical hyperon decay will be analyzed using simulated data. It will be shown that the reconstruction efficiency could be improved by 30% compared to the currently existing tracking algorithm in PANDA.

        Speaker: Anna Alicke
      • 10:40
        Exploration of different parameter optimization algorithms within the context of ACTS software framework 15m

        Particle track reconstruction is one of the most important parts of the full event reconstruction chain and has a profound impact on the detector and physics performance. The underlying tracking software is also very complex and consists of a number of mathematically intense algorithms, each dealing with a particular tracking sub-process. These algorithms have many input parameters that must be supplied beforehand. However, it is very difficult to know the configuration of these parameters that yields the most efficient outcome. Currently, the input values of these parameters are decided mainly on the basis of prior experience and some brute-force techniques. A parameter optimization approach that is able to automatically tune these parameters for high efficiency and low fake and duplicate rates is highly desirable. In this study, we explore various machine learning based optimization methods to devise a suitable technique that can be used to optimize parameters in a complex tracking environment. These methods are evaluated on the basis of a metric that targets high efficiency while keeping the duplicate and fake rates small. We mainly focus on derivative-free optimization approaches that can be applied to problems involving non-differentiable loss functions. For our studies, we consider the tracking algorithms defined within the A Common Tracking Software (ACTS) framework. We test our methods using simulated data from the ACTS software corresponding to the Generic detector and the ATLAS Inner Tracker (ITk) detector geometries.
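
        A schematic derivative-free tuning loop of the kind described above is sketched below, using scipy's differential evolution on a toy stand-in objective; the parameter names are hypothetical, and the real study evaluates efficiency, fake rate and duplicate rate from full tracking runs:

        # Tune two toy tracking parameters with a derivative-free optimizer.
        import numpy as np
        from scipy.optimize import differential_evolution

        def tracking_score(params):
            """Toy stand-in: pretend (max_seed_chi2, min_pt) map to eff/fake/dup rates."""
            max_seed_chi2, min_pt = params
            eff = 1.0 - 0.1 * abs(max_seed_chi2 - 15.0) / 15.0 - 0.05 * min_pt
            fake = 0.05 * max_seed_chi2 / 15.0
            dup = 0.02 * max_seed_chi2 / 15.0
            return -(eff - fake - dup)       # minimize the negative combined score

        bounds = [(5.0, 50.0),               # max_seed_chi2 (hypothetical parameter)
                  (0.1, 2.0)]                # min_pt in GeV (hypothetical parameter)
        result = differential_evolution(tracking_score, bounds, seed=0, maxiter=50)
        print(result.x, -result.fun)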

        Speaker: Rocky Bala Garg (Stanford University (US))
    • 11:30 12:40
      YSF Plenary PCTS conference room (4th floor) (Jadwin Hall, Princeton University)

      Convener: Paolo Calafiura (Lawrence Berkeley National Lab. (US))
      • 11:30
        Track Reconstruction using Geometric Deep Learning in the Straw Tube Tracker (STT) at the PANDA Experiment 15m

        The main purpose of the PANDA (anti-Proton ANnihilation at DArmstadt) experiment at FAIR (Facility for Anti-proton and Ion Research) is to study strong interactions at the scale where quarks form hadrons. In PANDA, a continuous beam of anti-protons ($\bar{p}$), 1.5 GeV/c to 15 GeV/c, will impinge on a fixed hydrogen ($p$) target inside the High Energy Storage Ring (HESR). This creates optimal conditions for various hadron physics studies, in particular for hyperon physics.

        Identifying and collecting interesting physics signals with PANDA starts with efficient particle track reconstruction. The track reconstruction process strongly depends on the detector geometry and the momenta of the particles. PANDA's Straw Tube Tracker (STT) will be the main component for charged track reconstruction. It has a hexagonal geometry, consisting of 4224 gas-filled tubes arranged in 26 layers and six sectors. Together with the low momenta of the particles ($100$ MeV/c up to $1.5$ GeV/c), track reconstruction becomes a considerable challenge for any reconstruction algorithm. In the low momentum region, the particle trajectories are strongly curved in the PANDA solenoid field. In my work, I investigate geometric deep learning (GDL) as a potential solution to this problem.

        In GDL, Graph Neural Networks (GNNs) are expected not only to capture the non-Euclidean nature of the detector geometry but also to efficiently reconstruct complex particle tracks. In the current work, the track reconstruction in the STT is performed in 2D, i.e. in ($x, y$), using GNNs, and the findings will be presented at this conference.

        Speaker: Mr Adeel Akram (PANDA Collaboration)
      • 11:50
        ATLAS Inner Detector alignment towards Run 3 15m

        The algorithm used in the alignment of the Inner Detector of the ATLAS experiment is based on the track-to-hit residual minimization in a sequence of hierarchical levels (ranging from mechanical assembly structures to individual sensors). It aims to describe the detector geometry and its changes in time as accurately as possible, such that the resolution is not degraded by an imperfect positioning of the signal hit in the track reconstruction.

        The ID alignment during Run2 has proven to describe the detector geometry with a precision at the level of µm [1]. The hit-to-track residual minimization procedure is not sensitive to deformations of the detectors that affect the track parameters while leaving the residuals unchanged. Those geometry deformations are called weak modes. The minimization of the remaining track parameter biases and weak mode deformations has been the main target of the alignment campaign in the reprocessing of the Run2 data. New analysis methods for the weak mode measurement have been therefore implemented, providing a robust geometry description, validated by a wide spectrum of quality-assessment techniques. These novelties are foreseen to be the new baseline methods for the Run3 data-taking, in which the higher luminosity would allow an almost real-time assessment of the alignment performance.

        [1] Eur. Phys. J. C 80, 1194 (2020)

        Speakers: Alexander Thaler (University of Innsbruck (AT)), David Munoz Perez (Univ. of Valencia and CSIC (ES)), Javier Jimenez Pena (Max Planck Society (DE)), Mariam Chitishvili (Instituto de Física Corpuscular (IFIC)), Paolo Sabatini (Instituto de Fisica Corpuscular (IFIC), Centro Mixto Universidad de Valencia - CSIC)
      • 12:10
        Standalone track reconstruction and matching algorithms for GPU-based High level trigger at LHCb 15m

        The LHCb Upgrade for Run 3 has changed the trigger scheme to a full software selection in two steps. The first step, HLT1, will be implemented entirely on GPUs and will run a fast selection aimed at reducing the visible collision rate from 30 MHz to 1 MHz.
        This selection relies on a partial reconstruction of the event. One version of this reconstruction starts with two monolithic tracking algorithms, the VELO-pixel tracking and the HybridSeeding on the Scintillating-Fiber tracker, which reconstruct track segments in their respective sub-detectors in standalone mode. These segments are then combined by a matching algorithm to produce 'long' tracks, which form the basis of the HLT1 reconstruction.
        We discuss the principle of these algorithms as well as the details of their implementation that allow them to run in a high-throughput configuration. Emphasis is put on the optimization of the algorithms themselves to take advantage of the GPU architecture. Finally, results are presented in the context of the LHCb performance requirements for Run 3.
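
        To illustrate the matching idea, the sketch below pairs VELO and SciFi segments by straight-line extrapolation to a common plane and a simple distance window; the state format, matching-plane position and tolerance are illustrative assumptions, and the actual HLT1 algorithm also accounts for the magnetic field and runs on GPUs.

        # Minimal sketch: matching standalone segments into 'long' track candidates (toy numbers).
        import numpy as np

        def match_segments(velo, scifi, z_match=7800.0, window=10.0):
            """Pair segments whose straight-line extrapolations agree at z = z_match (mm).

            velo, scifi : lists of (x0, y0, z0, tx, ty) segment states
            returns     : list of (velo_index, scifi_index) long-track candidates
            """
            pairs = []
            for i, (xv, yv, zv, txv, tyv) in enumerate(velo):
                xv_m = xv + txv * (z_match - zv)
                yv_m = yv + tyv * (z_match - zv)
                for j, (xs, ys, zs, txs, tys) in enumerate(scifi):
                    xs_m = xs + txs * (z_match - zs)
                    ys_m = ys + tys * (z_match - zs)
                    if np.hypot(xv_m - xs_m, yv_m - ys_m) < window:
                        pairs.append((i, j))
            return pairs

        # Toy example: one VELO segment and one compatible SciFi segment.
        velo  = [(0.0, 0.0, 0.0, 0.001, 0.000)]
        scifi = [(7.9, 0.1, 7900.0, 0.001, 0.000)]
        print(match_segments(velo, scifi))   # [(0, 0)]

        A natural GPU mapping would be, for example, one thread per VELO segment; the actual implementation choices are discussed in the talk.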

        Speaker: Mr Brij Kishor Jashal (Instituto de Física Corpuscular, Valencia)
    • 13:30 15:00
      Plenary PCTS conference room (4th floor) (Jadwin Hall, Princeton University)

      Convener: Michel De Cian (EPFL - Ecole Polytechnique Federale Lausanne (CH))
      • 13:30
        Track Reconstruction at the Electron-Ion Collider 25m

        During the past year, substantial progress has been made on the design of Detector 1 for the Electron-Ion Collider (EIC). All proposed detector configurations used a combination of silicon trackers and gas detectors for particle tracking and vertex reconstruction. A DD4hep + Gaudi + Acts + EDM4hep simulation and reconstruction framework was developed by the ATHENA proposal collaboration to demonstrate a tracking performance consistent with the EIC physics requirements. In this talk, I will go over the ATHENA tracking system configuration, reconstruction studies with Acts, and our plan to further this effort for the official EIC Project 'Detector 1'.

        Speaker: Shujie Li (University of New Hampshire)
      • 14:00
        Application of machine learning in muon scattering tomography for better image reconstruction 25m

        Muon Scattering Tomography (MST) is a non-destructive imaging technique that uses cosmic-ray muons to probe three-dimensional objects. It is based on the multiple Coulomb scattering that muons undergo while crossing an object. Muons are deflected and decelerated depending on the density and atomic number of the material, so by studying their deflection, information about the test object may be obtained. We plan to construct an MST setup using two sets of Resistive Plate Chambers (RPCs) to track muons before and after their interaction, in order to identify the material of the test object. RPCs are preferred for their simple design, ease of construction, and cost-effective coverage of large detection areas, along with very good temporal and spatial resolution and detection efficiency. NINO ASICs have been used as discriminators, and their Time Over Threshold (TOT) property has been exploited to obtain better position information. A Field Programmable Gate Array (FPGA) based multi-parameter Data Acquisition System (DAQ) has been developed in this context to collect the position information from the tracking RPCs for subsequent track reconstruction. It offers a simple, low-cost, scalable readout solution for the present MST setup.

        In parallel, a numerical simulation has been carried out with the Geant4 package to optimize the design and performance of the MST setup for material identification. Two sets of RPCs, each consisting of three RPC detectors, have been placed above and below the test object. The Cosmic Ray Library (CRY) has been used as the muon generator, and Geant4 with the FTFP_BERT physics list has been used to simulate the muon interactions in the MST setup. Two track reconstruction algorithms, the Point of Closest Approach (PoCA) and the Binned Cluster Algorithm (BCA), have been used to determine the scattering vertices and scattering angles of each muon and to compare their effectiveness for MST. In this project, we aim to develop a machine-learning-based method for better material identification and to implement it in our MST setup. Its performance may be compared with other methods such as the Metric Discriminator method, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), and the Pattern Recognition Method (PRM).
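
        As an illustration of the PoCA reconstruction mentioned above, the sketch below computes the point of closest approach and the scattering angle between an incoming and an outgoing straight track; the toy geometry and numbers are assumptions, and in the real setup these tracks are fitted from the RPC hits.

        # Minimal sketch: Point of Closest Approach between two straight tracks (toy geometry).
        import numpy as np

        def poca(p1, d1, p2, d2):
            """Return the midpoint of the shortest segment between the lines
            p1 + s*d1 and p2 + t*d2, together with the angle between them."""
            d1 = d1 / np.linalg.norm(d1)
            d2 = d2 / np.linalg.norm(d2)
            w0 = p1 - p2
            a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
            d, e = d1 @ w0, d2 @ w0
            denom = a * c - b * b
            if abs(denom) < 1e-12:                 # parallel tracks: no scattering vertex
                return None, 0.0
            s = (b * e - c * d) / denom
            t = (a * e - b * d) / denom
            vertex = 0.5 * ((p1 + s * d1) + (p2 + t * d2))
            angle = np.arccos(np.clip(d1 @ d2, -1.0, 1.0))
            return vertex, angle

        # Toy example: vertical incoming muon, outgoing track deflected by about 30 mrad.
        vertex, theta = poca(np.array([0.0, 0.0, 100.0]), np.array([0.0, 0.0, -1.0]),
                             np.array([0.3, 0.0, -10.0]), np.array([0.03, 0.0, -1.0]))
        print(vertex, theta)   # vertex near the origin, theta close to 0.03 rad

        The resulting scattering vertices and angles are the kind of input that the reconstruction and classification methods listed above then use for material identification.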

        Speaker: Dr Sridhar Tripathy (York College, New York)
      • 14:30
        Real-time alignment procedure at the LHCb experiment for Run3 25m

        The LHCb detector at the LHC is a general-purpose detector in the forward region with a focus on studying decays of c- and b-hadrons. For Run 3 of the LHC, LHCb will take data at an instantaneous luminosity of $2 \times 10^{33}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$, five times higher than in Run 2 (2015-2018). To cope with the harsher data-taking conditions, LHCb will deploy a purely software-based trigger with a 30 MHz input rate. The software trigger is composed of two stages: the first stage performs a selection based on a fast, simplified event reconstruction, while the second stage uses a full event reconstruction. This leaves room to perform a real-time alignment and calibration after the first trigger stage, providing an offline-quality detector alignment in the second stage of the trigger.

        The detector alignment is an essential ingredient for the best detector performance in the full event reconstruction. The alignment of the whole LHCb tracking system is evaluated in real time by an automatic iterative procedure. This is particularly important for the vertex detector, which is retracted for LHC beam injection and centered around the primary-vertex position once stable beam conditions are reached in each fill; it is therefore sensitive to position changes on a fill-by-fill basis.

        To perform the real-time alignment and calibration of the detector, a new framework that uses a multi-core farm has been developed. This framework parallelizes the event reconstruction, while the evaluation of the constants is performed on a single node after collecting the needed information from all the nodes. The procedure is fully automatic and runs as soon as enough data are collected. The execution of the alignment tasks is under the control of the LHCb Experiment Control System and is implemented as a finite state machine. The data collected at the start of the fill are processed in a few minutes and used to update the alignment before running the second stage of the trigger, which in turn allows the trigger output to be used for physics analysis without further offline event reconstruction. The framework and the procedure for the real-time alignment of the LHCb detector in Run 3 are discussed from both the technical and operational points of view. Specific challenges of this strategy and the foreseen performance are presented.
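
        As an illustration of the collect-in-parallel / solve-on-one-node loop described above, the sketch below iterates a toy alignment update until the constants stop changing; the worker function, tolerances and data are placeholders and do not represent the LHCb framework or its control system.

        # Minimal sketch: parallel accumulation, single-node update, convergence check (toy model).
        from concurrent.futures import ThreadPoolExecutor
        import numpy as np

        TRUE_MISALIGNMENT = np.array([0.05, -0.02, 0.01, 0.00])   # toy detector shifts

        def accumulate_on_node(args):
            """Stand-in for one farm node: mean residual given the current constants."""
            seed, constants = args
            rng = np.random.default_rng(seed)
            return TRUE_MISALIGNMENT + constants + rng.normal(scale=1e-3, size=4)

        def real_time_alignment(n_nodes=8, max_iterations=5, tol=1e-3):
            constants = np.zeros(4)
            for iteration in range(max_iterations):
                # Event reconstruction is parallelized across the farm nodes...
                with ThreadPoolExecutor() as pool:
                    partials = list(pool.map(accumulate_on_node,
                                             [(seed, constants) for seed in range(n_nodes)]))
                # ...while the constants are evaluated on a single node.
                correction = -np.mean(partials, axis=0)
                constants += correction
                if np.max(np.abs(correction)) < tol:   # converged for this fill
                    break
            return constants, iteration + 1

        print(real_time_alignment())   # constants close to -TRUE_MISALIGNMENT after a few iterations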

        Speaker: Florian Reiss (University of Manchester (GB))
    • 15:00 16:00
      Summary and Wrap-up PCTS conference room (4th floor) (Jadwin Hall, Princeton University)

      Convener: Michel De Cian (EPFL - Ecole Polytechnique Federale Lausanne (CH))