Connecting The Dots 2018

Timezone: US/Pacific
Physics-Astronomy Auditorium A118 (University of Washington Seattle)

Shih-Chieh Hsu (University of Washington Seattle (US))
Description

This is a workshop on track reconstruction and other problems in pattern recognition in sparsely sampled data. The workshop is intended to be inclusive across other disciplines wherever similar problems arise. The main focus will be on pattern recognition and machine learning problems that arise, for example, in the reconstruction of particle tracks or jets in high energy physics experiments.

This 2018 edition is the 4th in the Connecting The Dots series (see CTD2015 Berkeley, CTD2016 Vienna, WIT/CTD2017 LAL-Orsay).

The workshop consists of plenary sessions only, with a mix of invited talks and accepted contributions. There will also be a poster session.

Wifi is available on site; eduroam credentials, from your institution or CERN, are recommended (but not mandatory).

Follow us on Twitter @ctdwit; the official hashtag is #ctd2018.

Proceedings will be peer reviewed by the CTD committee and published in EPJ Web of Conferences. Articles must be prepared using the template here and submitted via the Indico interface by June 30. Each author also needs to fill in the publication rights form. The guideline for the number of pages is 10±2 for a regular talk, 7±2 for a Young Scientist Forum talk, and 6±2 for a poster presentation.


Registration
CTD2018 registration
TrackML hackathon registration
Participants
  • Aayush Shah
  • Andreas Salzburger
  • Ashutosh Kotwal
  • Ben Kreis
  • Ben Nachman
  • Benjamin Freund
  • Carl Haber
  • Chengdong Fu
  • Christoph Hasse
  • David Rohr
  • David Rousseau
  • Elena Cuoco
  • Emilia Leogrande
  • Fabrizio Palla
  • Felix Cormier
  • Felix Metzner
  • Gordon Watts
  • Heather Gray
  • Helge Egil Seime Pettersen
  • Henry Lubatti
  • Hossein Afsharnia
  • Hou Keong Lou
  • Illya Shapoval
  • Javier Mauricio Duarte
  • Joona Juhani Havukainen
  • Karolos Potamianos
  • Kostas Ntekas
  • Lindsey Gray
  • Louis-Guillaume Gagnon
  • Markus Elsing
  • Matevz Tadel
  • Matthias Danninger
  • Maurice Garcia-Sciveres
  • Mirco Huennefeld
  • Moritz Kiehn
  • Natasha Woods
  • Nhan Tran
  • Nicholas Styles
  • Nils Braun
  • Noemi Calace
  • Paolo Calafiura
  • Philip Coleman Harris
  • Rebecca Carney
  • Renato Quagliani
  • Roland Jansky
  • Rui Zou
  • S. Hamid Rezatofighi
  • Sabrina Amrouche
  • Salvador Marti I Garcia
  • Samu Taulu
  • Scott Hauck
  • Sebastien Rettie
  • Shashikant Raichand Dugad
  • Shih-Chieh Hsu
  • Simone Pagan Griso
  • Steven Farrell
  • Timon Heim
  • Tomasz Trzcinski
  • Valentin Volkl
  • Valerie Brouillard
  • Vishnu Nandakumar
  • William McCormack
  • Wooseok Jeung
  • Yao Zhang
  • Yashar Hezaveh
  • Ye Yuan
  • Yen-Chi Chen
  • Yeranuhi Ghandilyan
  • Zbynek Drasal
  • Zeyu Liu
  • Zhengcheng Tao
    • 08:30 08:50
      Registration 20m Physics-Astronomy Auditorium A118

    • 08:50 12:30
      Session1 Physics-Astronomy Auditorium A118

      Convener: Markus Elsing (CERN)
      • 08:50
        Welcome 10m
        Speaker: Shih-Chieh Hsu (University of Washington Seattle (US))
      • 09:00
        Developments in pileup suppression techniques at the LHC 25m

        The LHC accelerator is running at unprecedentedly high instantaneous
        luminosities, allowing the experiments to collect a vast amount of
        data. However, this astonishing performance comes with a
        larger-than-designed number of interactions per crossing of proton
        bunches (pile-up). During 2017, values of up to 60 interactions per
        bunch crossing were routinely achieved, capped only by the ability of
        the experiments to cope with such large occupancy. In the future, an
        upgraded LHC accelerator (HL-LHC) is expected to routinely provide
        even larger instantaneous luminosities, with up to 200 interactions
        per bunch crossing. Disentangling the information of a single
        interesting proton-proton collision from the others happening in the
        same bunch crossing is of critical importance to retain high accuracy
        in physics measurements, and is commonly referred to as pile-up
        suppression. In this talk I will review the main challenges and needs
        for pile-up suppression at the LHC, mostly focusing on the ATLAS and
        CMS experiments; I will highlight the techniques used so far and what
        is planned in order to cope with the even larger pile-up expected at
        the HL-LHC.

        Speaker: Simone Pagan Griso (University of California Berkeley (US))
      • 09:30
        First Steps Towards Four-Dimensional Tracking: Timing Layers at the HL-LHC 25m

        The projected proton beam intensity of the High Luminosity Large Hadron Collider (HL-LHC), slated to begin operation in 2026, will result in between 140 and 200 concurrent proton-proton interactions per 25 ns bunch crossing. The scientific program of the HL-LHC, which includes precision Higgs coupling measurements, measurements of vector boson scattering, and searches for new heavy or exotic particles, will benefit greatly from the enormous HL-LHC dataset. However, particle reconstruction and correct assignment to primary interaction vertices present a formidable challenge to the LHC detectors that must be overcome in order to reap that benefit. Time tagging of minimum ionizing particles (MIPs) produced in LHC collisions with a resolution of 30 ps provides further discrimination of interaction vertices in the same 25 ns bunch crossing, beyond that of spatial tracking algorithms. The Compact Muon Solenoid (CMS) and ATLAS Collaborations are pursuing two technologies to provide MIP time tagging for the HL-LHC detector upgrades: LYSO:Ce crystals read out by silicon photomultipliers (SiPMs) for low radiation areas (CMS only) and silicon low gain avalanche detectors (LGADs, CMS and ATLAS) for high radiation areas. This talk will motivate the need for a dedicated timing layer in the CMS and ATLAS upgrades, describe the two technologies and their performance, and present simulations showing the improvements in reconstructed observables afforded by four-dimensional tracking.
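
        As a back-of-the-envelope illustration of the gain (a toy sketch with assumed per-track resolutions, not an experiment simulation): two vertices that are unresolvable in z alone can be separated cleanly once each track carries a ~30 ps time stamp.

          # Toy (z, t) vertex association, Python. Assumed resolutions: 2 mm
          # per-track longitudinal impact parameter, 30 ps MIP timing.
          import numpy as np

          rng = np.random.default_rng(1)

          vtx_a = {"z": 0.0, "t": -150e-12}  # z in mm, t in seconds
          vtx_b = {"z": 0.2, "t": +150e-12}  # 0.2 mm away: hopeless with z alone

          sigma_z, sigma_t = 2.0, 30e-12

          def make_track(vtx):
              return (vtx["z"] + rng.normal(0, sigma_z),
                      vtx["t"] + rng.normal(0, sigma_t))

          tracks = [make_track(vtx_a) for _ in range(20)] + \
                   [make_track(vtx_b) for _ in range(20)]

          def chi2(trk, vtx, use_time):
              c2 = ((trk[0] - vtx["z"]) / sigma_z) ** 2
              if use_time:
                  c2 += ((trk[1] - vtx["t"]) / sigma_t) ** 2
              return c2

          for use_time in (False, True):
              ok = sum((chi2(t, vtx_a, use_time) < chi2(t, vtx_b, use_time)) == (i < 20)
                       for i, t in enumerate(tracks))
              print(f"use_time={use_time}: {ok}/40 assigned to the correct vertex")
          # Typically ~20/40 (a coin flip) without time, ~40/40 with it.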

        Speaker: Lindsey Gray (Fermi National Accelerator Lab. (US))
      • 10:00
        Machine Learning for transient noise event classification in LIGO and Virgo 25m

        Noise of non-astrophysical origin contaminates science data taken by the Advanced Laser Interferometer Gravitational-wave Observatory and Advanced Virgo gravitational-wave detectors. Characterization of instrumental and environmental noise transients has proven critical in identifying false positives in the first observing runs. Machine-learning techniques have, in recent years, become more and more reliable and can be efficiently applied to our problems.

        Different teams in LIGO/Virgo have applied machine-learning and deep-learning methods to different aims, from control-lock acquisition, to gravitational-wave signal detection, to noise-event classification.

        After a general introduction to the LIGO and Virgo detectors and the data-analysis framework, I will describe how machine learning methods are used in transient-signal classification. Following an introduction to the problem, I will go through the main algorithms and the technical solutions which we have used effectively up to now, and how we plan to develop these ideas in the future.

        Speaker: Dr Elena Cuoco (EGO & INFN Pisa)
      • 10:30
        Coffee break 30m
      • 11:00
        Fast Reconstruction and Data Scouting 25m
        Speaker: Javier Mauricio Duarte (Fermi National Accelerator Lab. (US))
      • 11:30
        The Fast TracKer - A hardware track processor for the ATLAS trigger system 25m

        The Fast Tracker (FTK) is a hardware upgrade to the ATLAS trigger and data acquisition system providing global track reconstruction to the High-Level Trigger (HLT), with the goal of improving pile-up rejection. The FTK processes incoming data from the Pixel and SCT detectors (part of the Inner Detector, ID) at up to 100 kHz using custom electronic boards. ID hits are matched to pre-defined track patterns stored in associative memory (AM) on custom ASICs, while data routing, reduction and parameter extraction are performed on FPGAs. With 8000 AM chips and 2000 FPGAs, the FTK provides enough resources to reconstruct tracks with transverse momentum greater than 1 GeV/c in the whole tracking volume, with an average latency below 100 microseconds, at the collision intensities expected in Runs II and III of the LHC. The tracks will be available at the beginning of the trigger selection process, which allows the development of pile-up resilient triggering strategies to identify b-quarks and tau-leptons, as well as providing the potential to devise new selections to look for particular signatures (e.g. displaced vertices) in the search for New Physics phenomena.

        This presentation describes the FTK system, with a particular emphasis on its massive parallelization capabilities, its installation and commissioning in 2016 and 2017, and the first data-taking experience including performance measurements.
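
        A schematic software analogue of the pattern-matching step described above (toy superstrip size and a two-entry bank; the real AM bank holds vastly more pre-simulated patterns):

          # Associative-memory style matching, Python sketch: hits are coarsened
          # into "superstrips" per layer; a pattern fires when every layer's
          # superstrip is present among the event's hits.
          N_LAYERS = 4
          SUPERSTRIP = 10.0  # assumed coarse bin width, arbitrary local units

          def superstrip(pos):
              return int(pos // SUPERSTRIP)

          # Toy bank mapping (ss0, ss1, ss2, ss3) -> pattern id.
          bank = {(0, 1, 2, 3): "pattern-A", (5, 5, 6, 6): "pattern-B"}

          def fired_patterns(event_hits):
              """event_hits: per-layer lists of hit positions."""
              ss = [{superstrip(h) for h in layer} for layer in event_hits]
              return [pid for key, pid in bank.items()
                      if all(key[l] in ss[l] for l in range(N_LAYERS))]

          event = [[3.2], [12.9, 55.1], [21.7, 60.2], [33.4, 68.0]]
          print(fired_patterns(event))  # ['pattern-A'], a "road" for the track fit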

        Speaker: Karolos Potamianos (Deutsches Elektronen-Synchrotron (DE))
      • 12:00
        Neural Networks in FPGAs for Trigger and DAQ 25m

        Machine learning methods are becoming ubiquitous across the LHC and particle physics. However, the exploration of such techniques within the field in low latency, low power FPGA hardware has only just begun. There is great potential to improve trigger and data acquisition performance, more generally for pattern recognition problems, and potentially beyond. We present a case study for using neural networks in FPGAs. Our study takes jet substructure as an example since it is a field familiar with machine learning, but lessons are far-reaching. We map out resource usage and latency versus types of machine learning algorithms and their hyper-parameters to identify the problems in particle physics that would benefit. We develop a package based on High Level Synthesis (HLS) to build network architectures which is readily accessible to a broad user base.
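
        The enabling trick in such low-latency implementations is fixed-point arithmetic; a minimal sketch of its effect (assumed bit widths, plain NumPy rather than the HLS-based package described above):

          # Quantize a toy dense layer to signed fixed point <total_bits, frac_bits>
          # and compare with the float reference; scanning total_bits maps out the
          # accuracy side of the resource/latency trade-off studied in the talk.
          import numpy as np

          def to_fixed(x, total_bits=8, frac_bits=4):
              scale = 2.0 ** frac_bits
              lo = -2.0 ** (total_bits - 1 - frac_bits)
              hi = -lo - 1.0 / scale
              return np.clip(np.round(x * scale) / scale, lo, hi)

          rng = np.random.default_rng(0)
          w = rng.normal(0, 0.5, size=(16, 8))  # weights of one dense layer
          x = rng.normal(0, 1.0, size=16)       # one input vector

          y_float = np.maximum(w.T @ x, 0)                      # ReLU(W^T x)
          y_fixed = np.maximum(to_fixed(w).T @ to_fixed(x), 0)  # quantized

          print("max abs deviation:", np.abs(y_float - y_fixed).max())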

        Speaker: Nhan Viet Tran (Fermi National Accelerator Lab. (US))
    • 12:30 14:00
      Lunch break 1h 30m Physics-Astronomy Auditorium A118

    • 14:00 16:00
      Session2 Physics-Astronomy Auditorium A118

      Convener: Fabrizio Palla (INFN Sezione di Pisa, Universita' e Scuola Normale Superiore, P)
      • 14:00
        Level-1 Track Finding with all-FPGA system at CMS for the HL-LHC 25m

        With the high luminosity upgrade of the LHC, incorporating tracking information into the CMS Level-1 trigger becomes necessary in order to maintain a manageable trigger rate. The main challenges Level-1 track finding faces are the large data throughput from the detector at the collision rate of 40 MHz and a 4 μs time budget to reconstruct charged particle tracks with sufficiently low transverse momentum to be used in the Level-1 trigger decision. Dedicated all-FPGA hardware systems with a time-multiplexed architecture have been developed for track finding to deal with these challenges. The algorithm and performance of the pattern recognition and particle trajectory determination are discussed in this talk. The implementation on customized boards and commercially available FPGAs is presented as well.

        Speaker: Zhengcheng Tao (Cornell University (US))
      • 14:30
        LHCb Trigger Upgrade 25m
        Speaker: Renato Quagliani (Centre National de la Recherche Scientifique (FR))
      • 15:00
        Fast track segment finding in the Monitored Drift Tubes (MDT) of the ATLAS Muon Spectrometer using a Legendre transform algorithm 25m

        Many of the physics goals of ATLAS in the High Luminosity LHC era,
        including precision studies of the Higgs boson, require an unprescaled
        single muon trigger with a 20 GeV threshold. The selectivity of the
        current ATLAS first-level muon trigger is limited by the moderate
        spatial resolution of the muon trigger chambers. By incorporating the
        precise tracking of the MDT, the muon transverse momentum can be
        measured at the trigger level with an accuracy close to that of the
        offline reconstruction, sharpening the trigger turn-on curves and
        reducing the single muon trigger rate. A novel algorithm is proposed
        which reconstructs segments from MDT hits in an FPGA and finds tracks
        within the tight latency constraints of the ATLAS first-level muon
        trigger. The algorithm represents MDT drift circles as curves in
        Legendre space and returns one or more segment lines tangent to the
        maximum possible number of drift circles. It is implemented without
        the need for resource- and time-consuming hit position calculation
        and track fitting procedures. A low-latency pure-FPGA implementation
        of a Legendre transform segment finder will be presented. This logic
        evaluates in parallel a total of 128 possible track segment angles
        for each MDT drift circle, calculating in a fast FPGA pipeline the
        offset of each segment candidate from an arbitrary origin for each
        angle and circle. The (angle, offset) pairs, corresponding to the MDT
        drift circles in one station, are used to fill a 2D histogram; the
        segment finder returns the position and angle of the maximum peak,
        corresponding to the most likely tangent line, which defines the
        reconstructed segment. Segments are then combined to calculate the
        muon's transverse momentum with a parametric approach which accounts
        for the varying magnetic field strength throughout the muon
        spectrometer.
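
        In software form, the accumulation described above amounts to the following (illustrative Python with made-up geometry; the talk's version is a fixed-point FPGA pipeline):

          # Legendre-transform segment finder: each drift circle (x, y, r) adds,
          # for every candidate angle, its two tangent offsets d = x cos(t) +
          # y sin(t) +- r; the accumulator peak is the common tangent line.
          import numpy as np

          def legendre_segment(circles, n_angles=128, d_bins=240, d_range=(-30.0, 30.0)):
              thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
              acc = np.zeros((n_angles, d_bins), dtype=int)
              width = (d_range[1] - d_range[0]) / d_bins
              for x, y, r in circles:
                  proj = x * np.cos(thetas) + y * np.sin(thetas)
                  for d in (proj - r, proj + r):
                      idx = np.floor((d - d_range[0]) / width).astype(int)
                      ok = (idx >= 0) & (idx < d_bins)
                      np.add.at(acc, (np.nonzero(ok)[0], idx[ok]), 1)
              i, j = np.unravel_index(acc.argmax(), acc.shape)
              return thetas[i], d_range[0] + (j + 0.5) * width  # x cos + y sin = d

          # Four drift circles tangent to the line x cos(45deg) + y sin(45deg) = 10,
          # staggered on alternating sides of it.
          th = np.deg2rad(45.0)
          n = np.array([np.cos(th), np.sin(th)])   # line normal
          t = np.array([-np.sin(th), np.cos(th)])  # line direction
          pts = [10.0 * n + s * t for s in (-20.0, -5.0, 5.0, 20.0)]
          circles = [(p[0] + sg * 3.0 * n[0], p[1] + sg * 3.0 * n[1], 3.0)
                     for p, sg in zip(pts, (1, -1, 1, -1))]
          print(legendre_segment(circles))  # ~ (0.785, 10): the common tangent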

        Speaker: Kostas Ntekas (University of California Irvine (US))
      • 15:30
        Coffee break 30m
    • 16:00 19:15
      Poster Physics-Astronomy Auditorium A118

      • 16:00
        Using machine learning algorithms for Quality Assurance in the ALICE experiment 15m

        Data Quality Assurance (QA) is an important aspect of every High-Energy Physics experiment, especially in the case of the ALICE experiment at the Large Hadron Collider (LHC), whose detectors are extremely sophisticated and complex devices. To avoid processing low quality or redundant data, and to classify it for analysis, human experts are currently involved in an offline assessment of the quality of the data itself and of the detectors' health during data taking. Since this assessment process is cumbersome and time-consuming, it typically takes experts days or months to assess the quality of past data taking periods. Furthermore, since the ALICE experiment is going to undergo a major upgrade in the coming years and is planned to record much more data at higher frequency, manual data quality and detector health checks will simply not be feasible.

        This is exactly the environment where machine learning can be utilized to its full extent. Based on recent advancements in the field of machine learning and pattern recognition, we conducted several experiments that aim at automating QA for the ALICE experiment. More specifically, we collected a multi-dimensional dataset of attributes recorded during over 1,000 data taking periods by the Time Projection Chamber (TPC), together with the corresponding quality labels. We normalized the data to disregard temporal dependencies as well as to minimize the noise. We cast our problem as a classification task whose goal is to assess the quality of the data collected by the TPC detector. Since the space of assigned quality labels is very sparse, we simplified the multi-class problem to a binary classification task by considering all bad and unlabeled data points as ‘suspicious’, while the remaining data portion was labeled as good. This simplification was recommended by the detectors’ experts, since the lack of labels is typically caused by unprecedented characteristics of the detector data.

        The resulting binary classification task can be solved by several state-of-the-art machine learning algorithms. In our experiments, we have used both traditional shallow classification architectures, such as random trees and SVM classifiers, as well as modern neural network architectures. To ensure the generalization of our results and the robustness of the evaluated methods, we followed a k-fold cross-validation procedure with k=10. The obtained results, with a false positive rate of less than 2%, indicate that machine learning algorithms can be directly used to automatically detect suspicious runs and hence reduce the human burden related to this task.
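
        A minimal sketch of this validation scheme on synthetic stand-in data (the real inputs are the TPC run attributes described above):

          # Binary good-vs-suspicious classifier with 10-fold cross-validation,
          # reporting the false positive rate, as in the study above.
          import numpy as np
          from sklearn.ensemble import RandomForestClassifier
          from sklearn.model_selection import StratifiedKFold

          rng = np.random.default_rng(42)
          X = rng.normal(size=(1000, 8))                 # stand-in QA attributes
          y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = good, 0 = suspicious

          fprs = []
          for tr, te in StratifiedKFold(n_splits=10, shuffle=True,
                                        random_state=0).split(X, y):
              clf = RandomForestClassifier(n_estimators=100,
                                           random_state=0).fit(X[tr], y[tr])
              pred = clf.predict(X[te])
              # False positives: suspicious runs (y=0) classified as good (1).
              fp = np.sum((pred == 1) & (y[te] == 0))
              fprs.append(fp / max(1, np.sum(y[te] == 0)))
          print(f"mean FPR over 10 folds: {np.mean(fprs):.3f}")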

        Our future research includes extending the analysis to other detectors’ data, focusing first on those where the quality assessment procedure is most time-consuming. We then plan to investigate the application of unsupervised machine learning methods to detect anomalies in detector data in real time.

        Speaker: Dr Tomasz Trzcinski for the ALICE Collaboration (Warsaw University of Technology)
      • 16:15
        A novel deep neural network classifier for assessing track quality in the Iterative Track Reconstruction at CMS 15m

        In the track reconstruction of the CMS software, particle tracks are determined using a Combinatorial Track Finder algorithm. In order to optimize the speed and accuracy of the algorithm, the tracks are reconstructed using an iterative process: the easiest tracks are searched for first, then hits associated with good found tracks are excluded from consideration in the following iterations (masking) before continuing with the next iteration. At the end of each iteration, a track selection is performed to classify tracks according to their quality. Currently we use classifiers (one for each iteration) based on a shallow Boosted Decision Tree whose input variables are track features, such as the goodness-of-fit and the number of hits. To enhance the performance of this classification, we have developed a novel classifier based on a deep neural network trained using the TensorFlow framework. This new technique not only performs better, it also has the advantage of using a single classifier for all iterations: this simplifies the task of retraining the classifier and of maintaining its high performance under the changing conditions of the detector. In this talk we will present the characteristics and performance of the new deep neural network classifier. We will also discuss the impact on both training and inference of changing some of the properties of the network, such as its topology, score function and input parameters.
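
        A minimal sketch of such a track-quality network (TensorFlow/Keras, with an invented toy feature set and toy labels; not the CMS classifier itself):

          # One network scoring tracks from all iterations; the toy features
          # [chi2/ndf, n_hits, n_missing, |d0|, |z0|] are assumptions.
          import numpy as np
          import tensorflow as tf

          rng = np.random.default_rng(0)
          n = 5000
          X = np.column_stack([rng.gamma(2.0, 1.0, n),
                               rng.integers(5, 25, n),
                               rng.integers(0, 4, n),
                               rng.exponential(0.05, n),
                               rng.exponential(1.0, n)]).astype("float32")
          y = (X[:, 0] < 2.5).astype("float32")  # toy label: good fit quality

          norm = tf.keras.layers.Normalization()
          norm.adapt(X)
          model = tf.keras.Sequential([
              tf.keras.layers.Input(shape=(5,)),
              norm,
              tf.keras.layers.Dense(32, activation="relu"),
              tf.keras.layers.Dense(32, activation="relu"),
              tf.keras.layers.Dense(1, activation="sigmoid"),  # quality score
          ])
          model.compile(optimizer="adam", loss="binary_crossentropy",
                        metrics=[tf.keras.metrics.AUC()])
          model.fit(X, y, epochs=5, batch_size=256,
                    validation_split=0.2, verbose=0)
          print(model.evaluate(X, y, verbose=0))  # [loss, AUC]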

        Speaker: Joona Juhani Havukainen (Helsinki Institute of Physics (FI))
      • 16:30
        A novel standalone track reconstruction algorithm for the LHCb upgrade 15m

        During LHC Run III, starting in 2020, the instantaneous luminosity of LHCb will be increased up to $2\times10^{33}$ cm$^{-2}$ s$^{-1}$, five times larger than in Run II. The LHCb detector will therefore be upgraded in 2019. A full software event reconstruction will be performed by the trigger at the full bunch crossing rate, in order to profit from the higher instantaneous luminosity provided by the accelerator. In addition, all the tracking devices will be replaced; in particular, a scintillating fibre tracker (SciFi) will be installed after the magnet to cope with the higher occupancy. The new running conditions, and the tighter timing constraints in the software trigger, represent a big challenge for track reconstruction.
        This talk presents the design and performance of a novel algorithm that has been developed to reconstruct track segments using solely hits from the SciFi. This algorithm is crucial for the reconstruction of tracks originating from long-lived particles such as $K_S$ and $\Lambda$. The implementation strategy is based on a progressive cleaning of the tracking environment and on an active use of the information from the stereo hits in order to select tracks. It also profits from an improved track parameterization. Compared to its previous implementation, the new algorithm performs significantly better in terms of efficiency, number of fake tracks and timing, enhancing the physics potential and capabilities of the LHCb upgrade.

        Speaker: Mr Renato Quagliani (Centre National de la Recherche Scientifique (FR))
      • 16:45
        The ATLAS Inner Detector track based alignment 15m

        The alignment of the ATLAS Inner Detector is performed with a track-based alignment algorithm.
        Its goal is to provide an accurate description of the detector geometry such
        that track parameters are accurately determined and free from biases.
        Its software implementation is modular and configurable,
        with a clear separation of the alignment algorithm from the detector system specifics and the database handling.

        The alignment must cope with the rapid movements of the detector
        as well as with the slow drift of the different mechanical units.
        Prompt alignment constants are derived for every run at the calibration stage.
        These sets of constants are then dynamically split from the beginning of the run into many chunks,
        allowing the tracker geometry to be described as it evolves with time.

        The alignment of the Inner Detector is validated and improved by studying resonance decays (Z and J/psi to mu+mu-),
        as well as using information from the calorimeter system with the E/p method with electrons.
        A detailed study of these resonances (together with the properties of their decay products)
        allows correcting for alignment weak modes such as detector curls, twists or radial
        deformations that may bias the momentum and/or the impact parameters.
        In addition, detailed scrutiny of the track-hit residuals serves to assess the shape of the Pixel and IBL modules.

        Speaker: Salvador Marti I Garcia (IFIC-Valencia (UV/EG-CSIC))
      • 17:00
        Machine Learning When You Know (Basically) Nothing 15m

        Machine learning in high energy physics relies heavily on simulation for fully supervised training. This often results in sub-optimal classification when ultimately applied to (unlabeled) data. At CTD2017, we showed how to avoid this problem by training directly on data, using as input the fractions of signal and background in each training sample. We now have a new method, Classification Without Labels (CWoLa), that does not even require these fractions. In addition to explaining this new method, we show for the first time how to apply these techniques to high-dimensional data, where significant architectural changes are required.
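
        A toy version of the CWoLa setup (Gaussian stand-ins for signal and background; an illustration of the idea, not the authors' code):

          # Train only on two mixed samples with different (unknown) signal
          # fractions; the resulting score still separates pure S from pure B.
          import numpy as np
          from sklearn.ensemble import GradientBoostingClassifier

          rng = np.random.default_rng(0)

          def signal(n):     return rng.normal(+1.0, 1.0, size=(n, 2))
          def background(n): return rng.normal(-1.0, 1.0, size=(n, 2))

          def mixture(n, f_sig):
              ns = int(n * f_sig)
              return np.vstack([signal(ns), background(n - ns)])

          m1, m2 = mixture(5000, 0.7), mixture(5000, 0.3)  # fractions unused below
          X = np.vstack([m1, m2])
          y = np.concatenate([np.ones(len(m1)), np.zeros(len(m2))])  # mixture label

          clf = GradientBoostingClassifier().fit(X, y)

          print("mean score on pure signal:    ",
                clf.predict_proba(signal(2000))[:, 1].mean())
          print("mean score on pure background:",
                clf.predict_proba(background(2000))[:, 1].mean())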

        Speaker: Ben Nachman (Lawrence Berkeley National Lab. (US))
      • 17:15
        Proton Track Reconstruction Inside a Digital Tracking Calorimeter for Proton CT 15m

        Background
        Proton CT is a prototype imaging modality for the reconstruction of the proton stopping power inside a patient, for more accurate calculations of the dose distributions in proton therapy treatment planning systems. A prototype proton CT system, called the Digital Tracking Calorimeter (DTC), is currently under development, in which aluminum energy absorbers are sandwiched between ~40 MAPS-based pixel sensor layers.

        The following measurements need to be performed in the DTC:

        • The initial proton vector incident on the front face of the detector
        • The stopping depth of each proton in the detector

        These measurements necessarily require performing track identification and reconstruction inside the DTC. The track reconstruction will also allow disentangling a large number of protons thus contributing to increased rate capabilities of the DTC.

        Methods
        A DTC detector has been modeled using the GATE 7.2 Monte Carlo framework. A design with 3.5 mm aluminum absorbers has been suggested based on range accuracy requirements. The detector is modeled based on previous experience with track identification and reconstruction in similar prototypes, such as in the ALICE-FoCal experiment. A water phantom of variable thickness is used for degrading a 230 MeV proton beam to different energies.

        Upon degradation by the water phantom, a proton beam of up to a few thousand particles over a 100 cm2 area, with a mean energy of ~200 MeV, is incident on the detector. In this regime, the proton tracks are heavily influenced by multiple Coulomb scattering. Each primary proton is tracked through the detector using a track-following approach, with a search cone depending on the expected scattering. A high quality tracking algorithm improves the detector characteristics by contributing to increased rate capabilities, i.e. a higher incident beam intensity. The tracking quality is evaluated at various incident proton densities [protons / cm2] by the fraction of tracks with correct endpoints. Quantitative evaluation is based on a comparison between the proton tracks identified and reconstructed by the current algorithm and the true proton tracks from Monte Carlo simulations.

        Results and conclusion
        Preliminary results indicate that at 10 protons / cm2 about 80% of the tracks are correctly identified and reconstructed; the remaining 20% are either “close misreconstructions” or protons that undergo large-angle scattering and thus are wrongly identified.

        Speaker: Helge Egil Seime Pettersen (University of Bergen (NO))
      • 17:30
        HL-LHC ATLAS Strip System Robustness 15m

        The High Luminosity LHC (HL-LHC) plans to increase the LHC dataset by an order of magnitude, increasing the potential for new physics discoveries. The HL-LHC upgrade, planned for 2025, will increase the peak luminosity to 7.5×10^34 cm^-2 s^-1, corresponding to ~200 inelastic proton-proton collisions per beam crossing. To mitigate the increased radiation doses and pileup, the ATLAS Inner Detector will be replaced with an all-silicon Inner Tracker made of silicon Pixel and Strip systems. During the lifetime of the HL-LHC, failures in the Strip system due to electronics faults, cooling failures and irradiation damage are expected. Estimating the effects of such failures is necessary to ensure that the Strip system design is robust. In this poster the effects of failures in the Strip system on the tracking performance are presented. With the planned ATLAS Strip system design, the tracking efficiency, fake rates and resolutions are found to be robust against the anticipated failures.

        Speaker: Natasha Lee Woods (University of California, Santa Cruz (US))
      • 17:45
        Particle Flow and PUPPI in the Level-1 trigger at CMS for the HL-LHC 15m

        With the planned addition of tracking information to the Level-1 trigger in CMS for the HL-LHC, the Level-1 trigger algorithms can be completely reconceptualized. Following the example of offline reconstruction in CMS, which uses complementary subsystem information to mitigate pileup, we explore the feasibility of using Particle Flow-like and pileup-per-particle identification techniques at the hardware trigger level. This represents a new type of multi-subdetector pattern recognition challenge for the HL-LHC. We present proof-of-principle studies of both the physics performance and the resource usage of a prototype algorithm for use by CMS in the HL-LHC era.

        Speaker: Ben Kreis (Fermi National Accelerator Lab. (US))
      • 18:00
        Ultimate position resolution of pixel clusters with binary readout for particle tracking 20m

        Silicon tracking detectors can record the charge in each channel (analog or digitized) or have only binary readout (hit or no hit). While there is significant literature on the position resolution obtained from interpolation of charge measurements, a comprehensive study of the resolution obtainable with binary readout is lacking. It is commonly assumed that the binary resolution is pitch/sqrt(12), but this is generally a worst-case upper limit. In this paper we study, using simulation, the best achievable resolution for minimum ionizing particles in binary readout pixels. A wide range of incident angles and pixel sizes is simulated with a standalone code, using the Bichsel model for charge deposition. The results show how the resolution depends on angles and sensor geometry. Until the pixel pitch becomes so small as to be comparable to the distance between energy deposits in silicon, the resolution is always better, and in some cases much better, than pitch/sqrt(12).
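
        For reference, the pitch/sqrt(12) benchmark quoted above is simply the RMS of a uniform residual distribution; in LaTeX form (standard derivation, stated here for convenience):

          % single-pixel cluster of pitch p: the residual x is uniform on [-p/2, p/2]
          \sigma^{2} = \frac{1}{p}\int_{-p/2}^{+p/2} x^{2}\,\mathrm{d}x = \frac{p^{2}}{12}
          \quad\Longrightarrow\quad
          \sigma = \frac{p}{\sqrt{12}} \approx 0.29\,p

        Angled tracks fire multi-pixel clusters whose edge pattern carries extra information, which is why the simulated resolution can beat this single-pixel limit.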

        Speaker: Maurice Garcia-Sciveres (Lawrence Berkeley National Lab. (US))
      • 18:20
        Optimal use of Charge Information for the HL-LHC Pixel Detector Readout 20m

        The pixel detectors for the High Luminosity upgrades of the ATLAS and CMS detectors will preserve digitized charge information in spite of extremely high hit rates. Both circuit physical size and output bandwidth will limit the number of bits to which charge can be digitized and stored. We therefore study the effect of the number of bits used for digitization and storage on single and multi-particle cluster resolution, efficiency, classification, and particle identification. We show how performance degrades as fewer bits are used to digitize and to store charge. We find that with limited charge information (4 bits), one can achieve near optimal performance on a variety of tasks.

        Speaker: Maurice Garcia-Sciveres (Lawrence Berkeley National Lab. (US))
      • 18:40
        Splitting Strip Detector Clusters in Dense Environments 20m

        Tracking in high density environments, particularly in high energy jets, plays an important role in many physics analyses at the LHC. In such environments, there is significant degradation of track reconstruction performance. Between Runs 1 and 2, ATLAS implemented an algorithm that splits pixel clusters originating from multiple charged particles, using charge information, resulting in the recovery of much of the lost efficiency. However, no attempt was made in prior work to split merged clusters in the SemiConductor Tracker (SCT), which does not measure charge information. In spite of the lack of charge information in the SCT, a cluster-splitting algorithm has been developed in this work. It is based primarily on the difference between the observed cluster width and the expected cluster width, which is derived from the track incidence angle. The performance of this algorithm is found to be competitive with the existing pixel cluster splitting based on track information.
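
        A rough sketch of the splitting criterion (invented pitch, thickness and tolerance values; not the ATLAS implementation):

          # Split an SCT cluster when it is wider than geometry predicts for
          # the track's incidence angle.
          import math

          PITCH = 0.0805      # mm, assumed strip pitch
          THICKNESS = 0.285   # mm, assumed sensor thickness

          def expected_width_strips(angle_rad):
              """Cluster width expected from geometric charge sharing."""
              return THICKNESS * abs(math.tan(angle_rad)) / PITCH + 1.0

          def should_split(observed_width_strips, angle_rad, slack=2.0):
              return observed_width_strips > expected_width_strips(angle_rad) + slack

          print(should_split(3, math.radians(5)))  # False: compatible, one particle
          print(should_split(7, math.radians(5)))  # True: merged cluster candidate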

        Speaker: William Patrick McCormack (University of California Berkeley (US))
    • 16:00 19:15
      Seattle TrackML Hackathon 6th Floor Physics-Astronomy Tower (eScience Institute WRF Data Science Studio)

      Conveners: Andreas Salzburger (CERN), David Rousseau (LAL-Orsay, FR), Moritz Kiehn (Universite de Geneve (CH))
    • 16:30 17:30
      Lab Tour UW ME/CENPA

    • 19:30 20:45
      Public Lecture: Seeing Voices by Carl Haber 1h 15m Kane Hall 120

    • 20:45 22:30
      Reception 1h 45m Kane Hall 225 (Walker-Ames) (UW)

    • 09:00 12:20
      Session3 Physics-Astronomy Auditorium A118

      Convener: David Rousseau (LAL-Orsay, FR)
      • 09:00
        Fast automated analysis of strong gravitational lenses with convolutional neural networks 25m

        Strong gravitational lensing is a phenomenon in which images of distant galaxies appear highly distorted due to the deflection of their light rays by the gravity of other intervening galaxies. We often see multiple distinct arc-shaped images of the background galaxy around the intervening (lens) galaxy, like images in a funhouse mirror. Strong lensing gives astrophysicists a unique opportunity to carry out different investigations, including mapping the detailed distribution of dark matter or measuring the expansion rate of the universe. All these studies, however, require a detailed knowledge of the distribution of matter in the lensing galaxies, measured from the distortions in the images. This has traditionally been performed with maximum-likelihood lens modeling, a procedure in which simulated observations are generated and compared to the data in a statistical way. The parameters controlling the simulations are then explored with samplers like MCMC. This is a time- and resource-consuming procedure, requiring hundreds of hours of computer and human time for a single system. In this talk, I will discuss our recent work in which we showed that deep convolutional neural networks can solve this problem more than 10 million times faster: about 0.01 seconds per system on a single GPU. I will also review our method for quantifying the uncertainties of the parameters obtained with these networks. With the advent of upcoming sky surveys such as the Large Synoptic Survey Telescope, we anticipate the discovery of tens of thousands of new gravitational lenses. Neural networks can be an essential tool for the analysis of such high volumes of data.

        Speaker: Dr Yashar Hezaveh (Stanford University)
      • 09:30
        Online Event Reconstruction in IceCube Using Deep Learning Techniques 25m

        The IceCube Neutrino Observatory is a Cherenkov detector deep in the Antarctic ice. Due to limited computational resources and the high data rate, only simplified reconstructions restricted to a small subset of data can be run on-site at the South Pole. However, in order to perform online analyses and to issue real-time alerts, fast and powerful reconstructions are desired.

        Recent advances, especially in image recognition, have shown the capabilities of deep learning. Deep neural networks can be extremely powerful, and their usage is computationally inexpensive once the networks are trained. These characteristics make a deep learning-based reconstruction an excellent candidate for on-site application at the South Pole. In contrast to image recognition tasks, the reconstruction in IceCube poses additional challenges, as the data is four-dimensional, highly variable in length, and distributed on an imperfect triangular grid.

        A deep learning-based reconstruction method is presented which can significantly increase the reconstruction accuracy while reducing the runtime in comparison to standard reconstruction methods in IceCube.

        Speaker: Mr Mirco Hünnefeld (TU Dortmund, IceCube)
      • 10:00
        Improving jet substructure performance in ATLAS with unified tracking and calorimeter inputs 25m

        Jet substructure techniques play a critical role in ATLAS in searches for new physics, and are being utilized in the trigger. They become increasingly important in detailed studies of the Standard Model, among them the inclusive search for the Higgs boson produced with high transverse momentum decaying to a bottom-antibottom quark pair. To date, ATLAS has mostly focused on the use of calorimeter-based jet substructure, which works well for jets initiated by particles with low to moderate boost, but which lacks the angular resolution needed to resolve the desired substructure in the highly-boosted regime.

        We will present a novel approach designed to mitigate the calorimeter angular resolution limitations, thus providing superior performance to prior methods. As in previous methods, the superior angular resolution of the tracker is combined with information from the calorimeters. However, the new method is fundamentally different, as it correlates low-level objects, such as tracks and individual energy deposits in the calorimeter, before running any jet finding algorithms. The resulting objects are used as inputs to jet reconstruction, and in turn result in improved resolution for both jet mass and substructure variables. It will be discussed how these jets could prove robust against pile-up thanks to the pile-up rejection capabilities of the tracker.

        Speaker: Noemi Calace (Universite de Geneve (CH))
      • 10:30
        Coffee break 30m
      • 11:00
        Online Multi-target Tracking using Recurrent Neural Networks 25m

        We present a novel approach to online multi-target tracking
        based on recurrent neural networks (RNNs). Tracking multiple
        objects in real-world scenes involves many challenges,
        including a) an a-priori unknown and time-varying number of
        targets, b) a continuous state estimation of all present targets,
        and c) a discrete combinatorial problem of data association.
        Most previous methods involve complex models that require
        tedious tuning of parameters. Here we propose, for the first
        time, a full end-to-end learning approach for online multi-target
        tracking based on deep learning. Existing deep learning
        methods are not designed for the above challenges and cannot
        be trivially applied to the task. Our solution addresses all
        of the above points in a principled way. Experiments on both
        synthetic and real data show competitive results obtained at
        ~300 Hz on a standard CPU, and pave the way towards future
        research in this direction.

        Speaker: Hamid Rezatofighi
      • 11:30
        Expected performance of tracking and vertexing with the HL-LHC ATLAS detector 25m

        The High Luminosity LHC (HL-LHC) aims to increase the LHC dataset by an order of magnitude in order to increase its potential for discoveries. Starting from the middle of 2026, the HL-LHC is expected to reach a peak instantaneous luminosity of $7.5\cdot10^{34}$ cm$^{-2}$s$^{-1}$, which corresponds to about 200 inelastic proton-proton collisions per beam crossing. To cope with the large radiation doses and high pileup, the current ATLAS Inner Detector will be replaced with a new all-silicon Inner Tracker. In this talk the expected performance of tracking and vertexing with the HL-LHC tracker is presented. A comparison is made to the performance of the Run 2 detector. Ongoing developments of the track reconstruction for the HL-LHC are also discussed.

        Speaker: Noemi Calace (Universite de Geneve (CH))
      • 12:00
        Implementation and Performance of FPGA-based track fitting for the ATLAS Fast TracKer 15m

        The Fast TracKer (FTK) within the ATLAS trigger system provides global track reconstruction for all events passing the ATLAS Level 1 trigger by dividing the detector into parallel processing pipelines that implement pattern matching in custom integrated circuits and data routing, reduction, and parameter extraction in FPGAs. In this presentation we will describe the implementation of a critical component of the system, which performs partial track fitting using a method based on principal component analysis at a rate greater than 1 fit per 10 ps, system-wide, to reduce the output of the pattern matching. The firmware design, timing performance and preliminary results will be discussed.

        Speaker: Rui Zou (University of Chicago (US))
    • 12:20 12:40
      Group picture 20m Physics-Astronomy Auditorium A118

    • 12:40 14:00
      Lunch break 1h 20m Physics-Astronomy Auditorium A118

    • 14:00 16:10
      Session4 Physics-Astronomy Auditorium A118

      Convener: Andreas Salzburger (CERN)
      • 14:00
        Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks with Accurate Detector Geometry 25m

        In the era of the High-Luminosity Large Hadron Collider (HL-LHC), one of the most computationally challenging problems is expected to be finding and fitting particle tracks during event reconstruction. The algorithms currently in use at the LHC are based on Kalman filter techniques, which are known to be robust and provide good physics performance. Given the need for improved computational performance, we explore Kalman-filter-based methods for track finding and fitting that are specially adapted for many-core SIMD and SIMT architectures, since processors of this type are becoming increasingly dominant in high-performance hardware.

        For both track fitting and track building, our adapted Kalman filter software has obtained significant parallel speedups on Intel Xeon and Intel Xeon Phi processors and, to a limited degree, on NVIDIA GPUs. Results from our prior reports, however, were more focused on simulations of artificial events taking place inside an idealized barrel detector composed of concentric cylinders. In the current work, we shift focus to CMSSW-generated events taking place inside a geometrically accurate representation of the CMS-2017 tracker. To a large extent, the approaches that were previously developed for the idealized geometry have carried over to the more accurate case. For instance, groups of candidate tracks are still propagated to the average radius (or average axial distance) of the next detector layer; once the matching hits in that layer have been identified, candidate tracks are re-propagated to the exact hit locations and tested for viability. Special treatment is given to the overlap or transition region between barrel and endcaps, so that matching hits can be picked up from either area as required.

        We summarize the key features of this software, including (1) the data structures and code constructs that facilitate vectorization and SIMT, and (2) the multiple levels of parallel loops that have been multithreaded using TBB. We demonstrate that, as compared to CMSSW, the present Kalman filter implementation is able to reconstruct events with comparable physics performance and generally better computational performance. The status of, and plans for, the software are discussed.
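
        For orientation, the kernel that gets vectorized across many candidates in such software is the per-layer Kalman predict-and-update step; a bare-bones sketch (toy 2D track state and assumed noise terms, not the production code):

          # Kalman filter over a toy layered detector: state = (position, slope).
          import numpy as np

          F = np.array([[1.0, 1.0],   # propagate over unit layer spacing
                        [0.0, 1.0]])
          H = np.array([[1.0, 0.0]])  # measure position only
          Q = np.diag([1e-4, 1e-4])   # process noise (multiple scattering)
          R = np.array([[0.01]])      # hit position variance

          def kf_step(x, P, z):
              x, P = F @ x, F @ P @ F.T + Q          # predict to next layer
              S = H @ P @ H.T + R                    # innovation covariance
              K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
              x = x + (K @ (z - H @ x)).ravel()      # update with the hit
              P = (np.eye(2) - K @ H) @ P
              return x, P

          rng = np.random.default_rng(1)
          x, P = np.array([0.0, 0.0]), np.eye(2)     # vague initial state
          for layer in range(1, 11):                 # straight toy track
              z = np.array([0.5 + 0.2 * layer + rng.normal(0, 0.1)])
              x, P = kf_step(x, P, z)
          print(x)  # ~ [2.5, 0.2]: position at layer 10 and slope recovered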

        Speaker: Matevz Tadel (Univ. of California San Diego (US))
      • 14:30
        Track Reconstruction in the ALICE TPC using GPUs for LHC Run 3 25m

        In LHC Run 3, ALICE will increase its data taking rate significantly, to a 50 kHz continuous readout of minimum bias Pb-Pb collisions.
        The reconstruction strategy of the online-offline computing upgrade foresees a first synchronous online reconstruction stage during data taking, enabling detector calibration, followed by an asynchronous reconstruction stage that uses this calibration.
        We present a tracking algorithm for the Time Projection Chamber (TPC), the main tracking detector of ALICE.
        The reconstruction must yield results comparable to the current offline reconstruction and meet time constraints as in the current High Level Trigger (HLT), while processing 50 times as many collisions per second as today.
        It is derived from the current online tracking in the HLT, which is based on a cellular automaton and the Kalman filter, and we integrate missing features from offline tracking for improved resolution.
        The continuous TPC readout and overlapping collisions pose new challenges: conversion to spatial coordinates and application of time- and location-dependent calibration must happen between track seeding and the track fit, while the TPC occupancy increases five-fold.
        The huge data volume requires a data reduction factor of 20, which imposes additional requirements: the momentum range must be extended to identify low-pT looping tracks, and a special refit in uncalibrated coordinates improves the track model entropy encoding.
        Our TPC track finding leverages the potential of hardware accelerators via the OpenCL and CUDA APIs, in a shared source code for CPUs and GPUs and for both reconstruction stages.
        Porting more reconstruction steps, like the remainder of the TPC reconstruction and the tracking for other detectors, will shift the computing balance from traditional processors to GPUs.
        We give an overview of the status of Run 3 tracking, including track finding efficiency, resolution, treatment of continuous readout data, and performance on processors and GPUs.

        Speaker: David Rohr for the ALICE Collaboration (CERN)
      • 15:00
        Tracking Algorithms in the Belle II Drift Chamber with first pilot run results 15m

        Belle II - located at the $e^+e^-$ collider SuperKEKB operating at the $\Upsilon (4\mathrm S)$ energy - starts its first data taking run in February 2018.
        Its ultimate goal is to measure multifaceted quantities in the flavor sector with high precision, and to explore the many opportunities beyond, e.g. exotic hadronic states, afforded by its record-breaking instantaneous luminosity of $8\cdot 10^{35} cm^{-2}s^{-1}$.

        Belle II's tracking system consists of a DEPFET pixel device with very little material, a fast silicon strip detector, and a drift chamber more than 2 meters in diameter. Performing track finding in this heterogeneous environment at a high rate and with substantial beam background demands specially designed and carefully implemented algorithms.

        This talk will present the algorithms developed for the track finding in the central drift chamber of Belle II in detail. Additionally, first results of the performed pilot runs including tests with cosmic rays will be shown and the performance of the algorithms will be evaluated.

        Speaker: Nils Braun (KIT)
      • 15:20
        Belle II Tracking in Phase III with the Full Detector 15m
        Speaker: Felix Metzner (Karlsruhe Institute of Technology)
      • 15:40
        Coffee break 30m
    • 16:10 18:15
      Young Scientist Forum: YSF Physics-Astronomy Auditorium A118

      Convener: Gordon Watts (University of Washington (US))
      • 16:10
        Clustering with adaptive similarity measure for track reconstruction 15m

        The track reconstruction task in ATLAS and CMS will become increasingly challenging computationally with the LHC high-luminosity upgrade. In the context of taking advantage of machine learning techniques, a clustering algorithm is proposed to group together hits that belong to the same particle. Clustering is referred to as unsupervised classification and is widely applied to big data. The unsupervised aspect of clustering allows it to generalize to any track size or properties, as there are no predefined classes.

        The dataset considered is generated with the fast simulation of ACTS (A Common Tracking Software), which provides simple and efficient event data modeling.
        The algorithm uses the 3D spatial coordinates to group hits, and uses the known detector geometry to exclude incompatible groupings. To efficiently cluster hits which originate from a common particle, we define an adaptive distance which improves as more hits are added and quantifies how far a hit is from the particle's current reconstructed trajectory.
        We show that the algorithm is able to adapt and generalize to the kinematic range of interest for the tracks.
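
        A conceptual sketch of an adaptive distance of this kind (toy straight-line tracks and an invented cut; not the study's implementation):

          # A hit's distance to a cluster is its residual to the trajectory
          # fitted through the hits already collected, so the measure sharpens
          # as the cluster grows.
          import numpy as np

          def adaptive_distance(cluster, r, x):
              pts = np.array(cluster)
              if len(pts) < 2:
                  return abs(pts[-1][1] - x)  # fall back to coordinate distance
              slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], 1)
              return abs(slope * r + intercept - x)

          def cluster_hits(hits, cut=0.5):
              """hits: (r, x) pairs sorted by radius r."""
              clusters = []
              for r, x in hits:
                  d = [adaptive_distance(c, r, x) for c in clusters]
                  if d and min(d) < cut:
                      clusters[int(np.argmin(d))].append((r, x))
                  else:
                      clusters.append([(r, x)])
              return clusters

          rng = np.random.default_rng(0)
          layers = np.arange(1.0, 11.0)
          t1 = [(r, 0.3 * r + rng.normal(0, 0.05)) for r in layers]
          t2 = [(r, -0.2 * r + 6 + rng.normal(0, 0.05)) for r in layers]
          print([len(c) for c in cluster_hits(sorted(t1 + t2))])  # -> [10, 10]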

        Speaker: Sabrina Amrouche (RSA - Universite de Geneve (CH))
      • 16:30
        Implementation and performance of the ATLAS pixel clustering neural networks 15m

        The high particle densities produced by the Large Hadron Collider (LHC) mean that in the ATLAS pixel detector the clusters of deposited charge start to merge. A neural network-based approach is used to estimate the number of particles contributing to each cluster, and to accurately estimate the hit positions even in the presence of multiple particles. This talk will thoroughly describe the algorithm and its implementation, and present a set of benchmark performance measurements. The problem is most acute in the core of high-momentum jets, where the average separation between particles becomes comparable to the detector granularity. This is further complicated by the high number of interactions per bunch crossing. Both these issues will become worse as the Run 3 and HL-LHC programmes require analysis of higher and higher pT jets while the interaction multiplicity rises. Future prospects in the context of LHC Run 3 and the upcoming ATLAS inner detector upgrade will also be discussed.

        Speaker: Louis-Guillaume Gagnon (Universite de Montreal (CA))
      • 16:50
        Splitting Strip Detector Clusters in Dense Environments 15m

        Tracking in high density environments, particularly in high energy jets, plays an important role in many physics analyses at the LHC. In such environments, there is significant degradation of track reconstruction performance. Between Runs 1 and 2, ATLAS implemented an algorithm that splits pixel clusters originating from multiple charged particles, using charge information, resulting in the recovery of much of the lost efficiency. However, no attempt was made in prior work to split merged clusters in the SemiConductor Tracker (SCT), which does not measure charge information. In spite of the lack of charge information in the SCT, a cluster-splitting algorithm has been developed in this work. It is based primarily on the difference between the observed cluster width and the expected cluster width, which is derived from the track incidence angle. The performance of this algorithm is found to be competitive with the existing pixel cluster splitting based on track information.

        Speaker: Ben Nachman (Lawrence Berkeley National Lab. (US))
      • 17:10
        Tracking in Dense Environments for the HL-LHC ATLAS Detector 15m

        Tracking in dense environments, such as in the cores of high-energy jets, will be key for new physics searches as well as measurements of the Standard Model at the High Luminosity LHC (HL-LHC). The HL-LHC will operate in challenging conditions with large radiation doses and high pile-up (up to $\mu$=200). The current tracking detector will be replaced with a new all-silicon Inner Tracker for the Phase II upgrade of the ATLAS detector. In this talk, characterization of the HL-LHC tracker performance for collimated, high-density charged particles arising from high-momentum decays is presented. In such decays the charged-particle separations are of the order of the tracking detector granularity, leading to challenging reconstruction. The ability of the HL-LHC ATLAS tracker to reconstruct the tracks in such dense environments is discussed and compared to ATLAS Run-2 performance for a variety of relevant physics processes.

        Speaker: Felix Cormier (University of British Columbia (CA))
      • 17:30
        Track Finding in the COMET Experiment using Boosted Decision Trees 15m

        The Coherent Muon to Electron Transition (COMET) experiment is designed to search for muon to electron conversion, a process which has very good sensitivity to Beyond the Standard Model physics. The first phase of the experiment is currently under construction at J-PARC. This phase is designed to probe muon to electron conversion with a sensitivity 100 times better than the current limit. The experiment will achieve this sensitivity by directing a high intensity muon beam at a stopping target. The detectors probe the resulting events for the signal 105 MeV electron from muon to electron conversion.

        A boosted decision tree (BDT) algorithm has been developed to find this signal track. The BDT combines energy deposition and timing information with a reweighted inverse Hough transform to filter out background hits. The resulting hits are fit using a RANdom SAmple Consensus (RANSAC) fit, which chooses the best fit parameters for an optimized selection of the filtered hits.

        These hits are then passed to the track fitting algorithm. Results show that using a BDT significantly improves background hit rejection compared to traditional, cut-based hit rejection methods. At 99% signal hit retention, a cut on the energy deposition alone is able to remove 65% of background hits. By combining multiple features, the BDT is able to remove 98% of background hits while still retaining 99% of signal hits.
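
        A compact sketch of the RANSAC stage on toy data (the real fit runs on the BDT-filtered drift chamber hits):

          # Repeatedly fit a line to a random two-point sample and keep the
          # parameters with the most inliers.
          import numpy as np

          def ransac_line(points, n_iter=200, tol=0.1, rng=None):
              if rng is None:
                  rng = np.random.default_rng()
              best, best_inliers = None, 0
              for _ in range(n_iter):
                  i, j = rng.choice(len(points), size=2, replace=False)
                  (x1, y1), (x2, y2) = points[i], points[j]
                  if x1 == x2:
                      continue
                  slope = (y2 - y1) / (x2 - x1)
                  intercept = y1 - slope * x1
                  resid = np.abs(points[:, 1] - (slope * points[:, 0] + intercept))
                  inliers = int((resid < tol).sum())
                  if inliers > best_inliers:
                      best, best_inliers = (slope, intercept), inliers
              return best, best_inliers

          rng = np.random.default_rng(2)
          xs = np.linspace(0, 10, 30)
          sig = np.column_stack([xs, 1.5 * xs + 2 + rng.normal(0, 0.03, 30)])
          bkg = rng.uniform([0, 0], [10, 20], size=(60, 2))  # leftover background
          print(ransac_line(np.vstack([sig, bkg]), rng=rng))  # ~ ((1.5, 2.0), ~30)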

        Speaker: Ewen Gillies (Imperial College London)
    • 18:30 21:00
      Dinner Banquet 2h 30m UW Club Cascade

    • 09:00 13:00
      Session5 Physics-Astronomy Auditorium A118

      Convener: Maurice Garcia-Sciveres (Lawrence Berkeley National Lab. (US))
      • 09:00
        Quantum Pattern Recognition for High-Luminosity Era 25m

        The data input rates foreseen in the High-Luminosity LHC (circa 2026) and High-Energy LHC (2030s) High Energy Physics (HEP) experiments impose challenging new requirements on data processing. Polynomial algorithmic complexity and other limitations of classical approaches to many central HEP problems motivate the search for alternative solutions featuring better scalability, higher performance and efficiency. For certain types of problems, the Quantum Computing paradigm can offer such asymmetrical-response solutions. We discuss the potential of quantum pattern recognition in the context of ATLAS data processing. In particular, we examine Quantum Associative Memory (QuAM), a quantum variant of content-addressable memory based on a quantum storage medium and two quantum algorithms for content handling. We examine the limits of storage capacity, as well as store and recall efficiencies, from the viewpoints of state-of-the-art quantum hardware and ATLAS real-time charged track pattern recognition requirements. We present QuAM simulations performed on LIQUi|>, Microsoft's quantum simulator toolsuite. We also review several difficulties in integrating end-to-end quantum pattern recognition into a real-time production workflow, and discuss possible mitigations.

        Speaker: Illya Shapoval (Lawrence Berkeley National Laboratory)
      • 09:30
        Status and Challenges of Tracker Design for FCC-hh 25m

        A 100 TeV proton collider represents a core aspect of the Future Circular Collider (FCC) study.
        An integral part of this project is the conceptual design of individual detector systems that can be
        operated under luminosities up to 3×10^35 cm^−2 s^−1. One of the key limitations in the design arises from an increased number of pile-up events O(1000), making both particle tracking and identification of vertices extremely challenging. This talk reviews the general ideas that conceptually
        drive the current tracker/vertex detector design for the FCC-hh (proton-proton). These include
        material budget, detector granularity, pattern recognition, primary vertexing/pile-up mitigation
        and occupancy/data rates. Finally, the limits of current tracker technologies and requirements on
        their future progress, i.e. the dedicated R&D, will be briefly discussed.

        Speaker: Zbynek Drasal (CERN)
      • 10:00
        Conformal tracking for the CLIC detector 25m

        Conformal tracking is the novel and comprehensive tracking strategy adopted by the CLICdp Collaboration. It merges the two concepts of conformal mapping and cellular automaton, providing efficient pattern recognition for prompt and displaced tracks, even in busy environments with 3 TeV CLIC beam-induced backgrounds. In this talk, the effectiveness of the algorithm will be shown by presenting its performance for the CLIC detector, which features a low-mass silicon vertex and tracking system. Moreover, given its geometry-agnostic approach, the algorithm is easily adaptable to other detector designs and interaction regions, with successful results also for the CLIC detector modified for FCC-ee.
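
        The key property of the conformal map can be checked in a few lines (standard textbook transform, not CLICdp code):

          # u = x/(x^2+y^2), v = y/(x^2+y^2) sends circles through the origin
          # to straight lines, so curved tracks from the IP become line finding.
          import numpy as np

          a, b = 3.0, 4.0                      # circle through the origin:
          R = np.hypot(a, b)                   # centre (a, b), radius sqrt(a^2+b^2)
          phi = np.linspace(0.2, 2.0, 8)
          x, y = a + R * np.cos(phi), b + R * np.sin(phi)

          r2 = x**2 + y**2
          u, v = x / r2, y / r2

          # The image obeys the straight line 2a*u + 2b*v = 1:
          print(np.max(np.abs(2 * a * u + 2 * b * v - 1)))  # ~ 1e-16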

        Speakers: Emilia Leogrande (CERN), Daniel Hynds (University of Glasgow (GB))
      • 10:30
        Coffee break 30m
      • 11:00
        TrickTrack: An experiment-independent, cellular-automaton based track seeding library 25m

        The design of next-generation particle accelerators is evolving towards higher and higher luminosities, as seen in the HL-LHC upgrade and the plans for the Future Circular Collider (FCC). Writing track reconstruction software that can cope with these high-pileup scenarios is a big challenge, due to the inherent complexity of current algorithmic approaches. In this contribution we present TrickTrack, a track reconstruction toolkit based on the hit-chain maker used for track seeding in the CMS experiment. It aims at solving pattern recognition problems efficiently in a concurrency-friendly implementation, while remaining general enough to be of use in most tracking detectors. The performance of TrickTrack in the FCC-hh design study, which features pileup rates of 1000 interactions per bunch crossing and a high-occupancy tracking environment, is presented as the first use case beyond CMS.

        Speakers: Valentin Volkl (University of Innsbruck (AT)), Felice Pantaleo (CERN)
      • 11:30
        ACTS Status 25m

        Reconstructing charged-particle trajectories is a central task in the reconstruction of most particle physics experiments. With increasing intensities and ever higher track densities, this combinatorial problem becomes more and more challenging. Preserving physics performance under these difficult experimental conditions, while at the same time keeping the computational cost at a reasonable level, is a challenge for many experiments. A Common Tracking Software (ACTS) is an effort to bring well-tested tracking software to modern compilers and computing architectures, to allow easy computational optimization of existing algorithms as well as simple evaluation of new approaches. Based on the ATLAS tracking software, ACTS aims to provide a clean code base that is optimized for concurrent and vectorized execution. This talk will discuss the basic design decisions, the current status, and the future roadmap.

        Speaker: Moritz Kiehn (Universite de Geneve (CH))
      • 12:00
        HEP.TrkX: Novel deep learning methods for track reconstruction 25m

        For the past year, the HEP.TrkX project has been investigating machine learning solutions to LHC particle track reconstruction problems. A variety of models were studied that drew inspiration from computer vision applications and operated on an image-like representation of tracking detector data. While these approaches have shown some promise, image-based methods face challenges in scaling up to realistic HL-LHC data due to high dimensionality and sparsity. In contrast, models that can operate on the spacepoint representation of track measurements (“hits”) can exploit the structure of the data to solve tasks efficiently.
        In this presentation we will show two sets of new deep learning models for reconstructing tracks using spacepoint data arranged as sequences or connected graphs. In the first set of models, recurrent neural networks (RNNs) are used to extrapolate, build, and evaluate track candidates, similar to Kalman Filter algorithms. Such models can express their own uncertainty when trained with an appropriate likelihood loss function. The second set of models uses graph neural networks for the tasks of hit classification and segment classification. These models read a graph of connected hits and compute features on the nodes and edges. They adaptively learn which hit connections are important and which are spurious. The models are scalable, with simple architectures and relatively few parameters. Results for all models will be presented on ACTS generic detector simulated data.

        Speaker: Dr Steven Andrew Farrell (Lawrence Berkeley National Lab (US))
      • 12:30
        TrackML Hackathon results 20m
        Speaker: David Rousseau (LAL-Orsay, FR)
      • 12:50
        Final remarks 10m
        Speaker: Shih-Chieh Hsu (University of Washington Seattle (US))