Connecting The Dots / Intelligent Trackers 2017

LAL-Orsay

Description

With parallel progress in pattern recognition algorithms and microelectronic technology, the design and performance of tracking detectors are rooted in the interplay of hardware and software: sensors, readout and trigger electronics, and online and offline reconstruction software. The main focus of the workshop is on pattern recognition and machine learning algorithms devoted to the reconstruction of particle tracks or jets in high energy physics experiments, and on the hardware developments that enable them.

This 2017 edition is a merger of the Connecting The Dots series (see CTD2015 Berkeley, CTD2016 Vienna) with the Workshop on Intelligent Trackers series (see WIT2010 Berkeley, WIT2012 Pisa, WIT2014 Penn).

The workshop consists of plenary sessions only, with a mix of invited talks and accepted contributions.

A 2D tracking hackathon was organized on Tuesday afternoon.

Tweeting about this workshop is welcome using #ctdwit. The workshop took place at the Laboratoire de l'Accélérateur Linéaire in Orsay, near Paris, France. Please check the main portal web page for accommodation, directions, social events and full registration.

Inquiries to: ctdwit2017-contact@googlegroups.com

Proceedings (on a voluntary basis) have been peer-reviewed and published online: EPJ Web of Conferences, INSPIRE-HEP.

The 2018 edition will take place 20-22 March 2018 at the University of Washington, Seattle, USA.

Registration
Registration for the TrackMLRamp hackathon on Tuesday afternoon
    • 13:00
      Registration
    • 1
    • 2
      A Multi-Purpose Particle Detector for Space Missions

      Precisely characterizing a radiation environment is essential for space exploration---manned and unmanned missions to, for example, the Moon or Mars---and astroparticle-physics experiments---for example, solar observations and cosmic-ray measurements. Particle detectors used for such endeavors must be compact, use as little power as possible, and withstand the harsh space environment. We are developing the Multi-purpose Active-Target Particle Telescope (MAPT), a detector capable of omnidirectionally measuring particle fluxes with a large geometric acceptance. The detector fits into a cube with an edge length of $10\,$cm and weighs less than $3\,$kg. It is essentially a tracking calorimeter with a segmented active core made of scintillating plastic fibers. Besides tracking charged particles and ions, MAPT can also identify them by analyzing their energy-loss profiles using extended Bragg-curve spectroscopy methods. Anti-ions can be distinguished by the secondary particles created by their annihilation inside the detector. We simultaneously analyze track parameters and particle characteristics using Bayesian inference techniques to minimize the expected uncertainties of particle flux measurements. We tested basic implementations of a particle filter and a maximum-likelihood method and obtained an angular resolution of about $3\,$degrees and an energy resolution of better than $2.5\%$ for $50\,$MeV protons. We present the expected capabilities of the detector, use cases, and first results from beam experiments and discuss the requirements and challenges of on- and off-line data analysis methods.

      Speaker: Mr Thomas Pöschl (Technical University Munich)
    • 3
      4D trackers (space + time information)

      As particle physics strives for increased precision and sensitivity in its measurements, the beam energy, power, and per-bunch luminosity of colliders increase, producing significantly more complex events, primarily by way of overlapping collisions (pileup), that test the performance and robustness of our algorithms and analyses.

      One avenue towards mitigating the effects of pileup in tracking detectors is to create sensors with finer and finer segmentation to accurately identify individual particles, and this has seen great success within the field. Next-generation colliders like the HL-LHC (200 pileup) and FCC (1000 pileup with 25 ns bunches), where pileup vertices can occur multiple times per millimeter, pose a significant challenge to spatial information alone, since interactions can overlap in space and track distributions from nearby vertices can become confused. This leads to degradations in reconstruction performance and physics analysis. A clear way to mitigate these degradations is to use the time-at-closest-approach of tracks to more precisely connect tracks to their true vertex of origin, increasing the quality and amount of information in the event.

      Recent advancements in silicon precision timing detectors indicate that finely pixellated, lightweight planar timing detectors with MIP sensitivity are coming within reach. I will discuss these devices, and demonstrate their uses in terms of mitigation of pileup in "timing layer" configurations, where there is one layer providing the time stamp. This initial discussion will be extended to true 4-dimensional trackers where each coordinate measurement is a point in spacetime, focusing on the algorithmic implications and next steps needed to achieve these devices.
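
      As a toy numerical illustration of why a time stamp helps (a hedged sketch with hypothetical names and resolutions, not the speaker's method), a track can be assigned to the vertex with the best chi-squared compatibility in (z, t) rather than in z alone:

        import numpy as np

        def vertex_compatibility(z0, t0, vz, vt, sigma_z=0.05, sigma_t=0.030):
            """chi^2 compatibility of a track (z0 [mm], t0 [ns]) with a vertex
            (vz, vt). Resolutions are illustrative: 50 um in z, 30 ps in t."""
            return ((z0 - vz) / sigma_z) ** 2 + ((t0 - vt) / sigma_t) ** 2

        # two vertices overlapping in z but separated in time
        vertices = [(0.00, 0.000), (0.02, 0.120)]   # (z [mm], t [ns])
        track = (0.01, 0.115)                       # reconstructed (z0, t0)
        best = min(range(len(vertices)),
                   key=lambda i: vertex_compatibility(*track, *vertices[i]))
        print("track assigned to vertex", best)     # -> 1, resolved only by time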

      Speaker: Lindsey Gray (Fermi National Accelerator Lab. (US))
    • 4
      "4D" Tracking with a Timepix3 detector

      Timepix3 detectors are the latest generation of hybrid active pixel detectors of the Medipix family. Such detectors consist of an active sensor layer which is flip-chip bump-bonded to the readout ASIC, segmenting the detector into a square matrix of 256 x 256 pixels (pixel pitch 55 µm). Ionizing radiation interacting in the active sensor material creates charge carriers, which drift towards the pixelated electrode, where they are collected. In each pixel, the time of the interaction (time resolution 1.56 ns) and the energy deposition are measured.
      We demonstrate with measured data (120 GeV pions, cosmic muons) how the time information can be used for "4D" particle tracking, with the three spatial dimensions complemented by the energy losses along the particle trajectory (dE/dx). Since the coordinates in the detector plane are given by the pixelation (x, y), the x- and y-resolution is determined by the pixel pitch. The z-coordinate is reconstructed by evaluating the charge carrier drift times, with a resolution experimentally proven to be better than 40 µm for a Timepix3 equipped with a 500 µm thick silicon sensor. Due to the data-driven readout scheme, the track information can be obtained in real time.
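
      The z-reconstruction principle can be sketched as follows (illustrative drift velocity and function names; the real analysis uses a calibrated, field-dependent drift model):

        import numpy as np

        PITCH = 0.055      # pixel pitch [mm]
        V_DRIFT = 0.05     # illustrative drift velocity [mm/ns]

        def hits_to_3d(cols, rows, toa_ns, t_ref_ns):
            """Turn Timepix3 pixel hits into 3D points: x, y from pixelation,
            z from the charge-carrier drift time relative to the track time."""
            x = np.asarray(cols) * PITCH
            y = np.asarray(rows) * PITCH
            z = (np.asarray(toa_ns) - t_ref_ns) * V_DRIFT   # drift time -> depth
            return np.column_stack([x, y, z])

        print(hits_to_3d([10, 11, 12], [20, 20, 21], [5.0, 7.5, 10.0], t_ref_ns=3.0))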

      Speaker: Benedikt Ludwig Bergmann (Czech Technical University (CZ))
    • 5
      Tracking with the ultra fast pixelised Tipsy single photon detector

      Tipsy is an assembly of a pixel chip and a stack of transmission dynodes ("tynodes"), placed in vacuum under a classical window + photocathode. A tynode is an array of thin membranes: an electron impinging on the upper surface causes the emission of (currently) 5.5 secondary electrons at the bottom side. A stack of 5 tynodes thus causes a cloud of 5.5^5 ≈ 5k electrons to enter the pixel input pad, sufficient to activate the pixel circuitry.
      Due to the small geometry of the stack and the uniform straight electron paths between the tynodes, the time jitter of electrons entering the pixel is less than a ps.

      A tynode with a transmission secondary electron yield of 5.5 has been realised. A prototype Tipsy is now under construction in the form of a modified Planacon (Photonis).
      New concepts for TimePix chips are being proposed, with a TDC-per-pixel with time resolution better than 10 ps.

      Future low-cost, mass-produced Tipsy detectors may have the following specifications:

      • Thin (4 mm), planar, light, square geometry
      • 10 ps time resolution and 10 um 2D spatial resolution per single soft photon
      • amplification stack free of dark noise
      • absence of ion feedback
      • operates in strong B-fields
      • hit-pixel data rates up to 5 Gb/s

      Tipsy could be well applied in PET scanners. Instead of a scintillator, a lead glass or sapphire cube could be read out at all six sides, recording (prompt) Cherenkov photons originating from the 511 keV gamma interaction point. By means of GPS-like algorithms, this point can be reconstructed in 4D with high precision.

      The tracking of MIPs could be done by replacing the photocathode with an 'e-brane': a foil having a high probability of emitting at least one electron at the point where the track of the MIP crosses its surface. Another method would be to create Cherenkov photons in thin transparent (mylar) foils. A future inner tracker of a collider experiment would then take the form of a vacuum tank around the interaction point in which foils are stretched in a cylindrical geometry at several radii. The detectors of Cherenkov photons could be placed on the inner side of the vacuum cylinder wall, meters away from the interaction point.

      Speaker: Harry Van Der Graaf (Nikhef National institute for subatomic physics (NL))
    • 16:15
      Coffee break
    • 6
      Wireless data transmission for high energy physics applications

      Over the past few years, wireless data transmission technologies have seen tremendous progress to cope with the ever-increasing demand for bandwidth, for instance in the mobile sector. Developments in short-distance communication are pushing towards frequencies in the mm-band, which allow the use of an even higher bandwidth and smaller form factors.

      In high energy physics the demand for bandwidth is increasing rapidly with the development of more and more granular detectors. Tracking detectors especially require readout systems with thousands of links that can each transfer several Gbit/s. At the same time, stringent space, material and power constraints are set on these readout systems.

      The WADAPT project (Wireless Allowing Data and Power Transmission) has been started to study the feasibility of wireless data and power transmission for future tracking detectors. The current focus is on communication in the 60 GHz band, which offers high bandwidth, a small form factor and an already mature technology. Tracking detectors can benefit greatly from wireless data transmission. For instance, the material budget of tracking detectors can potentially be minimized and installation can be simplified, as the number of cables and connectors can be reduced. Data transmission topologies that are impossible with wired data links can be realized using wireless communication, allowing for even faster on-detector data processing.

      This talk presents current developments of 60 GHz transceiver chips for HEP applications. Studies of antennas and data transmission will be shown, as will studies of crosstalk between wireless links in a reflecting environment. Silicon strip and pixel sensors have been operated while exposed to a 60 GHz data transmission, and the results of these studies will be presented.

      Speaker: Sebastian Dittmeier (Ruprecht-Karls-Universitaet Heidelberg (DE))
    • 7
      Potential of Monolithic CMOS pixel detectors for future track triggers

      Monolithic pixel sensors based on commercial CMOS processes offer many features which are important for track trigger applications. Most relevant are the smaller pixel sizes at reduced material budget and the lower production costs. Industrially produced monolithic pixel sensors are significantly cheaper than standard semiconductor trackers, thus allowing large areas of tracking detectors to be instrumented with highly granular pixel sensors.
      I will discuss the main requirements for track triggers at (future) hadron colliders and explain how these requirements are fulfilled by monolithic pixel sensors. An overview of current hardware activities is given. First simulation results using a design based on large-area pixel sensors are presented.

      Speaker: Andre Schoening (Ruprecht-Karls-Universitaet Heidelberg (DE))
    • 8
      Young Scientist Forum : High precision timing with HVCMOS MAPS sensors

      There is an increasing demand for precision time measurement in particle physics experiments, in order to reject pile-up as efficiently as possible. This translates into projects for precision timing tracker/preshower detectors at the LHC experiments, in the frame of the high-luminosity upgrades (Phase 2, HL-LHC). There is little doubt that these techniques, if they can be used successfully at the HL-LHC, will enter the arsenal of standard instrumentation for high-occupancy environments like the one expected at FCC-hh.
      Operating the LHC with more than 200 collisions per crossing of proton bunches, as is foreseen from 2025 onward (the HL-LHC phase), will greatly complicate the analysis, particularly in the forward regions, where it will be very difficult to link the tracks with the primary vertex (associated with the only interesting collision) and thus to prevent the formation of fake jets created by the random stacking (or pile-up) of tracks from several secondary vertices. A solution proposed to fight this pile-up effect in the forward regions (pseudo-rapidity greater than 2.4) is to measure very accurately the time of arrival of the particles just before they enter the endcap calorimeter. This makes it possible to reject, by the time-of-flight technique, the tracks associated with secondary vertices spread longitudinally over a distance of about 20 cm around the centre of the detector. A time resolution of a few tens of ps with a spatial granularity better than a few mm will be needed to obtain a fake-jet rejection rate that is acceptable for physics analyses. Such performance should be obtained with a detector capable of withstanding the very high radiation levels (similar to the levels the outermost layers of the tracker will have to survive) expected in this part of the detector.
      The main characteristic of the HV-HR CMOS monolithic technology is that it enables the integration in a single IC of the detection part (a reverse-biased diode) together with the needed stages of the front-end electronics for signal treatment and data sparsification. HVCMOS silicon sensors allow the application of a very strong electric field in the depleted zone of the charge collection diode. This has several advantages. It opens the possibility of having a very large depleted region, which makes it possible to reach a detection efficiency very close to 100% for a single particle, with a good signal-to-noise ratio, enabling good time measurements without in-situ amplification of the deposited charge. Another advantage of HVCMOS technology is its low cost, since these technologies are widely used in the automotive industry, where the cost has been pushed down by the high production volumes. This also implies that going from a small demonstrator to a full-scale detector should in principle be easy, once the demonstrator performance is validated.
      Simulation studies based on the HV-HR CMOS LFoundry 150 nm technology design kit have shown that a resolution of the order of 50 to 80 ps per MIP impact point can in principle be reached for MAPS pixel sensors of 1 mm pitch. We will present the simulation results on the performance, and the architecture of a demonstrator chip featuring a few rows and columns of individual pixels. Each pixel is equipped with a fast preamplifier, a discriminator, and ancillary electronics to identify the position of the pixel hit, DACs to configure the discriminator, plus an injection chain to calibrate in situ the performance of the readout electronics. At the time of the workshop, the chip will hopefully be in the final submission stages.

      Speaker: Mr Mohamed Lachkar (CEA-Irfu, Saclay)
    • 9
      Young Scientist Forum : Lossless data compression for the HL-LHC silicon pixel detector readout

      The readout rates required for the next-generation pixel detectors to be used in the ATLAS and CMS experiments at the High-Luminosity LHC (HL-LHC) will be about 3 Gb/s/cm2. The very high radiation levels and the small space available make on-chip optical conversion impossible, so data will have to be transported out of the detector volume on electrical links, implying a significant material contribution. In this contribution we will present the results of an implementation of lossless arithmetic data compression that achieves an efficient bandwidth reduction. We will discuss the performance of the algorithm and its integration in the data output.
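
      The contribution itself gives no implementation details; as a hedged aside, the headroom available to any lossless scheme can be estimated from the Shannon entropy of the symbol stream, which arithmetic coding approaches closely:

        import math
        from collections import Counter

        def entropy_bits_per_symbol(symbols):
            """Shannon entropy: the lower bound (bits/symbol) for any lossless
            code, which arithmetic coding approaches closely."""
            counts = Counter(symbols)
            n = len(symbols)
            return -sum(c / n * math.log2(c / n) for c in counts.values())

        # illustrative 4-bit ToT stream, strongly peaked at small values
        tot_stream = [1, 1, 2, 1, 3, 1, 2, 1, 1, 4, 1, 2, 1, 1, 2, 15]
        h = entropy_bits_per_symbol(tot_stream)
        print(f"{h:.2f} bits/symbol vs 4 raw -> compression ratio ~ {4 / h:.2f}")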

      Speaker: Stamatios Poulios (Universita di Pisa & INFN (IT))
    • 10
      Young Scientist Forum : Functional nonparametric regression for track reconstruction

      This paper describes the reconstruction of trajectories of charged particles (tracks) inside a generic tracker, considering tracks as functional data and including particle properties. First the clusters are broadly grouped in regions of phase space to break down the pattern recognition problem into groups of tracks which point in similar directions. Curves are then interpolated from discrete cluster measurements, and each curve is expressed as a function of the particle trajectory within the detector, its cluster shape and charge. Functional data analysis modelling, in particular Functional Principal Components Analysis, is then applied with a functional predictor (representing the particle features) and functional responses (the tracks). Finally, two types of regression are performed on the functional responses, using linear models and more complex Support Vector Regression, for the track reconstruction. The reconstruction is tested in the high-pileup environments predicted for the HL-LHC and FCC-hh scenarios.

      Speaker: Cherifa Sabrina Amrouche (Universite de Geneve (CH))
    • 19:00
      Wine and cheese at LAL
    • 11
      The design and simulated performance of a fast Level 1 track trigger for the ATLAS High Luminosity Upgrade

      The ATLAS experiment at the high-luminosity LHC will face a five-fold increase in the number of interactions per collision relative to the ongoing Run 2. This will require a proportional improvement in rejection power at the earliest levels of the detector trigger system, while preserving good signal efficiency. One critical aspect of this improvement will be the implementation of precise track reconstruction, through which sharper trigger turn-on curves can be achieved, and b-tagging and tau-tagging techniques can in principle be implemented. The challenge of such a project comes in the development of a fast, custom electronic device integrated in the hardware-based first trigger level of the experiment, with repercussions propagating as far as the detector read-out philosophy. This talk will discuss the requirements, architecture and projected performance of the system in terms of tracking, timing and physics, based on detailed simulations. Studies are carried out comparing two detector geometries and using data from the strip subsystem only or both strip and pixel subsystems.

      Speaker: Mikael Martensson (Uppsala University (SE))
    • 12
      An FPGA based track finder at Level 1 for CMS at the High Luminosity LHC

      "A new tracking detector is under development for the Compact Muon Solenoid (CMS) experiment at the High-Luminosity LHC (HL-LHC). It includes an outer tracker that will construct stubs, built from clusters reconstructed in two closely-spaced layers, for the rejection of hits from low transverse momentum tracks and transmit them off-detector at 40MHz. If tracker data is to contribute to keeping the Level-1 trigger rate at around 750 kHz under increased luminosity, a crucial component of the upgrade will be the ability to identify tracks with transverse momentum above 3 GeV/c by building tracks out of stubs. A concept for an FPGA-based track finder using a fully time-multiplexed architecture is presented, where track candidates are identified using a projective binning algorithm based on the Hough Transform. A complete hardware demonstrator based on the MP7 processing board has been assembled to prove the entire system from the input to the tracker readout boards to producing tracks with fitted helix parameters. This has been achieved within the latency constraints with existing technology in 1/8th of the tracker solid angle at up to 200 proton-proton interactions per event. The track reconstruction system demonstrated, the architecture chosen, the achievements to date and future options for such a system will be discussed."

      Speaker: Alexander Morton (Brunel University (GB))
    • 13
      L1 track trigger for the CMS HL-LHC upgrade using AM chips + FPGA

      The increase in luminosity at the HL-LHC will require the introduction of tracker information in the Level-1 trigger system of CMS to maintain an acceptable trigger rate for selecting interesting events, despite the order-of-magnitude increase in minimum bias interactions. To extract the track information within the required latency, dedicated hardware has to be used. We present tests of a prototype system (Pattern Recognition Mezzanine), as the core of pattern recognition and track fitting for the CMS experiment, combining the power of both the Associative Memory custom ASIC and modern Field Programmable Gate Array (FPGA) devices. The mezzanine uses the latest available associative memory devices (AM06) and the most modern Xilinx Ultrascale FPGA. Results of tests of a complete tower comprising about 0.5 million patterns will be presented, using simulated events in the upgraded CMS detector as input. We will show the performance of the pattern matching, track finding and track fitting, along with the latency and processing time needed.

      Speaker: Giacomo Fedi (Universita di Pisa & INFN (IT))
    • 14
      L1 Tracking at CMS For the HL-LHC using the Tracklet approach

      The High Luminosity LHC (HL-LHC) is expected to deliver luminosities of $5\times10^{34}~\mathrm{cm}^{-2}\mathrm{s}^{-1}$, with an average of about 140 overlapping proton-proton collisions per bunch crossing. These extreme pileup conditions place stringent requirements on the trigger system to be able to cope with the resulting event rates. A key component of the CMS upgrade for HL-LHC is a track trigger system which would identify tracks with transverse momentum above 2 GeV already at the first-level trigger. This talk presents a proposal for implementing the L1 tracking using tracklets for seeding.

      Results from a recently completed demonstrator project, which shows good performance and the ability to reconstruct tracks within $4\,\mu s$ of the collision, will be presented, along with projections for the ultimate system performance.

      Speaker: Margaret Zientek (Cornell University (US))
    • 11:00
      Coffee break (group photo)
    • 15
      Improved AM chip pattern recognition with optimized ternary bit usage

      For the ATLAS Fast TracKer (FTK) hardware-based track reconstruction
      system, the AM chip is used in the pattern recognition step. The
      version of the AM chip used in the FTK is based on eight associative
      memory cells per pattern, corresponding to eight detector planes. Patterns
      are identified for addresses where seven out of the eight memory cells
      indicate a matching hit. The associative memories each accept 15-bit
      hit addresses for the look-up. Three of these bits are ternary, such
      that the associative memory can be programmed at each address to
      either match the bit exactly (B=0 or B=1) or to always match
      (B=X). The use of ternary bits makes it possible to tune the
      resolution of the pattern match for each layer of each pattern
      independently, thus achieving a better signal-to-noise ratio in the
      pattern match. In this talk, a fast and efficient method to optimize the
      use of ternary bits is presented. It is based on limiting the number of
      bits which have state X while the desired patterns are imported to the
      AM chip. Compared with previously used methods, improved data flow is
      achieved at constant efficiency.
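
      A minimal software model of ternary matching may help fix ideas (illustrative values and helper names; the actual optimization runs while the pattern bank is loaded): each layer stores a value and a mask whose X bits are excluded from the comparison, and a pattern fires on a 7-of-8 majority:

        # Minimal model of an AM pattern with ternary (don't-care) bits.
        # Each layer stores a 15-bit value and a mask; mask bits 0 mean "X".

        def make_layer(value, dc_bits):
            """dc_bits: how many low-order bits are ternary 'X' (always match)."""
            mask = 0x7FFF & ~((1 << dc_bits) - 1)
            return value & mask, mask

        def pattern_matches(pattern, hits, min_layers=7):
            """FTK-style majority: at least 7 of 8 layers must match."""
            matched = sum((hit & mask) == value
                          for (value, mask), hit in zip(pattern, hits))
            return matched >= min_layers

        # 8-layer pattern, up to 3 ternary bits per layer
        pattern = [make_layer(v, dc) for v, dc in
                   [(0x1A2C, 2), (0x0F10, 1), (0x2B00, 3), (0x1111, 0),
                    (0x0A0A, 2), (0x1F00, 1), (0x2222, 2), (0x0101, 0)]]
        hits = [0x1A2D, 0x0F10, 0x2B05, 0x1111, 0x0A09, 0x1F00, 0x2223, 0x7777]
        print(pattern_matches(pattern, hits))   # True: 7 layers match, last misses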

      Speaker: Stefan Schmitt (Deutsches Elektronen-Synchrotron (DE))
    • 16
      Young Scientist Forum : Fast and reliable Tracking for the High-Level-Trigger at Belle II

      The Belle II detector is currently being built in Tsukuba, Japan and
      will record $e^+e^-$ collision events at a record-breaking instantaneous
      luminosity of $8\cdot 10^{35} \ \mathrm{cm^{-2}s^{-1}}$ which is delivered
      by the SuperKEKB collider. Such a large luminosity is required to significantly
      improve the precision on measurements of $B$, $D$ and $\tau$ decays to probe
      for signs of physics beyond the standard model.

      Luminosity- and beam-size dependent background processes make up a large part of
      the measured events and an early software-based trigger, which combines the
      measurements of the individual sub-detectors, is required to filter the data and only store
      events which are of interest to physics analyses.

      Tracking, especially in the central drift chamber (CDC),
      plays a major role here, as it can provide variables needed for a correct event
      classification in a fast and reliable way. This talk will present the planned
      High-Level-Trigger scheme for Belle II and especially the chosen tracking concepts
      to reach the required speed and reliability as well as some results on
      the runtime and the efficiency of the algorithm.

      Speaker: Nils Braun (KIT - Karlsruher Institute of Technology (DE))
    • 17
      Young Scientist Forum : Online Track Reconstruction and Data Reduction for the Belle II Experiment using DATCON

      The new Belle II experiment at the asymmetric $e^+e^-$ accelerator SuperKEKB at KEK in Japan is designed to deliver a record instantaneous luminosity of $8 \times 10^{35} \text{cm}^{-2} \text{s}^{-1}$. To perform high-precision track reconstruction, e.g. for measurements of time-dependent CPV decays and secondary vertices, the Belle II detector is equipped with a DEPFET pixel detector (PXD) of high granularity, containing 8 million pixels in total. The high instantaneous luminosity and short bunch-crossing intervals produce a large stream of online data in the PXD, which needs to be reduced significantly for offline storage. This is done using an FPGA-based Data Acquisition Tracking and Concentrator Online Node (DATCON), which uses information from the Belle II silicon strip vertex detector (SVD) surrounding the PXD to carry out online track reconstruction, extrapolate tracks back to the PXD, and define Regions of Interest (ROI) on the PXD. This reduces the data stream by approximately a factor of ten, with an ROI-finding efficiency of >90% for PXD physics hits inside the ROI.

      In this talk, I will present the current status of the FPGA-based implementation of the track reconstruction using the Hough transformation and the offline simulation.

      Speaker: Christian Wessel (University of Bonn)
    • 18
      Young Scientist Forum : Online track reconstruction using Kalman Filters on FPGAs

      The significant instantaneous luminosity planned at the High-Luminosity LHC will present a challenging environment for online track reconstruction.
      Hardware acceleration of tracking algorithms on parallel architectures is an attractive solution to meeting latency restrictions in online systems.
      Here we present an FPGA implementation of the Kalman Filter for fitting and cleaning tracks.
      The implementation has been developed targeting a Xilinx Virtex 7 FPGA.
      A High Level Synthesis language, MaxJ, was used to simplify the implementation of the algorithm compared to conventional FPGA programming techniques.
      A single iteration latency of 210 ns is achieved at a clock frequency of 240 MHz.
      Due to the small resource usage of the matrix maths, 36 independent Kalman Filter nodes operate in the chip.
      Operation pipelining enables the processing of multiple data simultaneously within a node.
      The implementation has a theoretical upper limit of 1 billion track fits per second for a 6 layer tracker.
      At the data rate observed in high pile-up Monte Carlo data, with a preceding track finding stage, we fit 23 million tracks per second, with spare capacity.
      Here we present the algorithm, its performance and applications, including at different levels of online reconstruction.
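
      For orientation, the filter update can be sketched in plain Python for a straight-line state [intercept, slope] with one 1D measurement per layer (an illustrative toy, not the MaxJ implementation, which works on helix parameters in the FPGA):

        import numpy as np

        def kalman_track_fit(zs, xs, sigma=0.1):
            """Progressive straight-line fit x = x0 + slope*z, one update per
            layer. State s = [x, slope] at the current z; measurement error sigma."""
            s = np.array([xs[0], 0.0])                  # initial state
            C = np.diag([1e3, 1e3])                     # large initial covariance
            H = np.array([[1.0, 0.0]])                  # we measure position only
            z_prev = zs[0]
            for z, x in zip(zs, xs):
                F = np.array([[1.0, z - z_prev], [0.0, 1.0]])   # propagate
                s, C = F @ s, F @ C @ F.T
                K = C @ H.T / (H @ C @ H.T + sigma**2)          # gain (2x1)
                s = s + (K * (x - H @ s)).ravel()
                C = (np.eye(2) - K @ H) @ C
                z_prev = z
            return s, C

        zs = np.array([20., 40., 60., 80., 100., 120.])          # 6 layers
        xs = 0.5 + 0.01 * zs + np.random.normal(0, 0.1, 6)       # noisy hits
        state, cov = kalman_track_fit(zs, xs)
        print("fitted slope ~", round(state[1], 4))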

      Speaker: Sioni Paris Summers (Imperial College (GB))
    • 19
      Optimal use of charge information for HL-LHC pixel readout

      Due to an enormous collision rate, charge information from particles traversing the innermost layers of the upgraded ATLAS and CMS detectors will be discretized using the time over threshold (ToT) method. Clever data manipulation, compression, or augmentation schemes can make a significant impact on downstream pattern recognition algorithms. In the context of the high-luminosity LHC (HL-LHC) pixel readout chip design, we systematically study the impact of various schemes on single and multi-particle cluster resolution, efficiency, classification, and particle identification. We show that with limited charge information, one can provide nearly optimal input to the pattern recognition for each of these tasks. This work provides an important input to the design of the next generation pixel chips that must cope with extreme rates (GHz/cm$^2$), data volumes (1 Gbps/cm$^2$), and radiation damage (1 GRad) at the HL-LHC.
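
      One of the effects under study can be sketched with a toy model (illustrative thresholds and names, not the actual chip parameters): quantizing pixel charge to a few ToT bits and taking the ToT-weighted cluster centroid:

        import numpy as np

        def tot_quantize(charge_e, threshold=1000., e_per_count=1500., n_bits=4):
            """Discretize pixel charge [electrons] to an n-bit ToT count."""
            tot = np.floor(np.maximum(charge_e - threshold, 0) / e_per_count)
            return np.clip(tot, 0, 2**n_bits - 1)

        # two-pixel cluster: charge sharing encodes the sub-pixel position
        pitch = 50.0                                    # um
        charges = np.array([9000., 3000.])              # electrons, pixels 0 and 1
        centers = np.array([0.0, pitch])
        tot = tot_quantize(charges)
        centroid = (tot * centers).sum() / tot.sum()    # centroid from ToT
        true = (charges * centers).sum() / charges.sum()
        print(f"ToT centroid {centroid:.1f} um vs true {true:.1f} um")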

      Speaker: Ben Nachman (Lawrence Berkeley National Lab. (US))
    • 13:15
      Lunch break
    • 20
      Abstraction in scientific data visualization: application to brain connectivity and structural biology

      Scientific simulations or data acquisition processes often result in large amounts of data samples that need to be "connected" to allow people to understand the information/meaning hidden within. However, if we simply connect all related data points we may end up with an even larger dataset that is more difficult to understand. I will therefore talk about illustrative forms of visualization that are inspired by a long tradition of hand-made illustration. In particular, I will discuss the concept of abstraction, using examples from brain connectivity visualization and from structural biology, covering different forms of abstraction, photometric and geometric, and showing how they can be used to create meaningful illustrative visualizations of scientific datasets.

      Speaker: Dr Tobias Isenberg (INRIA-Saclay)
    • 21
      Robust classification of particle tracks for characterization of diffusion and dynamics in fluorescence microscopy

      The characterization of molecule dynamics in living cells is of paramount interest in quantitative microscopy. This challenge is usually addressed in fluorescent video-microscopy from particle trajectories computed by tracking algorithms. However, classifying individual trajectories into three diffusion groups – subdiffusion, free diffusion (or Brownian motion) and superdiffusion – is a difficult task. To overcome this problem, we have developed a two-stage approach based on a statistical measure of diffusion that requires the setting of only one parameter, corresponding to a p-value. In the first stage, the procedure is related to a statistical test with Brownian motion as the null hypothesis and subdiffusion and superdiffusion as the alternative hypotheses. The testing procedure is well grounded in statistics, and robust to different trajectory lengths and low signal-to-noise ratios. However, it is known that applying a test multiple times without care leads to a high number of false positives.
      Accordingly, in the second stage, we modify the results of the first stage to address this problem. We consider the multiple-testing framework to reduce the number of trajectories wrongly classified as superdiffusion or subdiffusion. This approach has been developed especially to process individual trajectories provided by particle tracking algorithms in 2D+t and 3D+t images acquired with standard microscopy methods, such as wide-field or confocal microscopy, or with super-resolution microscopy such as Single Particle Tracking PALM (SPT-PALM). We demonstrate that the proposed approach is more robust than previous techniques, including the Mean Square Displacement (MSD) method.
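
      For reference, the classical MSD baseline that the authors compare against can be sketched as follows (illustrative code, not theirs): estimate the anomalous exponent alpha from the log-log slope of the MSD and classify the trajectory:

        import numpy as np

        def msd(traj, max_lag=10):
            """Mean square displacement of an (N, d) trajectory, lags 1..max_lag."""
            return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag])**2, axis=1))
                             for lag in range(1, max_lag + 1)])

        def classify(traj, tol=0.3):
            """MSD ~ t^alpha: alpha<1 subdiffusion, ~1 Brownian, >1 superdiffusion."""
            lags = np.arange(1, 11)
            alpha = np.polyfit(np.log(lags), np.log(msd(traj)), 1)[0]
            if alpha < 1 - tol:  return alpha, "subdiffusion"
            if alpha > 1 + tol:  return alpha, "superdiffusion"
            return alpha, "Brownian"

        rng = np.random.default_rng(0)
        brownian = np.cumsum(rng.normal(size=(500, 2)), axis=0)
        directed = brownian + 0.8 * np.arange(500)[:, None]   # drift -> super
        print(classify(brownian), classify(directed))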

      Speaker: Dr Charles Kervrann (INRIA, Centre Rennes - Bretagne Atlantique)
    • 22
      Status of Track Machine Learning challenge and introduction to the TrackMLRamp hackathon

      Tracking at the HL-LHC in ATLAS and CMS will be very challenging. In particular, the pattern recognition will be very resource hungry, as extrapolated from current conditions. There is a huge ongoing effort to optimise the current software. In parallel, completely different approaches should be explored.
      To reach out to Computer Science specialists, a Tracking Machine Learning challenge (trackML) is being set up, building on the experience of the successful Higgs Machine Learning challenge in 2014 (which brought together ATLAS and CMS physicists and Computer Scientists). A few relevant points:
      • A dataset consisting of a simulation of a typical full-Silicon LHC experiment has been created, listing for each event the measured 3D points and the list of 3D points associated with each true track. The dataset is large, to allow the training of data-hungry Machine Learning methods; the orders of magnitude are: one million events, 10 billion tracks, 1 terabyte. The typical CPU time spent by traditional algorithms is 100 s per event.

      • Participants in the challenge should find the tracks in an additional test dataset, i.e. build the list of 3D points belonging to each track (deriving the track parameters is not the topic of the challenge).

      • A figure of merit should be defined which combines CPU time, efficiency and fake rate (with an emphasis on CPU time).

      • The challenge platform should allow measuring the figure of merit and rating the different algorithms submitted.

      The emphasis is on exposing innovative approaches, rather than hyper-optimising known approaches. Machine Learning specialists have shown deep interest in participating in the challenge, with new approaches like Convolutional Neural Networks, Deep Neural Nets, Monte Carlo Tree Search and others.

      A slimmed-down 2D version of the challenge will be proposed right after this talk, and will be introduced here.

      Speaker: David Rousseau (LAL-Orsay, FR)
    • 16:00
      Coffee break
    • 23
      TrackMLRamp hackathon : a 2D tracking challenge

      To participate in this hackathon, it is necessary to register using the "hackathon registration" button in the left-hand side menu.

      See the motivation for an HL-LHC tracking challenge in the abstract of the previous talk. A RAMP (Rapid Analysis and Model Prototyping) on the Paris-Saclay platform is a mini-challenge/hackathon. Simulated LHC-like events in 2D will be proposed, simplified but not too simple: circular detectors with Si-like resolution, a uniform magnetic field, multiple scattering. The name of the game will be to propose new approaches to associate hits into tracks. Code submissions (in Python only) will be evaluated online and ranked on a leaderboard, based on the best efficiency with a limited fake rate. Participants will be able to download the best contributions and improve on them.
      Set-up instructions are provided.
      Winners, as determined by the leaderboard on Thursday at 9AM, were congratulated at 11AM on Thursday (see agenda).
      However, the hackathon site remains open for the foreseeable future.
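
      As a flavour of what a baseline submission might look like (a naive sketch under toy assumptions, with invented names and geometry; not a reference solution), a simple track-following approach in Python could seed on the innermost layer and extend outwards:

        import numpy as np

        def follow_tracks(layers, window=0.05):
            """Naive track following on a toy 2D detector: seed with a hit on
            the innermost layer, then extend layer by layer to the nearest hit
            in phi, allowing a slowly drifting phi (curvature) between layers."""
            used = [set() for _ in layers]
            tracks = []
            for i0, phi0 in enumerate(layers[0]):
                track, phi, dphi = [(0, i0)], phi0, 0.0
                for l in range(1, len(layers)):
                    pred = phi + dphi                   # constant-curvature guess
                    cand = [(abs(p - pred), i) for i, p in enumerate(layers[l])
                            if i not in used[l] and abs(p - pred) < window]
                    if not cand:
                        break
                    _, i = min(cand)
                    dphi, phi = layers[l][i] - phi, layers[l][i]
                    track.append((l, i))
                if len(track) == len(layers):           # demand a full track
                    for l, i in track:
                        used[l].add(i)
                    tracks.append(track)
            return tracks

        # two toy tracks with curvature, phi(r) = phi0 + k*r, layers at r=1..5
        ks, phi0s = [0.02, -0.015], [0.3, 1.1]
        layers = [np.array([p + k * r for k, p in zip(ks, phi0s)])
                  for r in range(1, 6)]
        print(len(follow_tracks(layers)), "tracks found")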

      Speakers: Balázs Kégl (Linear Accelerator Laboratory), David Rousseau (LAL-Orsay, FR), Isabelle Guyon, Mikhail Hushchyn (Yandex School of Data Analysis (RU)), Yetkin Yilmaz (Laboratoire Leprince-Ringuet, France)
    • 24
      Performance of the ATLAS Tracking and Vertexing in the LHC Run-2 and Beyond

      Run 2 of the LHC has provided new challenges for track and vertex reconstruction, with higher centre-of-mass energies and luminosity leading to increasingly high-multiplicity environments and boosted, highly-collimated physics objects. In addition, the Insertable B-Layer (IBL), a fourth pixel layer, was inserted at the centre of ATLAS during the shutdown of the LHC. We will present results showing the performance of the track and vertex reconstruction algorithms using Run 2 data at the LHC and highlight recent improvements. These include a factor of three reduction in the reconstruction time, optimisation for the expected conditions, novel techniques to enhance the performance in dense jet cores, time-dependent alignment of sub-detectors and special reconstruction of charged particles produced at large distances from the interaction point. Moreover, data-driven methods to evaluate vertex resolution, fake rates, track reconstruction inefficiencies in dense environments, and track parameter resolutions and biases will be shown. Luminosity increases in 2017 and beyond will also provide challenges for the detector systems and offline reconstruction, and strategies for mitigating the effects of increasing occupancy will be discussed.

      Speaker: Alejandro Alonso Diaz (University of Copenhagen (DK))
    • 25
      Expected Performance of the ATLAS Inner Tracker at the High-Luminosity LHC

      The large data samples at the High-Luminosity LHC will enable precise measurements of the Higgs boson and other Standard Model particles, as well as searches for new phenomena such as supersymmetry and extra dimensions. To cope with the experimental challenges presented by the HL-LHC such as large radiation doses and high pileup, the current Inner Detector will be replaced with a new all-silicon Inner Tracker for the Phase II upgrade of the ATLAS detector. The current tracking performance of two candidate Inner Tracker layouts with an increased tracking acceptance (compared to the current Inner Detector) of |η|<4.0, employing either an ‘Extended’ or ‘Inclined’ Pixel barrel, is evaluated. New pattern recognition approaches facilitated by the detector designs are discussed, and ongoing work in optimising the track reconstruction for the new layouts and experimental conditions are outlined. Finally, future approaches that may improve the physics and/or technical performance of the ATLAS track reconstruction for HL-LHC are considered.

      Speaker: Nora Emilia Pettersson (University of Massachusetts (US))
    • 26
      Expected Performance of tracking at HL-LHC CMS

      After 2020, CERN is planning an upgrade program for the LHC collider (HL-LHC) which will bring the luminosity up to 5x10^{34} cm^{−2}s^{−1}, almost five times that foreseen for 2017, meaning an average of more than 140 inelastic collisions superimposed on the event of interest. In this high-occupancy environment, reconstructing particle momenta with high precision is one of the biggest challenges. In order to face this new scenario (called Phase 2), the Compact Muon Solenoid (CMS) experiment will build a completely new silicon tracker and will need to implement new approaches to track finding, in addition to the algorithms already in use, to exploit the capabilities of the new tracker. In this talk the expected performance of CMS tracking at the HL-LHC will be presented.

      Speaker: Erica Brondolin (Austrian Academy of Sciences (AT))
    • 27
      Track reconstruction for the Mu3e experiment based on a novel Multiple Scattering fit

      The Mu3e experiment is designed to search for the lepton flavour
      violating decay $\mu^+ \rightarrow e^+e^-e^+$.
      The aim of the experiment is to reach a branching ratio sensitivity of $10^{-16}$.
      In a first phase, the experiment will be performed at an existing beam line
      at the Paul Scherrer Institute (Switzerland) providing $10^8$ muons per second,
      which will allow a sensitivity of $10^{-15}$ to be reached.
      The muons, with a momentum of about 28 MeV/c, are stopped and decay at
      rest on a target.
      The decay products (positrons and electrons) with energies below 53 MeV
      are measured by a tracking detector consisting of two double layers of
      50 $\mu$m thin silicon pixel sensors.
      The high granularity of the pixel detector, with a pixel size of
      $80\times80$ $\mu$m$^2$, allows for precise track reconstruction in the
      high-occupancy environment of the Mu3e experiment, reaching 100 tracks
      per reconstruction frame of 50 ns in the final phase of the experiment.
      The Mu3e track reconstruction uses a novel fit algorithm that in
      the simplest case takes into account only multiple scattering, allowing
      fast online tracking on a GPU-based filter farm.
      The implementation of the 3-dimensional multiple scattering fit based on hit
      triplets is described.
      The extension of the fit that takes into account energy losses and pixel size
      is used for offline track reconstruction.
      The algorithm and performance of the offline track reconstruction based on
      a full Geant4 simulation of the Mu3e detector are presented.
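
      The geometric core of such a triplet-based fit, the circle through three hits in the transverse plane, can be sketched as follows (illustrative only; the actual fit minimizes the multiple scattering angles at the middle hit rather than assuming exact hits):

        import numpy as np

        def circle_through(p1, p2, p3):
            """Circle through three 2D hits: solve the two perpendicular-bisector
            equations 2*(pj - p1) . c = |pj|^2 - |p1|^2 for the center c."""
            p1, p2, p3 = map(np.asarray, (p1, p2, p3))
            A = 2 * np.array([p2 - p1, p3 - p1])
            b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
            center = np.linalg.solve(A, b)
            return center, np.linalg.norm(p1 - center)

        # three hits on a toy low-momentum arc
        center, radius = circle_through((1.0, 0.1), (2.0, 0.35), (3.0, 0.75))
        print("center", center.round(3), "radius", round(radius, 3))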

      Speaker: Dr Alexandr Kozlinskiy (Kernphysik Institut, JGU Mainz)
    • 28
      Announcement
      Speaker: David Rousseau (LAL-Orsay, FR)
    • 11:05
      Coffee break
    • 29
      Parameterization-based tracking for the P2 experiment

      The P2 experiment in Mainz (Germany) aims to determine the weak mixing angle at low momentum transfer with unprecedented precision. The approach of P2 is to measure the parity-violating asymmetry of elastic electron-proton scattering, from which the weak charge of the proton, and thus the weak mixing angle, can be evaluated.

      In P2, an electron beam (150 µA, 155 MeV) of alternating polarization will be scattered on a 60 cm long liquid H$_{2}$ target, and the total current of the elastically scattered electrons will be measured with a fused silica Cherenkov detector. A tracking system is necessary to measure the momentum transfer distribution of the elastically scattered electrons, to validate the simulated acceptance of the Cherenkov detector, and to control other systematic uncertainties. Although the tracking system is not required to work at the full beam rate, every attempt is made to achieve the highest possible rate capability.

      The tracking system will consist of 4 layers of high-voltage monolithic active pixel sensors (HV‑MAPS), with 80 x 80 µm$^2$ pixel size, time resolution about 10 ns, and rate capability about 30 MHz/cm$^2$ (DAQ-limited).

      At the full beam rate every reconstruction frame (45 ns long) will contain around 800 signal tracks and 16 000 background hits from bremsstrahlung photons. In order to cope with this extreme combinatorial background online, a parameterization-based tracking has been developed.

      Performance evaluations on simulated data show that the parameterization-based tracking requires fitting no more than 2 track candidates per signal track at the full beam rate (whereas a simple implementation of track following requires fitting more than 100 candidates per signal track already at 2% of the full beam intensity). Moreover, the parameterization-based approach makes it possible to replace the computation-heavy rigorous fit in the inhomogeneous field by the evaluation of a third-order polynomial in a 3-dimensional space, with a negligible loss in accuracy.

      The implementation, as well as performance evaluations of the parameterization-based tracking, will be presented.
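
      The idea can be illustrated with a one-dimensional toy (hypothetical names and a stand-in propagator; the real map is a third-order polynomial in three variables): fit a polynomial to simulated propagations once, then evaluate it instead of stepping through the field:

        import numpy as np

        # Toy parameterization-based extrapolation: learn a 3rd-order polynomial
        # map from an initial track parameter to a downstream coordinate,
        # replacing a (here fake) expensive stepping through the field.

        rng = np.random.default_rng(1)

        def slow_propagate(theta):            # stand-in for rigorous stepping
            return np.sin(theta) + 0.05 * theta**3

        theta_train = rng.uniform(-0.5, 0.5, 2000)
        x_train = slow_propagate(theta_train)
        coeffs = np.polynomial.polynomial.polyfit(theta_train, x_train, deg=3)

        theta_test = rng.uniform(-0.5, 0.5, 1000)
        x_fast = np.polynomial.polynomial.polyval(theta_test, coeffs)
        err = np.max(np.abs(x_fast - slow_propagate(theta_test)))
        print(f"max parameterization error: {err:.2e}")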

      Speaker: Dr Iurii Sorokin (PRISMA Cluster of Excellence and Institute of Nuclear Physics, Johannes Gutenberg University, Mainz, Germany)
    • 30
      Status of study on tracker and tracking at CEPC

      The Conceptual Design Report study for a high-energy Circular Electron Positron Collider (CEPC) as a Higgs and/or Z factory is in progress, and tracker research is one of its important parts. Based on the ILD design using a TPC, the study group is studying how flavour-tagging performance varies with the tracker parameters, in order to optimize the tracker design. In parallel, a preliminary design of a full-silicon tracker is also being considered for CEPC. Studies of the tracking algorithms and their performance under a digitization with multiple hits are in progress to improve the tracking performance.

      Speaker: Dr Chengdong Fu (CEPC)
    • 31
      Young Scientist Forum : Online Track and Vertex Reconstruction on GPUs for the Mu3e Experiment

      The Mu3e experiment searches for the lepton flavour violating decay $\mu^+ \rightarrow e^+e^-e^+$,
      aiming at a branching ratio sensitivity better than $10^{-16}$. To reach this
      sensitivity, muon rates above $10^9 \mu/s$ are required. A high precision silicon tracking detector combined with excellent timing resolution from
      scintillating fibers and tiles will measure the momenta, vertices and timing
      of the decay products of muons stopped in the target to suppress background.

      During the first phase of the experiment, a muon rate of $10^8 \mu/s$ will be
      available, resulting in a rate of $\sim$10 GB/s of zero-suppressed
      data. The trigger-less readout system consists of optical links and switching
      FPGAs sending the complete
      detector data for a time slice to one node of the filter farm.
      Since we can only store $\sim$ 100 MB/s of data, a full online reconstruction is necessary for an event selection. This is the ideal situation to
      make use of the highly parallel structure of graphics
      processing units (GPUs).
      An FPGA inside the filter farm PC therefore
      transfers the event data to the main memory of the PC and then to GPU memory via PCIe direct memory access. The GPU
      finds and fits tracks using a non-iterative 3D tracking algorithm for multiple scattering
      dominated resolution. For three hits from subsequent detector planes, a helix
      is fitted by assuming that multiple scattering at the middle hit is the only
      source of uncertainty.
      In a second step, a three track vertex selection is performed by calculating the
      vertex position from the intersections of the tracks in the plane
      perpendicular to the beam axis and weighting them by the uncertainties from
      multiple scattering and pixel pitch.
      Together with kinematic cuts this allows for a reduction
      of the output data rate to below 100 MB/s by removing combinatorial background.

      The talk will focus on the implementation of the track fit and vertex selection on the GPU and performance studies will be presented.

      Speaker: Dorothea vom Bruch (Mainz University)
    • 32
      Young Scientist Forum : Comparison of pattern recognition methods for the SHiP Spectrometer Tracker

      SHiP is a new general-purpose fixed-target facility proposed at the CERN SPS accelerator to search for particles predicted by Hidden Portals. The SHiP detector consists of a spectrometer located downstream of a large decay volume. It contains a tracker whose purpose is to reconstruct charged particles from the decay of neutral New Physics objects with high efficiency, while rejecting background events. In this talk we will demonstrate how different track pattern recognition methods can be applied to the SHiP detector geometry. We will compare their reconstruction efficiency, ghost rate and accuracy of the track momentum reconstruction. Limitations of methods based on 2D projections of a track will be discussed, and an approach performing the pattern recognition directly in 3D will be presented. It will be shown how a track reconstruction efficiency above 99% can be achieved.

      Speaker: Mikhail Hushchyn (Yandex School of Data Analysis (RU))
    • 13:00
      Lunch break
    • 33
      New Track Seeding Techniques at the CMS Experiment

      Starting from 2019 the Large Hadron Collider will undergo upgrades in order to increase its luminosity.

      Many of the algorithms executed during track reconstruction scale linearly with pileup. Others, like seeding, will come to dominate the execution time due to their combinatorial complexity, which grows factorially with pileup.

      We will show the results of the effort in reducing the effect of pile-up in CMS Tracking by

      • exploiting new information coming from an upgraded tracker detector: Vector Hits

      • redesigning the seeding with novel algorithms which are intrinsically parallel

      • executing these new algorithms on massively parallel architectures.

      Speaker: Mr Felice Pantaleo (CERN - Universität Hamburg)
    • 34
      Fast and precise parametrization for extrapolation through a magnetic field

      Extrapolation of trajectories through a magnetic field is needed at various stages of pattern recognition or track fitting procedures, with more or less precision. The Runge-Kutta method implies many calls to a field function and it is generally time consuming. In practice the trajectories may be split in steps between a few predefined surfaces, with possible additional short segments to match the actual measurements. On the other hand, in a collider, the particles used for physics analysis have a small impact parameter with respect to the origin; so, when crossing a given surface, they cover a small part of the phase space describing the local state. As a result, the extrapolation to another surface may be expanded as a polynomial function of the initial parameters within convenient ranges; these ranges and the degrees of the expansion may be tuned to find the best compromise between the precision and the fraction of particles within the domain of validity. An example of precomputed tables of coefficients is given for a long range extrapolation in a detector covering the forward region of a collider, inspired by the LHCb configuration.
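
      For context, a schematic version of the Runge-Kutta stepping that such a parameterization replaces (illustrative field, units and step size; not the LHCb code) is shown below:

        import numpy as np

        KAPPA = 0.3   # GeV/(T m): curvature constant, p[GeV] ~ 0.3 B[T] R[m]

        def field(pos):               # illustrative inhomogeneous dipole field
            return np.array([0.0, 1.0 / (1.0 + 0.1 * pos[2]**2), 0.0])  # Tesla

        def deriv(state, q_over_p):
            """state = (x, y, z, tx, ty, tz); derivative along path length s."""
            pos, d = state[:3], state[3:]
            return np.concatenate([d, KAPPA * q_over_p * np.cross(d, field(pos))])

        def rk4_step(state, q_over_p, h):
            # one RK4 step: four field-function evaluations
            k1 = deriv(state, q_over_p)
            k2 = deriv(state + 0.5 * h * k1, q_over_p)
            k3 = deriv(state + 0.5 * h * k2, q_over_p)
            k4 = deriv(state + h * k3, q_over_p)
            return state + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        state = np.array([0., 0., 0., 0., 0., 1.])   # start along z
        for _ in range(100):                          # 5 GeV track, 1 cm steps
            state = rk4_step(state, q_over_p=1 / 5.0, h=0.01)
        print("position after 1 m:", state[:3].round(4))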

      Speaker: Pierre Billoir (Laboratoire de Physique Nucléaire et Hautes Energies (LPNHE))
    • 35
      Combination of various data analysis techniques for efficient track reconstruction in very high multiplicity events

      Present data taking conditions and further
      upgrades of high energy particle colliders, as well as detector systems, call for new ideas. A novel combination of
      established data analysis techniques for charged-particle reconstruction is
      proposed. It uses all information available in a collision event while keeping
      competing choices open as long as possible.

      Suitable track candidates are selected by transforming measured hits to a
      binned, three- or four-dimensional, track parameter space. It is accomplished
      by the use of templates taking advantage of the translational and rotational
      symmetries of the detectors. Track candidates and their corresponding hits
      usually form a highly connected network, a bipartite graph, where we allow for
      multiple hit to track assignments. In order to get a manageable problem, the
      graph is cut into very many subgraphs by removing a few of its vulnerable
      components, edges and nodes ($k$-connectivity). Finally the hits of a subgraph
      are distributed among the track candidates by exploring a deterministic
      single-player game tree. A depth-limited search is performed with a sliding
      horizon maximizing the number of hits on tracks, and minimizing the sum of track-fit
      $\chi^2$.

      Simplified but realistic models of LHC silicon trackers including the relevant
      physics processes are used to test and study the performance (efficiency,
      purity, timing, parallelisation) of the proposed method in the case of numerous simultaneous
      proton-proton collisions (high pileup), and for single ion-ion collisions at
      the highest available energies.

      Speaker: Ferenc Siklér (Wigner RCP, Budapest (HU))
    • 36
      Young Scientist Forum : Bivariate normal distribution for finding inliers in Hough space for a Time Projection Chamber

      A Time Projection Chamber (TPC) is foreseen as the main tracking detector of the International Large Detector (ILD), one of the two detector concepts for the next candidate collider, the International Linear Collider (ILC).

      GridPix, a combination of a micro-pattern gaseous detector with a pixelised readout system, is one of the candidate readout systems for the TPC [1]. One of the challenges in track reconstruction is the large number of individual hits along the track (around 100 per cm). Due to the small pixel size of 55 x 55 $\mu m^2$, the hits are not consecutive. This leads to the challenge of assigning the individual hits to the correct track. Hits within a given distance from a reconstructed track are called inliers. Consequently, finding inliers among the many hits and noise is difficult for pattern recognition, and this difficulty is increased by diffusion effects in the TPC.

      One of the algorithms currently utilized for track finding is the Hough transform. Using a bivariate normal distribution based on the covariance matrix calculated from the diffusion effects improves the collection of inliers directly in Hough space [2].
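
      Such covariance-weighted voting can be sketched as follows (a toy accumulator with illustrative parameters, not the implementation of [2]): instead of incrementing a single bin, each hit deposits a bivariate normal footprint:

        import numpy as np

        def gaussian_vote(acc, mu, cov):
            """Deposit a bivariate-normal footprint (mean mu, covariance cov
            from the expected diffusion) into the Hough accumulator instead
            of a single +1."""
            ny, nx = acc.shape
            yy, xx = np.mgrid[0:ny, 0:nx]
            d = np.stack([yy - mu[0], xx - mu[1]], axis=-1)
            inv = np.linalg.inv(cov)
            m = np.einsum('...i,ij,...j->...', d, inv, d)   # Mahalanobis^2
            acc += np.exp(-0.5 * m) / (2 * np.pi * np.sqrt(np.linalg.det(cov)))

        acc = np.zeros((50, 50))
        cov = np.array([[4.0, 1.5], [1.5, 2.0]])              # diffusion model
        for mu in [(20.2, 30.5), (21.0, 29.8), (19.5, 30.9)]: # hits, one track
            gaussian_vote(acc, mu, cov)
        peak = np.unravel_index(acc.argmax(), acc.shape)
        print("accumulator peak at bin", peak)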

      References

      [1] Michael Lupberger, The Pixel-TPC: A feasibility study, PhD thesis, Bonn University, Bonn, Germany, August 2015.

      [2] Leandro A. F. Fernandes and Manuel M. Oliveira, Real-time line detection through an improved Hough transform voting scheme, Pattern Recognition 41(1) (2008) 299-314.

      Speaker: Mr Amir Noori Shirazi (Siegen University)
    • 16:15
      Coffee break
    • 37
      The track finding algorithm of the Belle II vertex detectors

      Belle II is a multipurpose detector which will be operated at the asymmetric B-factory SuperKEKB (Japan). The unprecedented instantaneous luminosity of up to $8\times 10 ^{35} \text{cm}^{-2} \text{s}^{-1}$ provided by the accelerator, together with a level-1 trigger rate in the range of $30 \text{kHz}$, will pose extreme requirements on the sub-detectors of Belle II and on the track finding algorithms. Track position information close to the interaction point is provided by the vertex detector (VXD), which consists of 2 layers of pixel detectors (PXD) and 4 layers of double-sided silicon strip detectors (SVD).

      The track finding code for the VXD of Belle II implements in an efficient way the Sector Map concept originally proposed by Rudolf Frühwirth. The typical event recorded by the VXD will be dominated by random hits produced by beam background (of the order of 500 pixel hits per event per sensor on the inner layer of the PXD; 20 GByte/s are required to read out the whole PXD). In this harsh environment the pattern recognition algorithm for the VXD has to be capable of efficiently and quickly recognizing the 11 tracks of a typical Y(4S) event.

      The track finding algorithm I will present will be used both for the final reconstruction of the event and at the High Level Trigger stage for the definition of the Regions Of Interest on the PXD sensors, which are used to reduce the PXD data stream to a manageable level. This latter task puts further constraints on the reliability and time consumption of the track finding algorithm. I will present the main concepts of the algorithm and some details of its implementation, together with its current performance.

      Speaker: Thomas Lueck (University of Pisa)
    • 38
      Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Processors and GPUs

      For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. Broad-based efforts from manufacturers and developers have been devoted to making these processors user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as specialized vector or SIMD units, requires special care in algorithm design and code optimization.
      One of the most computationally challenging problems in high-energy particle experiments is finding and fitting the charged-particle tracks during event reconstruction. This is expected to become by far the dominant problem in the High-Luminosity Large Hadron Collider (HL-LHC), for example. Today the most common track finding methods are those based on the Kalman filter. Experience with Kalman techniques on real tracking detector systems has shown that they are robust and provide high physics performance. This is why they are currently in use at the LHC, both in the trigger and offline.
      Previously we reported on the significant parallel speedups that resulted from our investigations to adapt Kalman filters to track fitting and track building on Intel Xeon and Xeon Phi. We continue to make progress toward understanding these processors while progressively introducing more realistic physics. These processors, in particular Xeon Phi, provided a good foundation for porting these algorithms to NVIDIA GPUs, for which parallelization and vectorization are of utmost importance. The challenge lies mostly in the ability to feed these graphical devices with enough data to keep them busy. We also discuss strategies for minimizing code duplication while still keeping the previously cited algorithms as close to the hardware as possible.
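      As a minimal sketch of the vectorization idea (NumPy here; the actual implementation uses C++ with SIMD intrinsics and matrix sizes specific to the detector), a Kalman update can be applied to a whole batch of tracks at once:

          import numpy as np

          def batched_kalman_update(x, P, z, H, R):
              """x: (N, n) states, P: (N, n, n) covariances,
              z: (N, m) measurements, H: (m, n), R: (m, m)."""
              y = z - x @ H.T                         # (N, m) residuals
              S = H @ P @ H.T + R                     # (N, m, m) residual covariances
              K = P @ H.T @ np.linalg.inv(S)          # (N, n, m) gain matrices
              x_new = x + np.einsum('nij,nj->ni', K, y)
              P_new = P - K @ H @ P
              return x_new, P_new

          N, n, m = 1024, 4, 2                        # 1024 tracks per batch
          x = np.zeros((N, n)); P = np.tile(np.eye(n), (N, 1, 1))
          H = np.eye(m, n); R = 0.1 * np.eye(m)
          z = np.random.randn(N, m)                   # toy measurements
          x, P = batched_kalman_update(x, P, z, H, R)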

      Speaker: Matthieu Lefebvre (Princeton University (US))
    • 39
      Fast, Parallel and Parametrized Kalman Filters for LHCb upgrade

      By 2020 the LHCb experiment will be upgraded to run at an instantaneous luminosity of $2\times10^{33}\,\text{cm}^{-2}\text{s}^{-1}$, increased by a factor of 5. The hardware trigger will be removed and replaced by a fully software-based stage. This will dramatically increase the rate of collisions the software trigger system has to process. Additionally, the increased luminosity will lead to a higher number of tracks that need to be reconstructed per collision.
      The Kalman filter, which is employed to extract the track parameters, currently consumes a major part of the reconstruction time in the trigger software and, therefore, needs further optimization.
      We investigate two non-competing strategies to speed up the current version of the filter. The first is an algorithm that makes use of several different levels of SIMD instructions on different processor architectures to fit multiple tracks in parallel.
      The second is to replace the computationally costly use of magnetic-field and material look-up tables, including a Runge-Kutta method for calculating the extrapolation, with simple parametrizations of every extrapolation step.
      Details and performance studies are presented for both strategies.
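      The parametrization strategy can be sketched as follows (toy NumPy example with an invented stand-in for the Runge-Kutta propagation; the real fit is over the full LHCb track state and field map):

          import numpy as np

          def runge_kutta_extrapolate(tx, qop):
              # stand-in for the expensive propagation through the
              # magnetic-field and material maps (invented toy function)
              return tx + 0.3 * qop + 0.05 * qop * tx**2

          # Fit a polynomial parametrization once, offline, on sampled
          # propagations ...
          tx = np.random.uniform(-0.3, 0.3, 5000)     # track slope
          qop = np.random.uniform(-1.0, 1.0, 5000)    # charge over momentum
          target = runge_kutta_extrapolate(tx, qop)
          A = np.stack([np.ones_like(tx), tx, qop, qop * tx**2], axis=1)
          coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)

          def fast_extrapolate(tx, qop):
              # ... then each online step costs a handful of multiply-adds
              return coeffs @ np.array([1.0, tx, qop, qop * tx**2])

          print(fast_extrapolate(0.1, 0.5), runge_kutta_extrapolate(0.1, 0.5))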

      Speaker: Simon Benedikt Stemmle (Ruprecht-Karls-Universitaet Heidelberg (DE))
    • 40
      Dinner at Atelier de Maitre Albert, Paris

      http://www.ateliermaitrealbert.com

    • 41
      Hough-transform-based curling track finding for BESIII and COMET multi-turn track fitting

      In order to overcome the difficulty posed by curling charged tracks in the BESIII drift chamber, we introduce a Hough-transform-based tracking method. This method is used as a supplement to find low-transverse-momentum tracks. The tracking algorithm is implemented in C++ in BOSS (the BESIII offline software system) and its performance has been checked with both Monte Carlo and real data. We show that this tracking method can enhance the reconstruction efficiency in the low-transverse-momentum region.
      Track reconstruction in the COMET Phase-I drift chamber is characterized by a large number of multi-turn curling tracks. We present a method based on the Deterministic Annealing Filter (DAF) that implements a global competition, assigning the detector measurements to track hypotheses on different turns. The method has been studied on simulated tracks in the COMET drift chamber, and we show that it is a promising candidate for assigning hits to the correct track turn.
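      A minimal sketch of the Hough voting at the heart of such a method (plain Python; the BOSS implementation is in C++ and uses drift-time information, which is ignored here): every hit votes for the circle parameters it is compatible with, and peaks in the accumulator yield track candidates.

          import numpy as np

          def hough_circle_accumulator(hits, n_alpha=180, n_r=100, r_max=80.0):
              """Vote in (alpha, r): circles through the origin with center
              at distance r along angle alpha obey x^2 + y^2 = 2r(x cos a + y sin a)."""
              acc = np.zeros((n_alpha, n_r), dtype=int)
              alphas = np.linspace(0.0, 2 * np.pi, n_alpha, endpoint=False)
              for x, y in hits:
                  with np.errstate(divide='ignore'):
                      r = (x * x + y * y) / (2.0 * (x * np.cos(alphas)
                                                    + y * np.sin(alphas)))
                  ok = (r > 0) & (r < r_max)          # drop unphysical radii
                  acc[np.where(ok)[0], (r[ok] / r_max * n_r).astype(int)] += 1
              return acc

          hits = [(1.0, 1.0), (2.0, 1.8), (3.0, 2.4)]       # toy hit positions
          acc = hough_circle_accumulator(hits)
          print(np.unravel_index(acc.argmax(), acc.shape))  # candidate (alpha, r) bin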

      Speaker: Prof. Ye Yuan (IHEP, CAS, China)
    • 42
      Flavour Tagging using Deep Neural Networks in Belle II

      Machine learning techniques have been successfully utilized in data processing and analysis for decades.
      Hand in hand with the "deep learning revolution", the importance of Neural Networks in this area is still growing.

      One advantage of employing a neural network is that certain features do not have to be engineered manually but are learned as internal representations of the network.
      This enables a better exploitation of correlations between input variables and also allows a larger input-variable space than other techniques. Modern machine learning frameworks make the construction and usage of these techniques ever more accessible.

      This presentation covers the successful deployment of a Deep Neural Network for the discrimination of neutral $B$ and $\bar{B}$ mesons at the Belle II experiment, the so-called flavour tagging. Implementation and results of this approach will be presented.
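      A minimal sketch of such a network (PyTorch; layer sizes and the number of input variables are illustrative assumptions, not the Belle II configuration):

          import torch
          import torch.nn as nn

          model = nn.Sequential(
              nn.Linear(30, 128), nn.ReLU(),    # 30 tagging variables (assumed)
              nn.Linear(128, 128), nn.ReLU(),
              nn.Linear(128, 1), nn.Sigmoid(),  # output: P(B) vs P(B-bar)
          )
          loss_fn = nn.BCELoss()
          opt = torch.optim.Adam(model.parameters(), lr=1e-3)

          x = torch.randn(256, 30)              # toy batch of events
          y = torch.randint(0, 2, (256, 1)).float()
          for _ in range(10):                   # a few toy training steps
              opt.zero_grad()
              loss = loss_fn(model(x), y)
              loss.backward()
              opt.step()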

      Speaker: Jochen Gemmler (KIT/IEKP)
    • 43
      Track vertex reconstruction with neural networks at the first level trigger of Belle II

      The track trigger is one of the main components of the Belle II first level trigger, taking input from the central drift chamber. It consists of several steps: first, hits are combined into track segments, followed by 2D track finding in the transverse plane and finally a 3D track reconstruction. The results of the track trigger are the track multiplicity, the momentum vector of each track and the longitudinal displacement of the vertex of each track (the "z-vertex"). The latter allows the rejection of background tracks from outside of the interaction region and thus the suppression of a large fraction of the machine background. The contribution focuses on the track finding step using Hough transforms and on the z-vertex reconstruction with neural networks. We describe the algorithms and show performance studies on simulated events.
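      A minimal sketch of the z-vertex network (PyTorch; the input and hidden sizes below are assumptions, and the real trigger runs an equivalent network in hardware with fixed-point arithmetic and fixed latency):

          import torch
          import torch.nn as nn

          net = nn.Sequential(
              nn.Linear(27, 81), nn.Tanh(),     # sizes are assumptions
              nn.Linear(81, 1),                 # output: z of the track vertex
          )
          track_features = torch.randn(4, 27)   # e.g. wire IDs and drift times
          z = net(track_features)
          keep = z.abs() < 15.0                 # reject out-of-region tracks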

      Speaker: Sara Neuhaus
    • 44
      Young Scientist Forum : Weakly supervised classifiers in High Energy Physics

      As machine learning algorithms become increasingly sophisticated in exploiting subtle features of the data, they often become more dependent on simulations. This paper presents a new approach to classification called weak supervision, in which class proportions are the only input to the machine learning algorithm. A simple and general regularization technique is used to solve this non-convex problem. Using one of the most important binary classification tasks in high energy physics - quark versus gluon tagging - we show that weak supervision can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weak supervision is a general procedure that could be applied to a variety of learning problems and, as such, could add robustness to a wide range of applications.
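      The core idea can be sketched in a few lines (PyTorch; an illustration of learning from label proportions, not the authors' exact objective or regularization): the classifier never sees per-jet labels, only the known quark fraction of each training sample.

          import torch
          import torch.nn as nn

          model = nn.Sequential(nn.Linear(3, 32), nn.ReLU(),
                                nn.Linear(32, 1), nn.Sigmoid())
          opt = torch.optim.Adam(model.parameters(), lr=1e-3)

          def proportion_loss(outputs, fraction):
              # match the batch-averaged prediction to the known class proportion
              return (outputs.mean() - fraction) ** 2

          for fraction in (0.2, 0.8):           # two samples with known purity
              x = torch.randn(512, 3)           # jet features (e.g. n_trk, width, mass)
              loss = proportion_loss(model(x), torch.tensor(fraction))
              opt.zero_grad(); loss.backward(); opt.step()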

      Speaker: Francesco Rubbo (SLAC National Accelerator Laboratory (US))
    • 45
    • 11:05
      Coffee break
    • 46
      Status of the ACTS (A Common Tracking Software) project

      A Common Tracking Software (ACTS) is a project that aims to preserve the highly performant track reconstruction code from the first LHC era and to prepare the code and concepts for long-term maintainability and adaptation to future architectures. It is based primarily on the ATLAS Common Tracking Software, but has been decoupled to be detector- and framework-agnostic. ACTS supports several detector geometry backends and a plug-in mechanism for track parameterisation, identification schema and measurement definitions. It aims to provide a toolbox for track reconstruction including track propagation, track fitting and pattern recognition, as well as a fast simulation module. Dedicated care is taken that the code runs in a concurrent environment and complies with the C++14 standard. We present first test suites for different detector geometries and results from multithreaded test applications.
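      As a generic illustration of one such toolbox ingredient, track propagation (plain NumPy; this is not the ACTS API), a fourth-order Runge-Kutta step transports a charged particle through a magnetic field map:

          import numpy as np

          def derivative(state, q_over_p, field):
              # simplified equation of motion: position changes along the
              # direction, direction bends with q/p x B (toy units)
              pos, dirn = state[:3], state[3:]
              return np.concatenate([dirn, q_over_p * np.cross(dirn, field(pos))])

          def rk4_step(state, q_over_p, field, h):
              k1 = derivative(state, q_over_p, field)
              k2 = derivative(state + 0.5 * h * k1, q_over_p, field)
              k3 = derivative(state + 0.5 * h * k2, q_over_p, field)
              k4 = derivative(state + h * k3, q_over_p, field)
              return state + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

          field = lambda pos: np.array([0.0, 0.0, 2.0])      # toy 2 T solenoid
          state = np.array([0, 0, 0, 0.6, 0.8, 0.0], float)  # (x, y, z, dx, dy, dz)
          for _ in range(100):
              state = rk4_step(state, q_over_p=0.3, field=field, h=0.01)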

      Speaker: Andreas Salzburger (CERN)
    • 47
      HEP.TrkX project: DNNs for HL-LHC online and offline tracking

      Particle track reconstruction in dense environments such as the detectors of the HL-LHC is a challenging pattern recognition problem. Traditional tracking algorithms such as the combinatorial Kalman Filter have been used with great success in LHC experiments for years. However, these state-of-the-art techniques are inherently sequential and scale poorly with the expected increases in detector occupancy in the HL-LHC conditions.

      The HEP.TrkX project is a pilot project with the aim to identify and develop cross-experiment solutions based on machine learning algorithms for track reconstruction. Machine learning algorithms bring a lot of potential to this problem thanks to their capability to model complex non-linear data dependencies, to learn effective representations of high-dimensional data through training, and to parallelize easily on high-throughput architectures such as GPUs.

      This contribution will describe our initial explorations into this relatively unexplored idea space. We will discuss the use of recurrent (LSTM) and convolutional neural networks to find and fit tracks in toy data and LHC-like data generated with ACTS.
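      A toy sketch in this spirit (PyTorch; shapes and the next-hit-prediction framing are assumptions for illustration, not the HEP.TrkX code): an LSTM reads the hits of a track layer by layer and predicts the position expected on the following layer.

          import torch
          import torch.nn as nn

          class NextHitPredictor(nn.Module):
              def __init__(self, hidden=64):
                  super().__init__()
                  self.lstm = nn.LSTM(input_size=3, hidden_size=hidden,
                                      batch_first=True)
                  self.head = nn.Linear(hidden, 3)

              def forward(self, hits):            # hits: (batch, n_layers, 3)
                  out, _ = self.lstm(hits)
                  return self.head(out)           # per-layer next-hit prediction

          model = NextHitPredictor()
          hits = torch.randn(16, 10, 3)           # 16 toy tracks, 10 layers
          pred = model(hits)
          loss = nn.functional.mse_loss(pred[:, :-1], hits[:, 1:])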

      Speaker: Steven Andrew Farrell (Lawrence Berkeley National Lab. (US))
    • 48
      Kalman Filter on IBM's TrueNorth

      As High Energy Physics (HEP) experiments extend the range of attainable luminosities to produce more particle tracks per bunch crossing than ever before, reconstructing the tracks produced in detectors from such interactions becomes more challenging and new methods of computation and data-handling are being explored.
      Additionally, understanding the portability of HEP algorithms to future commodity computing architectures is necessary to project future computing costs. A key algorithm in track reconstruction in multiple HEP experiments over the past 50 years is the Kalman filter. Implementing this algorithm in a neuromorphic architecture represents a first step in understanding the benefits and limitations of introducing such a device into the computational resources available to HEP experiments.

      This talk will outline the first instance of a Kalman filter implementation in IBM’s neuromorphic architecture, TrueNorth, for both parallel and serial spike trains. The implementation is tested on multiple simulated systems and its performance is evaluated with respect to an equivalent non-spiking Kalman filter. The limitations of implementing algorithms in neuromorphic neural networks are explored with respect to data encoding, weight representation, latency, and throughput.
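      The data-encoding question can be illustrated with a toy rate-coding example (plain Python, not TrueNorth code): a continuous filter input is represented by a spike train whose firing rate carries the value, so precision trades off directly against latency and throughput.

          import numpy as np

          rng = np.random.default_rng(0)

          def encode(value, n_ticks=256):
              """Rate-code a value in [0, 1] as Bernoulli spikes over n_ticks."""
              return rng.random(n_ticks) < value

          def decode(spikes):
              return spikes.mean()        # the firing rate recovers the value

          spikes = encode(0.375)
          print(decode(spikes))           # ~0.375, with O(1/sqrt(n_ticks)) noise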

      Speaker: Rebecca Carney (Stockholm University (SE))
    • 13:00
      Lunch break
    • 49
      Young Scientist Forum : Identification of Jets Containing b-Hadrons with Recurrent Neural Networks at the ATLAS Experiment

      A novel b-jet identification algorithm is constructed with a Recurrent Neural Network (RNN) at the ATLAS Experiment. This talk presents the expected performance of the RNN-based b-tagging in simulated $t \bar t$ and high-$p_T$ $Z’ \rightarrow b \bar b$ events. The RNN-based b-tagging processes properties of tracks associated with jets, represented as sequences. In contrast to traditional impact-parameter-based b-tagging algorithms, which assume the tracks of a jet are independent of each other, RNN-based b-tagging can exploit the spatial and kinematic correlations of tracks originating from the same b-hadron. The neural-network nature of the tagging algorithm also allows the flexibility of extending the input features to include more track properties than can be effectively used in traditional algorithms.
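      A generic sketch of such a tagger (PyTorch; the feature count, hidden size and inputs are assumptions, not the ATLAS configuration): the RNN consumes a variable-length track sequence per jet and outputs a b-jet probability.

          import torch
          import torch.nn as nn
          from torch.nn.utils.rnn import pack_padded_sequence

          class TrackRNNTagger(nn.Module):
              def __init__(self, n_feat=6, hidden=50):
                  super().__init__()
                  self.rnn = nn.LSTM(n_feat, hidden, batch_first=True)
                  self.out = nn.Linear(hidden, 1)

              def forward(self, tracks, lengths):
                  packed = pack_padded_sequence(tracks, lengths, batch_first=True,
                                                enforce_sorted=False)
                  _, (h, _) = self.rnn(packed)            # final state per jet
                  return torch.sigmoid(self.out(h[-1]))   # P(b-jet)

          tagger = TrackRNNTagger()
          tracks = torch.randn(8, 15, 6)    # 8 jets, up to 15 tracks each
          lengths = torch.tensor([15, 3, 7, 9, 2, 15, 5, 11])
          p_b = tagger(tracks, lengths)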

      Speaker: Zihao Jiang (Stanford University (US))
    • 50
      Deep Neural Nets and Bonsai BDTs in the LHCb pattern recognition
      Speaker: Adam Mateusz Dendek (AGH University of Science and Technology (PL))
    • 51
      Machine Learning approach to neutrino experiment track reconstruction : DUNE / uBooNE / NOvA

      The neutrino experiments discussed in this talk represent a category of event reconstruction problems very distinct from collider experiments. The two main differences are: i) the representation of the data, in the form of 2D, high-resolution, image-like projections, and ii) the nature of the neutrino interactions observed in the detectors, with their high diversity of topologies and the undefined location of the interaction point. In such conditions, two basic features of events have to be reconstructed: the flavor and the energy of the incident neutrino.

      DUNE and MicroBooNE are Liquid Argon Time Projection Chamber (LArTPC) based experiments, while NOvA's detector is made of liquid-scintillator-filled cells. The technology choices in these experiments result in different resolutions of the recorded "images"; however, the basic concepts in the reconstruction, and the application of machine learning techniques, are similar. I will discuss those, focusing on the expected DUNE detector conditions and the reconstruction approaches currently being developed.
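      A minimal sketch of the image-based approach (PyTorch; the image size and output classes are assumptions for illustration, not any experiment's network): a small CNN classifies a 2D wire-vs-time projection into event categories.

          import torch
          import torch.nn as nn

          cnn = nn.Sequential(
              nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
              nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
              nn.Flatten(),
              nn.Linear(32 * 16 * 16, 4),       # e.g. nu_e CC, nu_mu CC, NC, cosmic
          )
          images = torch.randn(8, 1, 64, 64)    # batch of toy projections
          logits = cnn(images)                  # train with nn.CrossEntropyLoss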

      Speaker: Robert Sulej (FNAL / NCBJ)
    • 52
    • 53
      Discussion on common tracking
    • 54
      Satellite meeting in room 101 : CWP Working Group meeting on Software Triggers and Event Reconstruction