This is a workshop on track reconstruction, tracking detectors with embedded intelligence and pattern recognition in sparsely sampled data. While the main focus will be on High Energy Physics (HEP) detectors, the workshop is intended to be inclusive across other disciplines wherever similar problems arise. Synergistic contributions from outside HEP on the topics listed below as well as machine learning, data mining and big data are welcome.
The 2019 edition is the 5th of the Connecting The Dots series, after Berkeley 2015, Vienna 2016, LAL-Orsay 2017 and Seattle 2018, and the 5th of the Workshop on Intelligent Trackers, after Berkeley 2010, Pisa 2012, Pennsylvania 2014 and LAL-Orsay 2017 (joint CTD/WIT).
The workshop consists of plenary sessions only, with a mix of invited talks and accepted contributions. There will also be a poster session.
WiFi is available on site; eduroam credentials from your institution or CERN are recommended (but not mandatory).
Follow us on Twitter @ctdwit; the official hashtag is #ctdwit2019.
The upgraded CMS Pixel Detector installed during the 2016-2017 extended year-end technical stop consists of four layers in the barrel region and three disks on each side in the forward region. This made it possible to perform a real fit on quadruplets of hits selected by a Cellular Automaton algorithm to form the pixel tracks. Pixel tracks are an important component of the CMS High-Level Trigger (HLT) reconstruction, since they are also used as seeds for additional tracking iterations that make use of the full tracker information. Good knowledge of the track parameters allows better cleaning and reduces the number of fake tracks, saving CPU time. In this talk, we will describe our experience in implementing two non-iterative, multiple-scattering-aware fit techniques in the CMS experiment: the Riemann Fit and the Broken Line Fit. We will compare their performance against the standard Run-2 reconstruction both in terms of timing and track-parameter resolution. We will also illustrate our experience in implementing both algorithms on GPU architectures, with particular emphasis on the engineering work needed to complete the integration.
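To make the Riemann Fit idea concrete, the following minimal sketch (Python/NumPy, not the CMS implementation) fits a circle to hit positions by lifting them onto a paraboloid and fitting a plane; all names and numbers are illustrative assumptions.

    import numpy as np

    def riemann_circle_fit(x, y):
        """Fit a circle to 2D hit positions via the Riemann (paraboloid) mapping:
        hits (x, y) are lifted onto z = x^2 + y^2, a plane is fitted to the lifted
        points, and its intersection with the paraboloid gives the circle."""
        z = x**2 + y**2
        pts = np.column_stack([x, y, z])
        centroid = pts.mean(axis=0)
        # Plane normal = direction of least variance of the centred point cloud.
        _, _, vt = np.linalg.svd(pts - centroid)
        n = vt[-1]
        c = -np.dot(n, centroid)                     # plane: n . p + c = 0
        # Substitute z = x^2 + y^2 to recover circle centre and radius.
        cx, cy = -n[0] / (2 * n[2]), -n[1] / (2 * n[2])
        r = np.sqrt(cx**2 + cy**2 - c / n[2])
        return cx, cy, r

    # Example: four hits on a circle of radius 3 centred at (1, 2).
    phi = np.array([0.1, 0.7, 1.3, 1.9])
    cx, cy, r = riemann_circle_fit(1 + 3*np.cos(phi), 2 + 3*np.sin(phi))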
Particle physicists at the Large Hadron Collider (LHC) investigate the properties of matter at length scales one million times smaller than the atom by colliding high-energy protons 40 million times per second and observing the decay products of the collisions. ATLAS is one of two general-purpose detectors that reconstruct the interactions and, as part of a wide range of physics goals, measures the production of Higgs bosons and searches for exotic new phenomena including supersymmetry, extra dimensions and dark matter.
Selecting interesting collision events using hardware- and software-based triggers is a major challenge, and reconstructing these collisions will only become more difficult as the LHC luminosity increases. The ATLAS Fast TracKer (FTK) is a custom electronics system that performs fast FPGA-based tracking of charged particles for use in trigger decisions. In 2018, an FTK "Slice" covering a portion of the ATLAS detector was installed and commissioned using proton-proton collisions. This presentation will review the track-finding and track-fitting strategies employed by the FTK hardware and present the first tracking performance results for the FTK Slice in 2018 pp collision data, including hit- and track-finding efficiencies, track-parameter resolutions, and track purities.
Track finding with a GPGPU-implemented fourth-order Runge-Kutta (RK) method is investigated to track electrons from muon decay in the COMET Phase-I drift chamber. The COMET Phase-I experiment aims to discover the neutrinoless, coherent transition of a muon to an electron in the field of an aluminium nucleus, $\mu^-N \rightarrow e^-N$, with a single-event sensitivity of $3\times10^{-15}$. In the COMET drift chamber, about 30-40% of signal events consist of multiple turns, where the correct assignment of hits to each turn is critical for track finding. Scanning all possible track seeds can resolve the hit-to-track assignment problem with high robustness against noise hits, but requires a huge computational cost: the initial track seeds $(\theta, z, p_{x}, p_{y}, p_{z})$ have broad uncertainties, so many initial seeds must be tried and compared. In this presentation, this problem of massive computation is mitigated by 1) parallel computing of the RK track propagation, which assigns each track to a GPU block unit, and 2) a better initial guess of the track seeds using the Hough transform and the geometrical properties of the cylindrical drift chamber. The speed-up compared to a CPU-only calculation will also be presented.
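As a rough illustration of the propagation step, a classical fourth-order Runge-Kutta integration of a charged track through a magnetic field can be sketched as follows (Python/NumPy with a unit-direction parametrization; a toy stand-in, not the GPU kernel used in the analysis).

    import numpy as np

    KAPPA = 0.299792458  # converts q*B [T] to curvature for p in GeV/c and s in metres

    def derivs(state, q_over_p, B):
        """state = (x, y, z, tx, ty, tz); returns d(state)/ds for a helix in field B."""
        t = state[3:]
        dtds = KAPPA * q_over_p * np.cross(t, B)
        return np.concatenate([t, dtds])

    def rk4_step(state, h, q_over_p, B):
        """One classical 4th-order Runge-Kutta step of length h along the trajectory."""
        k1 = derivs(state, q_over_p, B)
        k2 = derivs(state + 0.5*h*k1, q_over_p, B)
        k3 = derivs(state + 0.5*h*k2, q_over_p, B)
        k4 = derivs(state + h*k3, q_over_p, B)
        return state + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

    # Example: propagate a 0.1 GeV/c negative track through a 1 T field along z.
    B = np.array([0.0, 0.0, 1.0])
    state = np.array([0, 0, 0, 1/np.sqrt(2), 0, 1/np.sqrt(2)])   # origin, 45 degrees
    for _ in range(100):
        state = rk4_step(state, h=0.005, q_over_p=-1.0/0.1, B=B)  # 5 mm steps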
One of the major components of the Belle II trigger system is the neural network trigger. Its task is to estimate the z-vertex of particle tracks observed in the experiment's drift chamber. The trigger is implemented on FPGAs to ensure flexibility during operation and to leverage their I/O capabilities. At the same time, the implementation has to estimate the vertex in a few hundred nanoseconds to fulfill the requirements of the experiment. A first version of this trigger was operational during the first collisions. While it was able to estimate the vertex, it had some drawbacks regarding the achievable throughput and timing closure. These are the focus of this work, which modifies the original design to allow two networks running in parallel and less routing congestion. We rescheduled the multiply-and-accumulate operations, which are the basic operations in such networks. While the original design tried to parallelize as much as possible, the rescheduling reduces the number of parallel data transmissions by reusing processing modules. In this way the DSP resource consumption was reduced by 40%. To further increase the throughput by operating an additional network in parallel, we investigated the balanced use of SRAM-LUTs and DSPs for multiply-and-accumulate operations. With the resulting balancing ratio, the trigger is able to operate two neural networks in parallel on the targeted FPGA within the required latency.
In the High Luminosity LHC, planned to start with Run 4 in 2026, the ATLAS experiment will be equipped with the Hardware Track Trigger (HTT) system, a dedicated hardware system able to reconstruct tracks in the silicon detectors with short latency. The HTT will be composed of about 700 ATCA boards, based on new technologies available on the market, such as high-speed links and powerful FPGAs, as well as custom-designed Associative Memory ASICs (AM), which are an evolution of those used extensively in previous experiments and in the ATLAS Fast Tracker (FTK).
The HTT is designed to cope with the extremely high luminosity expected in the so-called L0-only scenario, where the HTT will operate at the L0 rate (1 MHz). It will provide good-quality tracks to the software High-Level Trigger (HLT), operating as a coprocessor and reducing the HLT farm size by a factor of 10 by lightening the load of the software tracking.
All ATLAS upgrade projects are also designed for an evolved, so-called "L0/L1" architecture, where part of the HTT is used in a low-latency mode (L1Track), providing tracks in regions of ATLAS at a rate of up to 4 MHz with a latency of a few microseconds. This second phase places very stringent requirements on the latency budget and on the dataflow rates.
All the requirements and specifications of this system have been assessed. The design of all the components has been reviewed and validated with preliminary simulation studies. After these validations are completed, the development of the first prototypes will start. In this paper we describe the status of the design review, showing the challenges and assessed specifications, towards the preparation of the first slice tests with real prototypes.
The growing exploration of machine learning algorithms in particle physics offers new solutions to simulation, reconstruction, and analysis. These new machine learning solutions often lead to increased parallelization and faster reconstruction times on dedicated hardware, here specifically Field Programmable Gate Arrays (FPGAs). We explore the possibility that applications of machine learning simultaneously solve the increasing computing challenges. Employing machine learning acceleration as a web service, we demonstrate a heterogeneous compute solution for particle physics experiments that requires minimal modification to the current computing model. First results with Project Brainwave by Microsoft Azure, using the ResNet-50 image classification model as an example, demonstrate inference times of approximately 50 (10) milliseconds with our experimental physics software framework using Brainwave as a cloud (edge) service. We also adapt the image classifier to example physics applications using transfer learning: jet identification in the CMS experiment and event classification in the NOvA neutrino experiment at Fermilab. The solutions explored here are potentially applicable sooner than may have been initially realized. We will also briefly present the status of real-time machine learning inference on FPGAs for the hardware trigger.
The RD53 Collaboration was established in 2013 to develop the next generation of pixel readout chips for the High Luminosity LHC detector upgrades. This proposal is to extend the scope of the collaboration to design the final pixel readout chip for the ATLAS and CMS upgrade detectors. A common design is proposed that can be fabricated with different pixel matrix sizes. The proposed work plan and resources are presented.
For the HL-LHC, the CMS and ATLAS collaborations have defined the detector geometry of their respective timing layers. Even though both collaborations have selected UFSDs in their baseline design, the requirements for the two experiments differ in key aspects such as pixel size, radiation hardness and number of layers. In this contribution we review the requirements and challenges in the design and production of the sensors for CMS and ATLAS, outlining similarities and differences.
It is foreseen to significantly increase the luminosity of the LHC by upgrading towards the HL-LHC (High Luminosity LHC) in order to harvest the maximum physics potential. In particular the Phase-II upgrade, foreseen for 2023, will mean unprecedented radiation levels, significantly beyond the limits of the silicon trackers currently employed. All-silicon central trackers are being studied in ATLAS, CMS and LHCb, with extremely radiation-hard silicon sensors to be employed on the innermost layers. Within the RD50 Collaboration, a large R&D program has been underway for more than a decade, across experimental boundaries, to develop silicon sensors with sufficient radiation tolerance for HL-LHC trackers.
Key areas of recent RD50 research include new sensor fabrication technologies such as High-Voltage (HV) CMOS, exploiting the wide availability of the CMOS process in the semiconductor industry at very competitive prices compared to the highly specialised foundries that normally produce particle detectors on small wafers. We also seek a deeper understanding of the connection between macroscopic sensor properties, such as the radiation-induced increase of leakage current, doping concentration and trapping, and the microscopic properties at the defect level. Another strong activity is the development of advanced sensor types like 3D silicon detectors, designed for the extreme radiation levels expected for the vertexing layers at the HL-LHC. A further focus area is the field of Low Gain Avalanche Detectors (LGADs), where a dedicated multiplication layer creating a high-field region is built into the sensor. LGADs are characterised by a high signal even after irradiation and a very fast signal compared to traditional silicon detectors, which makes them ideal candidates for the ATLAS and CMS timing layers at the HL-LHC.
We will present the state of the art in several silicon detector technologies as outlined above and at radiation levels corresponding to HL-LHC fluences and partially beyond. As an example, Figure 1 shows a summary of signal measurement results for irradiated LGAD silicon detectors (left), indicating a good radiation tolerance at high bias voltages, and efficiency measurements for 3D detectors (right) irradiated to $10^{15}\,\mathrm{n_{eq}/cm^2}$, demonstrating a high efficiency even at moderate bias voltages.
Fig. 1: Signal measurements on LGAD detectors irradiated to a range of fluences (left) and efficiency measurements for irradiated 3D detectors (right)
The LHC machine is planning an upgrade program which will bring the luminosity to about $5{-}7.5\times10^{34}~\mathrm{cm}^{-2}\mathrm{s}^{-1}$ in 2028, possibly reaching an integrated luminosity of 3000-4500 fb$^{-1}$ by the end of 2039. This High-Luminosity LHC scenario, HL-LHC, will require a preparation program of the LHC detectors known as the Phase-2 upgrade. The current CMS Outer Tracker, already running beyond design specifications, and the recently installed CMS Phase-1 Pixel Detector will not be able to survive the HL-LHC radiation conditions. Thus, CMS will need completely new devices in order to fully exploit the demanding operating conditions and the delivered luminosity. The new Outer Tracker should also have trigger capabilities. To achieve these goals, R&D activities have investigated different options for the Outer Tracker and for the pixel Inner Tracker. The developed solutions will allow tracking information to be included in the Level-1 trigger. The design choices for the Tracker upgrades are discussed along with some highlights of the technological choices and the R&D activities.
The LHCb Collaboration is planning an Upgrade II, a flavour physics experiment for the high luminosity LHC era. This will be installed in LS4 (2030) and targets an instantaneous luminosity of 1 to $2 \times 10^{34}$ cm$^{-2}$ s$^{-1}$, so as to collect an integrated luminosity of at least 300 fb$^{-1}$. Modest consolidation of the current experiment will also be introduced in LS3 (2025).
The higher luminosity increases the detector occupancy considerably, giving a challenging environment for track reconstruction, which is performed in real time in the trigger at the visible crossing rate of 30 MHz. To meet this challenge two major upgrades to the tracking system are foreseen. First, a "4D" vertex detector is proposed, which exploits both spatial and time information. The addition of timing information benefits the assignment of hits to tracks and also allows primary vertices to be separated using both spatial and time information. The second major change is to replace the inner part of the downstream tracking stations (which use fibres) with silicon technology. For this purpose CMOS and HV-CMOS technologies are being considered. The new 'Mighty' Tracker will cover an area of 20 square metres, much larger than any silicon detector previously constructed by LHCb.
In this talk the challenges of the LHCb Upgrade II program will be discussed and first results of performance studies presented.
For the Belle II experiment at the SuperKEKB asymmetric electron-positron collider (KEK, Japan), the concept of a first-level track trigger realized by neural networks is presented. Using the input from a traditional Hough-based 2D track finder, the stereo wire layers of the Belle II Central Drift Chamber are used to reconstruct, by neural methods, the origin of the tracks along the beam ("z") direction. A z-trigger for Belle II is required to suppress the dominating background of tracks originating outside of the collision point. Extensive training and testing using simulated tracks achieve resolutions below 2 cm in the high-$p_T$ region and below 5 cm in the low-$p_T$ region, sufficient for efficient background rejection. The importance of the correct drift-time input from Belle II's Event Time Finder for the optimal spatial resolution of the z-trigger is discussed. Background distributions from the first data taking with Belle II are analyzed to optimize suitable z-cuts for efficient background suppression.
Tracking in high-density environments, such as the core of TeV jets, is particularly challenging both because combinatorics quickly diverge and because tracks may no longer leave individual "hits" but rather large clusters of merged signals in the innermost tracking detectors. In the CMS collaboration, this problem has been addressed in the past with cluster-splitting algorithms, working layer by layer, followed by a pattern recognition step in which a high number of candidate tracks are tested. Modern deep learning techniques can handle the problem better by correlating information across multiple layers and directly providing proto-tracks without the need for an explicit cluster-splitting algorithm. Preliminary results will be presented with ideas on how to further improve the algorithms.
In the transition to Run 3 in 2021, LHCb will undergo a major luminosity upgrade, going from 1.1 to 5.6 expected visible Primary Vertices (PVs) per event, and will adopt a purely software trigger. This has fueled increased interest in alternative highly parallel and GPU-friendly algorithms for tracking and reconstruction. We will present a novel prototype algorithm for vertexing in the LHCb upgrade conditions.
We use a custom kernel to transform the sparse 3D space of hits and tracks into a dense 1D dataset, and then apply deep learning techniques to find PV locations. By training networks with several convolutional layers on our kernels, we have achieved better than 90% efficiency with no more than 0.2 False Positives (FPs) per event. Beyond its physics performance, this algorithm also provides a rich collection of possibilities for visualization and study of 1D convolutional networks. We will discuss the design, performance, and future potential areas of improvement and study, such as possible ways to recover the full 3D vertex information.
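For illustration only, a generic stand-in for such a kernel is a Gaussian density built along the beam axis from track z positions; peaks are PV candidates. All names and numbers below are assumptions, not the LHCb prototype.

    import numpy as np

    def z_kernel_histogram(track_z, track_sigma_z, z_range=(-100.0, 100.0), n_bins=4000):
        """Project each track onto the beam axis as a Gaussian kernel and sum the
        kernels into a dense 1D histogram (the 'sparse 3D -> dense 1D' step)."""
        edges = np.linspace(*z_range, n_bins + 1)
        centres = 0.5 * (edges[:-1] + edges[1:])
        hist = np.zeros(n_bins)
        for z0, sz in zip(track_z, track_sigma_z):
            hist += np.exp(-0.5 * ((centres - z0) / sz) ** 2) / (sz * np.sqrt(2*np.pi))
        return centres, hist

    # Toy example: tracks from two vertices at z = -12 mm and z = +7 mm.
    rng = np.random.default_rng(0)
    z = np.concatenate([rng.normal(-12, 0.1, 20), rng.normal(7, 0.1, 30)])
    centres, hist = z_kernel_histogram(z, np.full_like(z, 0.3))
    candidates = centres[hist > 0.5 * hist.max()]   # crude peak region; a CNN would refine this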
With the upgrade of the LHC to high luminosity, an increased rate of collisions will place a higher computational burden on track reconstruction algorithms. Typical algorithms such as the Kalman filter and Hough-like transformations scale worse than quadratically. However, the energy function of a traditional method for tracking, the geometric Denby-Peterson (Hopfield) network method, can be described as a quadratic unconstrained binary optimization (QUBO) problem. Quantum annealers have shown promise in their ability to solve QUBO problems, despite these problems being NP-hard. We present a novel approach to track reconstruction by applying a quantum-annealing-inspired algorithm to the Denby-Peterson method. We propose additional techniques to divide an LHC event into disjoint subgraphs in order to allow the problem to be embedded on existing quantum annealing hardware, using multiple anneals to fit tracks to a single event. To accommodate this dimension reduction, we use Bayesian methods and further algorithms to pre- and post-process the data. Results on the TrackML dataset are presented, demonstrating the successful application of quantum-annealing-inspired algorithms to the track reconstruction problem.
The D-Wave Systems Quantum Annealer (QA) finds the ground state of a Hamiltonian expressed as:
$$ O(a,b;q)= \sum_{i=1}^{N} a_i q_i + \sum_{i=1}^{N} \sum_{j < i} b_{ij} q_i q_j $$
This Quantum Machine Instruction (QMI) is equivalent to a Quadratic Unconstrained Binary Optimization (QUBO) and can be transformed easily into an Ising model or a Hopfield network.
Following Stimpfl-Abele[1], we expressed the problem of classifying track seeds (doublets and triplets) as a QUBO, where the weights depend on physical properties such as the curvature, 3D orientation, and length.
We generated QUBOs that encode the pattern recognition problem at the LHC using the TrackML dataset[2] and solved them using qbsolv[3] and the D-Wave Leap Cloud Service[4]. Those early experiments achieved a performance in terms of purity, efficiency, and TrackML score that exceeds 95%.
Our goal is to develop a strategy appropriate for HL-LHC track densities by using techniques including improved seeding algorithms and geographic partitioning. We also plan to refine our model in order to reduce execution time and to boost performance.
[1] "Fast track finding with neural networks - ScienceDirect." 1 Apr. 1991, https://www.sciencedirect.com/science/article/pii/001046559190048P. Accessed 29 Oct. 2018.
[2] "TrackML Particle Tracking Challenge | Kaggle." 17 Jan. 2018, https://www.kaggle.com/c/trackml-particle-identification. Accessed 29 Oct. 2018.
[3] "Partitioning Optimization Problems for Hybrid Classical ... - D-Wave." 9 Jan. 2017, https://www.dwavesys.com/sites/default/files/partitioning_QUBOs_for_quantum_acceleration-2.pdf. Accessed 29 Oct. 2018.
[4] "D-Wave Leap." https://cloud.dwavesys.com/leap/. Accessed 29 Oct. 2018.
To address the unprecedented scale of HL-LHC data, the HEP.TrkX project has been investigating a variety of machine learning approaches to particle track reconstruction. The most promising of these solutions, a graph neural network, processes the event as a graph that connects track measurements (detector hits corresponding to nodes) with candidate line segments between the hits (corresponding to edges). This architecture enables separate input features for edges and nodes, ultimately creating a hidden representation of the graph that is used to turn edges on and off, leaving only the edges that form tracks. Due to the large scale of this graph for an entire LHC event, we present new methods that allow the event graph to be scaled to a computationally reasonable size. We report the results of the graph neural network on the TrackML dataset, detailing the effectiveness of this model on event data with large pileup. Additionally, we propose post-processing methods that further refine the result of the graph neural network, ultimately synthesizing an end-to-end machine learning solution to particle track reconstruction.
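A minimal sketch of the edge-classification idea, assuming hit coordinates as node features and candidate segments as edges, might look as follows in PyTorch; this is a simplified stand-in, not the HEP.TrkX architecture itself.

    import torch
    import torch.nn as nn

    class EdgeClassifier(nn.Module):
        """Toy edge classifier: embed each hit, then score every candidate segment
        from the embeddings of its two endpoint hits (edge 'on' probability)."""
        def __init__(self, node_dim=3, hidden=32):
            super().__init__()
            self.node_net = nn.Sequential(nn.Linear(node_dim, hidden), nn.ReLU(),
                                          nn.Linear(hidden, hidden), nn.ReLU())
            self.edge_net = nn.Sequential(nn.Linear(2*hidden, hidden), nn.ReLU(),
                                          nn.Linear(hidden, 1))

        def forward(self, x, edge_index):
            h = self.node_net(x)                       # per-hit embedding
            src, dst = edge_index                      # edge_index shape: [2, n_edges]
            e = torch.cat([h[src], h[dst]], dim=1)     # concatenate endpoint embeddings
            return torch.sigmoid(self.edge_net(e)).squeeze(-1)

    # Example: 4 hits with (r, phi, z) features, 3 candidate segments 0-1, 1-2, 2-3.
    hits = torch.rand(4, 3)
    edges = torch.tensor([[0, 1, 2], [1, 2, 3]])
    scores = EdgeClassifier()(hits, edges)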
The PANDA (anti-Proton ANnihilation at DArmstadt) experiment at the future Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany, will investigate the behavior of QCD in the charmonium mass range. As a fixed-target experiment, most of the generated particles will have a forward boost. Therefore, the PANDA detector consists of a Central Spectrometer (CS), directly around the interaction point, and a Forward Spectrometer (FS) measuring the forward-going particles. The FS is located downstream of the interaction region and measures particles at small polar angles θ, below 5° in the vertical and 10° in the horizontal plane. Its magnetic field, with a maximum bending power of 2 Tm, is provided by a dipole magnet. For the measurement of particle momenta based on the deflection of their trajectories in the magnetic field of the FS dipole magnet, Forward Tracker Stations (FTS) are foreseen. The design of the FTS is based on self-supporting straw tubes arranged in three pairs of planar stations. One pair (FT1, FT2) is placed upstream of the FS dipole magnet, the second pair (FT5, FT6) downstream of the magnet, and the third pair (FT3, FT4) is placed inside the gap of the magnet. Each tracking station consists of four double layers of straws: the first and the fourth are vertical straws and the two intermediate ones are composed of straws tilted at +5° and −5°, respectively.
We apply machine/deep learning methods as a track-finding algorithm for the FTS.
The problem is divided into two steps:
The first step is to build track segments in three different parts of the FTS, namely FT1, FT2, and FT3. Two models have been tested in this step so far: the first relies on an unsupervised clustering algorithm to find track segments, while the second relies on supervised learning to combine hit pairs/triplets into track segments in the three different parts of the FTS.
The second step is to join the track segments from the different parts of the FTS to form a full track, and is based on a Recurrent Neural Network (RNN). The RNN is used as a binary classifier that outputs 1 if the combined track segments form a true track, and 0 if the track segments do not match. The performance of the algorithm is judged based on the purity, efficiency and ghost ratio of the reconstructed tracks. The purity specifies which fraction of hits in a track come from the correct particle, where the correct particle is the one that produces the large majority of hits in the track. The efficiency is defined as the ratio of the number of correctly reconstructed tracks to all generated tracks present. The ghost ratio is defined as the ratio of the number of impure tracks (which have a large fraction of hits from more than one particle) to all tracks present.
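The quality metrics defined above can be computed from truth-matched tracks along the following lines (Python sketch; the 75% purity threshold, the track representation and the handling of duplicates are illustrative assumptions).

    from collections import Counter

    def track_metrics(reco_tracks, n_generated, purity_cut=0.75):
        """Compute efficiency and ghost ratio from reconstructed tracks.
        Each reconstructed track is a list of (hit_id, true_particle_id) pairs.
        Duplicate matches to the same particle are not treated specially here."""
        n_good, n_ghost = 0, 0
        for track in reco_tracks:
            counts = Counter(pid for _, pid in track)
            _, n_majority = counts.most_common(1)[0]
            purity = n_majority / len(track)          # fraction of hits from the majority particle
            if purity >= purity_cut:
                n_good += 1                           # counted as correctly reconstructed
            else:
                n_ghost += 1                          # impure track -> ghost
        efficiency = n_good / n_generated             # correctly reconstructed / all generated
        ghost_ratio = n_ghost / len(reco_tracks)      # impure tracks / all reconstructed tracks
        return efficiency, ghost_ratio

    # Example: two tracks, one pure (particle 7) and one mixed (particles 7 and 9).
    eff, ghosts = track_metrics([[(0, 7), (1, 7), (2, 7)],
                                 [(3, 7), (4, 9), (5, 9), (6, 7)]], n_generated=2)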
Starting from 2020, future development projects for the Large Hadron Collider will steadily increase the nominal luminosity, with the ultimate goal of reaching a peak luminosity of $5\times10^{34}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}$ for the ATLAS and CMS experiments, planned for the High Luminosity LHC (HL-LHC) upgrade. This rise in luminosity will directly result in an increased number of simultaneous proton collisions (pileup), up to 200, which will pose new challenges for the CMS detector and, specifically, for track reconstruction in the Silicon Pixel Tracker.
One of the first steps of the track-finding workflow is the creation of track seeds, i.e. compatible pairs of hits from different detector layers, which are subsequently fed to higher-level pattern recognition steps. However, the set of compatible hit pairs is strongly affected by combinatorial background, which forces the subsequent steps of the tracking algorithm to process a significant fraction of fake doublets.
A possible way to reduce this effect is to take into account the shape of the hit pixel clusters when checking the compatibility between two hits. Each doublet is associated with a collection of two images built from the ADC levels of the pixels forming the hit clusters. The task of fake rejection can thus be seen as an image classification problem, for which Convolutional Neural Networks (CNNs) have been widely proven to provide reliable results.
In this work we present our studies of CNN applications to the filtering of pixel track seeds. We will show the results obtained for simulated events reconstructed in the CMS detector, focusing on the estimation of the efficiency and fake-rejection performance of our CNN classifier. The results from a first integration within the CMS tracking software will also be discussed.
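A minimal doublet classifier of the kind described above could be sketched as follows in PyTorch, stacking the two cluster ADC images as input channels; image size and layer widths are illustrative assumptions, not the CMS configuration.

    import torch
    import torch.nn as nn

    class DoubletCNN(nn.Module):
        """Toy doublet classifier: the two cluster images attached to a hit pair are
        stacked as channels of one small image and scored as genuine vs. fake."""
        def __init__(self, img_size=16):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),   # 2 channels: inner + outer cluster
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2))
            self.fc = nn.Sequential(nn.Flatten(),
                                    nn.Linear(32 * (img_size // 4) ** 2, 64), nn.ReLU(),
                                    nn.Linear(64, 1))

        def forward(self, x):                 # x: [batch, 2, img_size, img_size] ADC maps
            return torch.sigmoid(self.fc(self.conv(x))).squeeze(-1)

    # Example: score a batch of 8 doublets with 16x16 ADC images per cluster.
    scores = DoubletCNN()(torch.rand(8, 2, 16, 16))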
At the LHC, many proton-proton collisions occur at each beam crossing, leading to thousands of particles emerging from the interaction region and a vast amount of data to be analyzed by the reconstruction software.
Finding the trajectories of charged particles in the tracking devices is a particularly challenging task due to two main factors. Firstly, deciding whether a given set of hits belongs to the same trajectory is an under-specified task, and state-of-the-art models discard combinations only at a later stage, when adding more hits (track following). Secondly, even assuming a nearly perfect decision function, constructing the combinatorics to check hit compatibility with this decision function is computationally intensive and will grow exponentially at the HL-LHC.
We propose a framework for Similarity Hashing and Learning for Track Reconstruction (SHLTR), where multiple regions of the detector are reconstructed in parallel with a minimal fake rate. We use hashing to reduce the detector search space into "buckets", where the purity of the sub-regions is increased using locality sensitivity in the feature space. A neural network selects the valid combinations in the buckets and builds up full trajectories by a connected-components search, independently of the global positions of the hits and of the detector geometry. The whole process occurs simultaneously in the N regions of the detector, and curved particles are found by allowing buckets to overlap.
The framework succeeds in addressing the two main tracking challenges, decision making and computational scale, in $\mu = 200$ datasets. We present first results of such a track reconstruction chain, including efficiency, fake-rate estimates and computational performance.
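Two of the ingredients above, coarse bucketing of hits and the connected-components step, can be illustrated with the following Python sketch; the binning choices and the union-find formulation are assumptions for illustration, not the SHLTR implementation.

    import numpy as np

    def bucket_hits(phi, eta, n_phi=64, n_eta=32, eta_range=(-4.0, 4.0)):
        """Locality-sensitive bucketing: assign every hit to a coarse (phi, eta) cell so
        candidate combinations are only formed inside a cell; curved or boundary tracks
        would need overlapping buckets, as noted above."""
        phi_bin = ((phi + np.pi) / (2*np.pi) * n_phi).astype(int) % n_phi
        eta_bin = np.clip(((eta - eta_range[0]) / (eta_range[1]-eta_range[0]) * n_eta).astype(int),
                          0, n_eta - 1)
        return phi_bin * n_eta + eta_bin          # single integer bucket id per hit

    def connected_components(n_hits, accepted_pairs):
        """Union-find over hit pairs kept by the classifier; each component is a candidate."""
        parent = list(range(n_hits))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]     # path halving
                i = parent[i]
            return i
        for i, j in accepted_pairs:
            parent[find(i)] = find(j)
        return [find(i) for i in range(n_hits)]

    # Example: hits 0-1-2 linked by accepted pairs form one candidate, hit 3 another.
    buckets = bucket_hits(np.array([0.10, 0.11, 2.5]), np.array([1.0, 1.02, -2.0]))
    labels = connected_components(4, [(0, 1), (1, 2)])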
Report from the November RAMP challenge on fast vertexing
As a part of the future circular collider conceptual design study for hadron–hadron physics (FCC-hh), conceptual designs of the tracking detectors are being developed to facilitate the accurate measurement of particle products resulting from the 100-TeV collisions.
In the next decade, high-luminosity upgrades to the LHC will confront detectors with an order-of-magnitude increase in particle collisions. This will push track reconstruction software and hardware beyond current capabilities. The current track reconstruction approaches, based on track seeding and track following, allow for large contingency and hence are not optimal in terms of computational efficiency. Early fake classification, especially during the first stages of track reconstruction, offers viable opportunities for a faster, more efficient reconstruction. In an attempt to harness multiple advantages in a single approach, we investigate the applicability of Deep Neural Networks (DNNs) to the classification of track seeds. A DNN offers not just inherent parallelizability and execution on dedicated hardware, but also the possibility to improve rejection rates for improper seeds and thereby free up time for the actual track reconstruction. This approach is underpinned by the surge of high-performance, freely available deep learning frameworks, which have matured over the last few years.
A full replacement of the muon trigger system in the CMS (Compact Muon Solenoid) detector is envisaged for operation at the maximum instantaneous luminosities expected at the HL-LHC (High Luminosity Large Hadron Collider) of about $5{-}7.5\times10^{34}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}$. Under this scenario, the new on-detector electronics being designed for the DT (Drift Tubes) detector will forward all the chamber information at its maximum time resolution. A new trigger system based on the highest-performing FPGAs is being designed and will be capable of providing precise muon reconstruction and bunch-crossing identification. An algorithm easily portable to an FPGA architecture has been designed to build the trigger primitives from the DT detector. This algorithm has to reconstruct muon segments from single-wire DT hits, which carry a time uncertainty of 400 ns due to the drift time in the cell. The algorithm provides the maximum resolution achievable by the DT chambers, bringing the offline performance capabilities closer to the hardware system. The results of the simulation and of the first implementations on the new electronics test bench will be shown.
The upcoming PANDA (anti-Proton ANnihilation at DArmstadt) experiment at FAIR (Facility for Antiproton and Ion Research) offers unique possibilities for hyperon physics, such as the extraction of spin observables. Due to their relatively long-lived nature, the displaced decay vertices of hyperons pose a particular challenge for track reconstruction and event building. The foreseen high luminosity and high beam momenta at PANDA require new, advanced tracking algorithms for successfully identifying hyperon events. The purely software-based event selection of PANDA puts high demands on the online reconstruction algorithms. A fast, versatile, modular and dynamic approach to track reconstruction and event building is required. Such a scheme is currently under development in Uppsala. This talk will address the reconstruction algorithms used in the scheme, such as the cellular automaton and the Riemann fit. The computing requirements and challenges will also be discussed.
The LHCb experiment is dedicated to the study of c- and b-hadron decays, including long-lived particles such as Ks and strange baryons (Lambda, Xi, etc.). These kinds of particles are difficult to reconstruct with the LHCb tracking systems since they often escape detection in the first tracker. In this talk the performance of the tracking algorithms for detecting long-lived particles is studied and compared with other methods. Special emphasis is placed on the tracking reconstruction achievements with the new LHCb upgrade detector.
The alignment of the ATLAS Inner Detector is performed with a track-based algorithm. The aim of the detector alignment is to provide an accurate description of the detector geometry such that track parameters are accurately determined and bias-free.
A new analysis with a detailed scrutiny of the track-hit residuals allows the deformation shape of the Pixel and IBL modules to be studied. The sensor distortion can result in track-hit residual biases of up to 10 microns within a given module. Their shape is parametrized with Legendre polynomials and used to correct the hit positions in the track-fitting procedure.
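For illustration, a Legendre-polynomial parametrization of the residual shape across a module can be obtained with NumPy's legendre utilities as sketched below; the amplitudes and coordinates are toy numbers, not the ATLAS analysis.

    import numpy as np
    from numpy.polynomial import legendre

    # 'u' is the local coordinate across the module, rescaled to [-1, 1]; the fitted
    # low-order Legendre curve models the distortion and is subtracted from the hit positions.
    u = np.linspace(-1, 1, 200)
    residual = 0.008 * legendre.legval(u, [0, 0, 1]) \
               + np.random.normal(0, 0.002, u.size)       # toy residuals in mm (~8 micron bow)

    coeffs = legendre.legfit(u, residual, deg=3)           # fit P0..P3 coefficients
    correction = legendre.legval(u, coeffs)                # distortion model to subtract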
The detector alignment is validated and improved by studying resonance decays (J/psi, Upsilon and Z to mu+mu-), as well as by using information from the calorimeter system with the E/p method for electrons. The detailed study of these resonances (together with the properties of the tracks of their decay products) allows the detection and correction of alignment weak modes, such as detector curls and radial deformations, that may bias the momentum and/or the impact parameter. The weak-mode correction maps and their magnitude are then used to realign the detector with increased accuracy.
Timepix and Timepix3 detectors are 256x256 hybrid active pixel detectors, capable of tracking ionizing particles as isolated clusters of pixels. To efficiently analyze such clusters at potentially high rates, we introduce multiple randomized pattern recognition algorithms inspired by computer vision. Offering desirable probabilistic bounds on accuracy and complexity, the presented methods are well suited for use in real-time applications, and some may even be modified to tackle trans-dimensional problems. In older Timepix detectors, which do not support data-driven acquisition, they have been shown to correctly separate clusters of overlapping tracks. In modern Timepix3 detectors, simultaneous acquisition of ToA+ToT pixel data enables reconstruction of the depth coordinate, transitioning from 2D to 3D point clouds. The presented algorithms have been tested on simulated inputs and on test-beam data from the Heidelberg Ion Therapy Center and the Super Proton Synchrotron, and were applied to data acquired in the MoEDAL and ATLAS experiments at CERN.
Having started data taking at the beginning of 2018, the Belle II experiment is a substantial upgrade of the Belle detector, operating at the SuperKEKB collider at the KEK laboratory in Japan. The experiment represents the cumulative effort of a collaboration spanning experimental and detector physics, computing, and software development. Taking everything learned from the previous Belle experiment, which ran from 1998 to 2010, Belle II aims to probe deeper than ever before into the field of heavy-quark physics. By achieving an integrated luminosity of 50 ab$^{-1}$ and accumulating 50 times more data than the previous experiment across its lifetime, the Belle II experiment will push the high-precision frontier of high energy physics. Both the accelerator and the detector have already successfully completed the commissioning phase, recording the first electron-positron collisions in April 2018. This presentation will give an overview of the key accelerator and detector components that make the Belle II experiment possible, with emphasis on the pixel detector performance achieved during the Phase 2 commissioning stage.
Building particle tracks is the most computationally intense step of event reconstruction at the LHC. With the increased instantaneous luminosity and associated increase in pileup expected from the High-Luminosity LHC, the computational challenge of track finding and fitting requires novel solutions. The current track reconstruction algorithms used at the LHC are based on Kalman-filter methods that achieve good physics performance. By adapting the Kalman-filter techniques for use on many-core SIMD architectures such as the Intel Xeon and Intel Xeon Phi and (to a limited degree) NVIDIA GPUs, we are able to obtain significant speedups and comparable physics performance.
Recent work has focused on integrating the algorithm into the CMSSW environment for use in the CMS High Level Trigger during Run 3 of the LHC. New optimizations, including the removal of hits from out-of-time pileup and improvements in the ranking of hit candidates, have further increased the speedup of the algorithm and improved the track-building efficiency. The use of advanced profiling techniques has identified additional areas to target for optimization. The current structure and performance of the code and future plans for the algorithm will be discussed.
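The essence of the vectorized approach, applying the same Kalman measurement update to many track candidates at once, can be sketched in NumPy as below; the state dimension, matrices and batching scheme are illustrative assumptions, not the production CMSSW code.

    import numpy as np

    def kalman_update_batched(x, P, m, V, H):
        """Kalman-filter measurement update for N candidates in one shot.
        x: (N, n) states, P: (N, n, n) covariances, m: (N, k) measurements,
        V: (k, k) measurement noise, H: (k, n) projection matrix."""
        S = H @ P @ H.T + V                                  # (N, k, k) innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                       # (N, n, k) Kalman gain
        resid = m - x @ H.T                                  # (N, k) residuals
        x_new = x + np.einsum('nij,nj->ni', K, resid)        # updated states
        P_new = P - K @ H @ P                                # updated covariances
        return x_new, P_new

    # Example: 1000 candidates with a 5-parameter state measured in 2 coordinates.
    N, n, k = 1000, 5, 2
    H = np.zeros((k, n)); H[0, 0] = H[1, 1] = 1.0
    x = np.random.rand(N, n)
    P = np.tile(np.eye(n), (N, 1, 1))
    x, P = kalman_update_batched(x, P, np.random.rand(N, k), 0.01 * np.eye(k), H)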
In LHC Run 3, ALICE will increase the data-taking rate significantly to 50 kHz continuous readout of minimum-bias Pb-Pb collisions, instead of around 1 kHz triggered readout.
The reconstruction strategy of the online-offline computing upgrade foresees a first synchronous online reconstruction stage during data taking, enabling detector calibration, and a subsequent asynchronous reconstruction stage with full calibration.
The huge amount of data requires a significant compression to store all recorded events.
We are aiming for a factor of 20 for the TPC, which is one of the main challenges during synchronous reconstruction.
In addition, the reconstruction will run online, processing 50 times more collisions than at present, while yielding results comparable to the current offline reconstruction.
All this poses new challenges for the tracking, including the continuous TPC readout, more overlapping collisions, no a priori knowledge of the primary vertex position and of location-dependent calibration during the synchronous phase, identification of low-momentum looping tracks, and a distorted refit to improve track model entropy coding.
At the last workshop, we presented the fast new TPC tracking for Run 3, which matches the physics performance of the current Run 2 offline tracking.
It leverages the potential of hardware accelerators via the OpenCL and CUDA APIs in a shared source code for CPUs and GPUs for both reconstruction stages.
Porting more reconstruction steps like the remainder of the TPC reconstruction and tracking for other detectors to GPU will shift the computing balance from traditional processors towards GPUs.
This presentation will focus on the global tracking strategy, including the ITS and TRD detectors, on offloading more reconstruction steps onto GPU, and on our approaches to achieving the necessary data compression.
Computing time is becoming a key issue for tracking algorithms, both online and offline. Programming with adequate data structures can largely improve the efficiency of the reconstruction in terms of time response. We propose using one such data structure, the R-tree, which performs fast, flexible and custom spatial indexing of the hits based on a neighbourhood organization. The overhead required to prepare the data structure proves to be largely compensated by the efficiency of the search for candidate hits belonging to the same track when events contain a large number of hits. The study, including different indexing approaches, is performed for a generic pixel tracker largely inspired by the upgrade of the LHCb vertex locator, with a backwards reconstruction algorithm of the cellular automaton type.
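As an illustration of the R-tree idea, the sketch below uses the Python rtree package (assumed available; a binding to libspatialindex) to index hits and query a small search window around a predicted position; the coordinates and window size are made up.

    from rtree import index   # Python bindings for libspatialindex (assumed installed)

    # Spatially index hits so that, when extending a track candidate, only hits inside
    # a small window around the predicted position are retrieved, instead of scanning
    # the whole event.
    hits = [(1, (10.1, 3.2)), (2, (10.4, 3.3)), (3, (55.0, -7.1))]   # (hit_id, (x, y))

    idx = index.Index()
    for hit_id, (x, y) in hits:
        idx.insert(hit_id, (x, y, x, y))          # points stored as degenerate boxes

    # Query the neighbourhood of a predicted position (x=10.2, y=3.25) within +/- 0.5.
    window = (10.2 - 0.5, 3.25 - 0.5, 10.2 + 0.5, 3.25 + 0.5)
    nearby = list(idx.intersection(window))       # -> hit ids 1 and 2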
The CMS experiment at the LHC is designed to study a wide range of high energy physics phenomena. It employs a large all-silicon tracker within a 3.8 T solenoidal magnetic field, which allows precise measurements of transverse momentum (pT) and vertex position.
This tracking detector will be upgraded to coincide with the installation of the High-Luminosity LHC, which will provide luminosities of up to about 10^35 cm^-2 s^-1 to CMS, or 200 collisions per 25 ns bunch crossing. The new tracker must maintain the nominal physics performance in this more challenging environment. Novel tracking modules that utilise closely spaced silicon sensors to discriminate on track pT have been developed, allowing the readout of only those hits compatible with pT > 2-3 GeV tracks to off-detector trigger electronics. This would allow the use of tracking information at the Level-1 trigger of the experiment, a requirement for keeping the Level-1 trigger rate below the 750 kHz target while maintaining physics sensitivity.
This talk presents a concept for an all-FPGA track finder using a fully time-multiplexed architecture. Hardware demonstrators have been assembled to prove the feasibility and capability of such a system. The performance for a variety of physics scenarios will be presented, as well as the proposed scaling of the demonstrators to the final system and new technologies.
The tracking system of Belle II consists of a silicon vertex detector and a cylindrical drift chamber, both operating in a magnetic field created by the main 1.5 T solenoid and the final-focusing magnets. The drift chamber consists of 56 layers of sense wires, arranged in interleaved axial and stereo superlayers to assist track finding and provide full 3D tracking. The drift chamber serves as the main detector for track finding in Belle II. Two distinct track-finding algorithms, local and global, are employed for this purpose, and the found track candidates are combined, fitted and extrapolated into the silicon vertex detector using a combinatorial Kalman filter algorithm. A distinct feature of the Belle II tracking is its modularity, allowing for changes in the algorithm sequence to optimize the overall performance. Another feature is the use of multivariate estimators for noise filtering and track-candidate selection. The reconstruction has been tested on e+ e- collision data collected in the phase 2 operation during spring 2018. The good performance of the drift chamber and tracking reconstruction allowed the rediscovery of many physics channels and was essential for tuning the accelerator parameters.
Conformal tracking is the innovative track reconstruction strategy adopted for the detector designed for CLIC.
It features a pattern recognition in a conformal-mapped plane, where helix trajectories of charged particles in a magnetic field are projected into straight lines, followed by a Kalman-Filter-based fit in global space.
The nearest-neighbour search is optimized by means of fast k-d trees, and a cellular automaton is used to reconstruct the linear paths.
Being based exclusively on the spatial coordinates of the hits, this algorithm is adaptable to different detector designs and beam conditions. In the detector at CLIC, it also profits from the low-mass silicon tracking system, which reduces complications from multiple scattering and interactions.
Full-simulation studies are performed with the iLCSoft framework developed by the Linear Collider community, in order to validate the algorithm and assess its performance, also in the presence of the beam-induced backgrounds expected at 3 TeV CLIC.
In this talk, recent developments and new features of the track reconstruction chain will be discussed. Results will be shown for isolated tracks and di-jet events.
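The conformal mapping at the heart of this strategy can be illustrated in a few lines: circles through the origin in the transverse plane become straight lines in the (u, v) plane. The Python sketch below uses toy hits and is not the CLIC software.

    import numpy as np

    def conformal_map(x, y):
        """Conformal transformation u = x/(x^2+y^2), v = y/(x^2+y^2): circles through
        the origin (projections of prompt helices) become straight lines, so a simple
        linear / cellular-automaton search can find them."""
        r2 = x**2 + y**2
        return x / r2, y / r2

    # Example: hits on a circle through the origin (centre (5, 0), radius 5).
    phi = np.linspace(0.2, 2.5, 8)
    x, y = 5 + 5*np.cos(phi), 5*np.sin(phi)
    u, v = conformal_map(x, y)
    # After mapping, the points satisfy u = 0.1 for all hits: a straight (vertical) line.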
The High Luminosity LHC (HL-LHC) aims to increase the LHC data-set by an order of magnitude in order to increase its potential for discoveries. Starting from the middle of 2026, the HL-LHC is expected to reach the peak instantaneous luminosity of $7.5\times 10^{34}\text{cm}^{-2}\text{s}^{-1}$ which corresponds to up to about 200 inelastic proton-proton collisions per bunch crossing. To cope with the large radiation doses and high pileup, the current ATLAS Inner Detector will be replaced with a new all-silicon Inner Tracker. In this talk the expected tracking performance of the HL-LHC tracker is presented. Impact of tracking on physics object reconstruction is discussed.
Beginning in 2021, the upgraded LHCb experiment will use a triggerless readout system collecting data at an event rate of 30 MHz. A software-only High Level Trigger will enable unprecedented flexibility for trigger selections. During the first stage (HLT1), a subset of the full offline track reconstruction for charged particles is run to select particles of interest based on single- or two-track selections. After this first stage, the event rate is reduced by at least a factor of 30. Track reconstruction at 30 MHz represents a significant computing challenge, requiring a renovation of current algorithms and the underlying hardware. In this talk, we present the approach of executing the full HLT1 chain on GPUs. This includes decoding the raw data, clustering of hits, pattern recognition, as well as track fitting. We will discuss the infrastructure of our software project and the design of HLT1 algorithms optimized for many-core architectures. Both the computing and physics performance of the full HLT1 chain will be presented.
The performance on data of the standalone silicon track finder, based on the Sector Map concept developed for the Belle II vertex detector (VXD), will be presented.
The Belle II VXD is a combined tracking system composed of two layers of DEPFET pixel detectors together with four layers of double-sided silicon strip sensors (SVD).
The VXD records e+ e- collisions occurring at the interaction point of SuperKEKB, the asymmetric e+ e- collider operating at the Y(4S) mass peak. The track-finder algorithm must operate in a very harsh environment, characterized by very high occupancy stemming from beam background, and it must be very efficient at finding very soft charged tracks (total momentum around 50 MeV/c) that lie in the lowest range of the momentum spectrum of the B-meson decay products. To achieve this demanding goal, a set of novel algorithms that fully exploit the excellent time and spatial resolution of the SVD has been developed and tuned. The experience gained so far on a sample of half an inverse femtobarn collected in 2018 with a section of the full VXD will be presented, together with the very first results from the phase 3 run starting in March 2019.
To explore what our universe is made of, scientists at CERN are colliding protons, essentially recreating mini big bangs, and meticulously observing these collisions with intricate silicon detectors.
While orchestrating the collisions and observations is already a massive scientific accomplishment, analyzing the enormous amounts of data produced from the experiments is becoming an overwhelming challenge.
Event rates have already reached hundreds of millions of collisions per second, meaning physicists must sift through tens of petabytes of data per year. And, as the resolution of detectors improves, ever better software is needed for real-time pre-processing and filtering of the most promising events, producing even more data.
To help address this problem, a team of machine learning experts and physicists working at CERN (the world's largest high energy physics laboratory) has partnered with Kaggle and prestigious sponsors to answer the question: can machine learning assist high energy physics in discovering and characterizing new particles?
Specifically, in this competition, you’re challenged to build an algorithm that quickly reconstructs particle tracks from 3D points left in the silicon detectors. This challenge consists of two phases:
The Accuracy phase ran on Kaggle from May to 13 August 2018 (winners to be announced by the end of September). Here the focus is on the highest score, irrespective of the evaluation time. This phase is an official IEEE WCCI competition (Rio de Janeiro, July 2018).
The Throughput phase will run on Codalab starting in September 2018. Participants will submit their software, which is evaluated by the platform. The incentive is on the throughput (or speed) of the evaluation while still reaching a good score. This phase is an official NIPS competition (Montreal, December 2018).
All the necessary information for the Accuracy phase is available on the Kaggle site; see also the overall TrackML challenge web site.
Remote sensing for earth observation is a tool that provides a wealth of information to monitor vegetation, ice, water, sea levels, atmospheric temperature and wind, among other physical phenomena. This information comes in the form of multi- and hyper-spectral images covering the whole spectrum of emission of the sun. The data need to be prepared before use: optically corrected, thermally calibrated, and co-registered to localisation coordinates in order to be used for physical parameter extraction. The computing model and processing steps have large similarities to those in particle physics.
CMOS sensors are emerging as one of the main candidate technologies for future tracking detectors at high-luminosity colliders. Their capability of integrating the sensing diode into the CMOS wafer hosting the front-end electronics allows for reduced noise and higher signal sensitivity. They are suitable for high-radiation environments due to the possibility of applying a high depletion voltage and the availability of relatively high-resistivity substrates. The use of a commercial CMOS fabrication process reduces their cost and allows faster construction of large-area detectors. A general perspective of the state of the art of these devices will be given in this contribution, as well as a summary of the main developments carried out on these devices in the framework of the CERN RD50 collaboration.
A variety of Beyond the Standard Model (BSM) theories predict new particles with macroscopic lifetimes of $c\tau\geq {\cal O}(1~{\rm mm})$ that could be created in proton-proton collisions at the Large Hadron Collider (LHC). Such theories often give rise to signatures that require dedicated tracking and vertexing techniques beyond conventional tracking algorithms. In this talk, a variety of unconventional tracking and vertexing techniques for long-lived particle searches in ATLAS will be discussed, including a secondary vertex algorithm for heavy neutral particles decaying to hadronic and leptonic final states within the ATLAS Inner Detector, a disappearing charged track reconstruction technique, which extends to trajectories with as few as three position measurements in the ATLAS pixel detector, and a new region-of-interest track seeding technique for low momentum tracking to target tracks originating from the long-lived charged particle decays within the Inner Detector. The performance of these unconventional tracking techniques will be discussed in the context of a variety of BSM theories in preparation for full Run 2 analyses with the ATLAS detector.
We present a novel 4D fast-tracking system, based on rad-hard pixel detectors and front-end electronics, capable of reconstructing four-dimensional particle trajectories in real time using precise space and time information of the hits. The fast track-finding system that we are proposing is designed for the high-luminosity phase of the LHC and has embedded tracking capabilities. A massively parallel algorithm for fast track reconstruction has been implemented in commercial FPGAs using a pipelined architecture. We will present studies of the expected tracking performance for a possible pixel detector of a future upgrade of the LHCb experiment, and first results based on an FPGA hardware prototype.
The increasing track multiplicity in ATLAS poses new challenges for primary vertex reconstruction software, with over 70 inelastic proton-proton collisions per beam crossing reached during Run 2 of the LHC and even more extreme vertex densities expected in the upcoming Runs. One way to address these challenges is to take a global approach to the assignment of tracks to primary vertices, as opposed to the Iterative Vertex Finder procedure used in Run 2.
The Adaptive Multi Vertex Finder, a true multi-vertex implementation of the adaptive vertex finder algorithm, is one such approach. It deploys the same adaptive vertex fitting technique as the Iterative Vertex Finder procedure, but fits N vertices in parallel to take into account the vertex structure of the event.
This talk summarises the optimization and expected performance of the Adaptive Multi Vertex Finder for conditions foreseen in Run 3 of the LHC. These studies are coupled to a newly optimised vertexing seeder and to further performance studies in the ITk scenario.
A novel combination of data analysis techniques is proposed for the reconstruction of all tracks of primary charged particles, as well as of daughters of displaced vertices (decays, photon conversions, nuclear interactions), created in high energy collisions. Instead of performing a classical trajectory building or an image transformation, an efficient use of both local and global information is undertaken while keeping competing choices open.
The measured hits of adjacent tracking layers are clustered first with the help of a mutual nearest-neighbour search in angular distance. The resulting chains of connected hits are used as initial clusters and as input for cluster analysis algorithms, such as robust $k$-medians clustering. The latter proceeds by alternating between the hit-to-track assignment and the track-fit update steps until convergence. The calculation of the hit-to-track distance and of the track-fit $\chi^2$ is performed through the global covariance of the measured hits. The clustering is complemented with elements from a more sophisticated Metropolis-Hastings MCMC algorithm, with the possibility of adding new track hypotheses or removing unnecessary ones.
Simplified but realistic models of today's silicon trackers, including the relevant physics processes, are employed to test and study the performance (efficiency, purity) of the proposed method as a function of the particle multiplicity in the collision event.
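A toy version of the first clustering step, the mutual nearest-neighbour pairing of hits on adjacent layers in angular distance, is sketched below in Python; the chain-building, k-medians and MCMC steps are omitted and all inputs are illustrative.

    import numpy as np

    def mutual_nearest_neighbor_pairs(layer_a, layer_b):
        """For hits on two adjacent layers (given as unit direction vectors), keep a
        pair only if each hit is the angularly closest partner of the other; the
        surviving pairs would then be chained into initial clusters."""
        cosang = layer_a @ layer_b.T                  # cosine of angular distance, all pairs
        best_b_for_a = np.argmax(cosang, axis=1)      # closest B hit for each A hit
        best_a_for_b = np.argmax(cosang, axis=0)      # closest A hit for each B hit
        return [(i, j) for i, j in enumerate(best_b_for_a) if best_a_for_b[j] == i]

    # Example: three hits per layer as unit direction vectors.
    def unit(v): return np.asarray(v, dtype=float) / np.linalg.norm(v)
    a = np.array([unit([1, 0.10, 0]), unit([0, 1, 0.20]), unit([0.5, 0.50, 1])])
    b = np.array([unit([1, 0.12, 0]), unit([0, 1, 0.25]), unit([0.5, 0.45, 1])])
    pairs = mutual_nearest_neighbor_pairs(a, b)       # -> [(0, 0), (1, 1), (2, 2)]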
In the ATLAS experiment at the LHC, the primary-track reconstruction algorithm uses iterative track finding seeded from combinations of silicon detector measurements. Once all realistic combinations of space-points have been formed, there are a number of track candidates in which space-points overlap or have been incorrectly assigned. This necessitates an ambiguity-solving stage. In the ambiguity solver, the track candidates considered for the reconstructed track collection are processed individually in descending order of a track score, favouring tracks with a higher score. The scoring algorithm depends on simple measures of the track quality, including the $\chi^2$ of the track fit, cuts on hits and holes on the track, and merged clusters. In this talk, modifications to the ambiguity solver aimed at improving track reconstruction in the dense core of high-pT jets will be discussed. These modifications include the application of machine learning techniques to the scoring function and to the classification of merged tracks.