Conveners
Track 2: Data Analysis - Algorithms and Tools
- Tommaso Dorigo (Universita e INFN, Padova (IT))
- David Rousseau (LAL-Orsay, FR)
- Jean-Roch Vlimant (California Institute of Technology (US))
- Andy Buckley (University of Glasgow (GB))
- Jennifer Ngadiuba (CERN)
- Wouter Verkerke (Nikhef National institute for subatomic physics (NL))
- Oleg Kalashev (Institute for Nuclear Research RAS)
- Kazuhiro Terao (SLAC)
In the High-Luminosity Large Hadron Collider (HL-LHC), one of the most challenging computational problems is expected to be finding and fitting charged-particle tracks during event reconstruction. The methods currently in use at the LHC are based on the Kalman filter. Such methods have been shown to be robust and to provide good physics performance, both in the trigger and offline. In order to...
ConformalTracking is an open-source library created in 2015 to serve as a detector-independent solution for track reconstruction in detector development studies at CERN. Pattern recognition is one of the most CPU-intensive tasks of event reconstruction at present and future experiments. Current tracking programs of the LHC experiments are mostly tightly linked to individual detector...
To address the unprecedented scale of HL-LHC data, the HEP.TrkX project has been investigating a variety of machine learning approaches to particle track reconstruction. The most promising of these solutions, a graph neural network, processes the event as a graph that connects track measurements (detector hits corresponding to nodes) with candidate line segments between the hits (corresponding...
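As a rough illustration of the hits-as-nodes, segments-as-edges idea (this is not the HEP.TrkX model itself, which also iterates learned edge and node updates; the hit format, the loose adjacency cut and the network sizes below are invented for the sketch):

    # Sketch: build a hit graph and score candidate segments.
    # Not the HEP.TrkX implementation; cuts and sizes are placeholders.
    import itertools
    import torch
    import torch.nn as nn

    def build_hit_graph(hits):
        """hits: list of (layer, r, phi, z). Nodes are hits; edges are candidate
        segments joining hits on adjacent layers that pass a loose angular cut."""
        nodes = torch.tensor([[r, phi, z] for _, r, phi, z in hits], dtype=torch.float32)
        edges = [(i, j)
                 for (i, hi), (j, hj) in itertools.combinations(enumerate(hits), 2)
                 if hj[0] == hi[0] + 1 and abs(hj[2] - hi[2]) < 0.1]
        return nodes, torch.tensor(edges, dtype=torch.long)

    class EdgeClassifier(nn.Module):
        """Scores each candidate segment from the features of its two endpoint hits."""
        def __init__(self, n_features=3, hidden=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2 * n_features, hidden), nn.ReLU(),
                nn.Linear(hidden, 1), nn.Sigmoid())

        def forward(self, nodes, edges):
            pairs = torch.cat([nodes[edges[:, 0]], nodes[edges[:, 1]]], dim=1)
            return self.net(pairs).squeeze(-1)   # probability that the segment is real

Track candidates are then assembled from high-scoring edges; the graph network described in the abstract additionally exchanges information between connected hits and segments over several iterations before scoring.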
Machine learning methods are integrated into the pipelined first level track trigger of the upgraded flavor physics experiment Belle II in Tsukuba, Japan. The novel triggering techniques cope with the severe background conditions coming along with the upgrade of the instantaneous luminosity by a factor of 40 to $\mathcal{L} = 8 \times 10^{35}\,\text{cm}^{-2}\,\text{s}^{-1}$. Using the precise...
With the upgrade of the LHC to high luminosity, an increased rate of collisions will place a higher computational burden on track reconstruction algorithms. Typical algorithms such as the Kalman filter and Hough-like transformations scale worse than quadratically. However, the energy function of a traditional method for tracking, the geometric Denby-Peterson (Hopfield) network method, can be...
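For reference, a commonly quoted form of the Denby-Peterson energy (conventions and penalty terms differ between implementations, so this is indicative only) is

$$ E = -\sum_{i,j,k} \frac{\cos^{\lambda}\theta_{ijk}}{r_{ij}+r_{jk}}\, S_{ij} S_{jk} \;+\; \frac{\alpha}{2}\Big(\sum_{j}\sum_{i\neq k} S_{ij} S_{kj} + \sum_{i}\sum_{j\neq l} S_{ij} S_{il}\Big) \;+\; \frac{\beta}{2}\Big(\sum_{i,j} S_{ij} - N\Big)^{2}, $$

where $S_{ij}\in[0,1]$ is the activation of the candidate segment joining hits $i$ and $j$, $\theta_{ijk}$ the angle between consecutive segments, $r_{ij}$ the segment length and $N$ the expected number of true segments. The first term rewards chains of short, nearly collinear segments, the penalty terms suppress bifurcations and fix the number of active segments, and minimizing $E$ with mean-field (Hopfield) updates yields the track candidates.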
Machine learning is becoming ubiquitous across HEP. There is great potential to improve trigger and DAQ performance with it. However, the exploration of such techniques within the field on low-latency, low-power FPGAs has only just begun. We present hls4ml, a user-friendly software package based on High-Level Synthesis (HLS), designed to deploy network architectures on FPGAs. As a case study, we use hls4ml...
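As an indication of the intended workflow (argument names vary between hls4ml releases, and the model file here is a hypothetical pre-trained Keras classifier):

    # Convert a trained Keras model into an FPGA firmware project with hls4ml.
    # "jet_tagger.h5" is a hypothetical pre-trained classifier; keyword names
    # (e.g. for the FPGA part) differ between hls4ml versions.
    import hls4ml
    from tensorflow.keras.models import load_model

    model = load_model("jet_tagger.h5")

    config = hls4ml.utils.config_from_keras_model(model, granularity="model")
    hls_model = hls4ml.converters.convert_from_keras_model(
        model, hls_config=config, output_dir="hls4ml_prj")

    hls_model.compile()    # bit-accurate C simulation of the firmware model
    # hls_model.build()    # full HLS synthesis (requires the vendor HLS toolchain)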
Finding tracks downstream of the magnet at the earliest LHCb trigger level is not part of the baseline plan of the Upgrade trigger, on account of the significant CPU time required to execute the search. Many long-lived particles, such as $K_S$ and strange baryons, decay after the vertex locator (VELO), so that their reconstruction efficiency is limited. We present a study of the...
In the transition to Run 3 in 2021, LHCb will undergo a major luminosity upgrade, going from 1.1 to 5.6 expected visible Primary Vertices (PVs) per event, and will adopt a purely software trigger. This has fueled increased interest in alternative highly-parallel and GPU friendly algorithms for tracking and reconstruction. We will present a novel prototype algorithm for vertexing in the LHCb...
The Belle II experiment, beginning data taking with the full detector in early 2019, is expected to produce a volume of data fifty times that of its predecessor. With this dramatic increase in data comes the opportunity for studies of rare, previously inaccessible processes. The investigation of such rare processes in a high data volume environment requires a correspondingly high volume of...
Generative models, and in particular generative adversarial networks, are gaining momentum in HEP as a possible way to speed up the event simulation process. Traditionally, GAN models applied to HEP are designed to return images. On the other hand, many applications (e.g., analyses based on particle flow) are designed to take as input lists of particles. We investigate the possibility of using...
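A minimal sketch of a generator that returns a particle list rather than an image (the latent size, particle multiplicity and per-particle features are invented for illustration; the discriminator and training loop are omitted):

    # Toy GAN generator producing a fixed-length list of particles, each
    # described by (pT, eta, phi); all sizes are illustrative placeholders.
    import torch
    import torch.nn as nn

    class ParticleListGenerator(nn.Module):
        def __init__(self, latent_dim=16, n_particles=30, n_features=3):
            super().__init__()
            self.n_particles, self.n_features = n_particles, n_features
            self.net = nn.Sequential(
                nn.Linear(latent_dim, 128), nn.ReLU(),
                nn.Linear(128, n_particles * n_features))

        def forward(self, z):
            out = self.net(z)
            return out.view(-1, self.n_particles, self.n_features)  # (batch, particle, feature)

    z = torch.randn(64, 16)                    # batch of latent noise vectors
    fake_events = ParticleListGenerator()(z)   # shape (64, 30, 3)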
At present, the most convenient approach to electromagnetic shower generation is Monte Carlo simulation with software packages such as GEANT4. However, one of the critical problems of Monte Carlo production is that it is extremely slow, since it involves simulating numerous subatomic interactions. Recently, generative adversarial networks (GANs) have addressed the speed issue in the simulation...
The increasing luminosities of future LHC runs and next generation of collider experiments will require an unprecedented amount of simulated events to be produced. Such large scale productions are extremely demanding in terms of computing resources. Thus new approaches to event generation and simulation of detector responses are needed. In LHCb the simulation of the RICH detector using the...
An extensive upgrade programme has been developed for the LHC and its experiments, which is crucial to allow the complete exploitation of the extremely high-luminosity collision data. The programme is staged in two phases, with the main interventions foreseen in Phase II. For this second phase, the main hadronic calorimeter of ATLAS (TileCal) will have its readout electronics redesigned, but the...
The Belle II experiment at the SuperKEKB e+e- collider has completed its first-collisions run in 2018. The experiment is currently preparing for physics data taking in 2019. The electromagnetic calorimeter of the Belle II detector consists of 8,736 Thallium-doped CsI crystals with PIN-photodiode readout. Each crystal is equipped with waveform digitizers that allow the extraction of energy,...
The ATLAS experiment records data from the proton-proton collisions produced by the Large Hadron Collider (LHC). The Tile Calorimeter is the hadronic sampling calorimeter of ATLAS in the region |η| < 1.7. It uses iron absorbers, with scintillating tiles as the active material. Jointly with the other calorimeters it is designed for the reconstruction of hadrons, jets, tau particles and missing transverse...
We introduce a novel implementation of a reinforcement learning algorithm which is adapted to the problem of jet grooming, a crucial component of jet physics at hadron colliders. We show that the grooming policies trained using a Deep Q-Network model outperform state-of-the-art tools used at the LHC such as Recursive Soft Drop, allowing for improved resolution of the mass of boosted objects....
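Schematically, and only as a generic illustration of such a setup (the state features, network size and action space below are placeholders, not the configuration used in the talk), the agent walks through the declustering tree of a jet and a Q-network decides at each branching whether to keep or remove the softer branch:

    # Schematic Deep Q-Network for a keep/remove decision at each declustering step.
    # State features (e.g. log kT, delta R, momentum fraction z) are placeholders.
    import torch
    import torch.nn as nn

    class GroomerQNet(nn.Module):
        def __init__(self, n_state=3, n_actions=2, hidden=64):  # actions: keep / remove softer branch
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_state, hidden), nn.ReLU(),
                nn.Linear(hidden, n_actions))

        def forward(self, state):
            return self.net(state)             # one Q-value per action

    def groom_step(qnet, state, epsilon=0.05):
        """Epsilon-greedy choice for one node of the declustering tree."""
        if torch.rand(1).item() < epsilon:
            return torch.randint(0, 2, (1,)).item()
        return qnet(state).argmax().item()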
A large part of the success of deep learning in computer science can be attributed to the introduction of dedicated architectures exploiting the underlying structure of a given task. As deep learning methods are adopted for high energy physics, increasing attention is thus directed towards the development of new models incorporating physical knowledge.
In this talk, we present a network...
I describe a novel interactive virtual reality visualization of the Belle II detector at KEK and the animation therein of GEANT4-simulated event histories. Belle2VR runs on Oculus and Vive headsets (as well as in a web browser and on 2D computer screens, in the absence of a headset). A user with some particle-physics knowledge manipulates a gamepad or hand controller(s) to interact with and...
Multivariate analyses in particle physics often reach a precision such that their uncertainties are dominated by systematic effects. While there are known strategies to mitigate systematic effects based on adversarial neural networks, the application of Boosted Decision Trees (BDTs) has so far had to ignore systematics in the training. We present a method to incorporate systematic uncertainties into a...
Analysis in high-energy physics usually deals with data samples populated from different sources. One of the most widely used ways to handle this is the sPlot technique. In this technique, the results of a maximum likelihood fit are used to assign per-event weights that disentangle signal from background. Some events are assigned negative weights, which makes it difficult to apply machine...
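For context, the sWeights referred to here take the standard form of Pivk and Le Diberder: for an event with discriminating variable $y_e$,

$$ w_s(y_e) = \frac{\sum_{j} V_{sj}\, f_j(y_e)}{\sum_{k} N_k\, f_k(y_e)}, $$

where the $f_k$ are the normalized per-species PDFs of the discriminating variable, the $N_k$ are the fitted yields and $V$ is the covariance matrix of those yields. Because the off-diagonal elements of $V$ are generally negative, individual events can receive negative weights, which is the complication for machine learning methods mentioned above.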
Variable-dependent scale factors are commonly used in HEP to improve the shape agreement between data and simulation. The choice of the underlying model is of great importance, but often requires a lot of manual tuning, e.g. of bin sizes or fitted functions. This can be alleviated through the use of neural networks and their inherent powerful data-modeling capabilities. We present a novel and...
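One widely used neural-network realization of such data-driven corrections, given here only as a generic illustration and not necessarily the method of this contribution, trains a classifier to separate data from simulation and converts its output into a per-event weight via the likelihood-ratio trick:

    # Generic classifier-based reweighting sketch (likelihood-ratio trick);
    # the network size is a placeholder, and the relative normalization of the
    # two samples still has to be applied to the returned weights.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def learn_scale_factors(sim_features, data_features):
        X = np.vstack([sim_features, data_features])
        y = np.concatenate([np.zeros(len(sim_features)), np.ones(len(data_features))])
        clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)
        p = clf.predict_proba(sim_features)[:, 1]
        return p / (1.0 - p)   # per-event weight proportional to the data/simulation density ratio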
Complex computer simulations are commonly required for accurate data modelling in many scientific disciplines, including experimental High Energy Physics, making statistical inference challenging due to the intractability of the likelihood evaluation for the observed data. Furthermore, sometimes one is interested in inference drawn over a subset of the generative model parameters while taking...
A large number of physics processes as seen by ATLAS at the LHC manifest as collimated, hadronic sprays of particles known as "jets". Jets originating from the hadronic decay of a massive particle are commonly used both in measurements of the Standard Model and in searches for new physics. The ATLAS experiment has applied machine learning discriminants to the challenging task of...
In radio-based physics experiments, sensitive analysis techniques are often required to extract signals at or below the level of noise. For a recent experiment at the SLAC National Accelerator Laboratory to test a radar-based detection scheme for high energy neutrino cascades, such a sensitive analysis was employed to dig down into a spurious background and extract a signal. This analysis...
The GAMBIT collaboration is a new effort in the world of global BSM fitting -- the combination of the largest possible set of observational data from across particle, astroparticle, and nuclear physics to gain a synoptic view of what experimental data has to say about models of new physics. Using a newly constructed, open-source code framework, GAMBIT has released several state-of-the-art scans of...
Data analysis based on forward simulation often requires the use of a machine learning model for statistical inference of the parameters of interest. Most of the time these learned models are trained to discriminate between signal and background events to produce a 1D score, which is used to select a relatively pure signal region. The training of the model does not take into account the final...
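Schematically, the baseline workflow being referred to looks as follows (hypothetical names, shown only to make the two decoupled steps explicit):

    # Baseline: train a signal/background classifier, then cut on its 1D score
    # to define a signal-enriched region; the cut is chosen independently of the
    # downstream statistical inference, which is the limitation discussed here.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    def select_signal_region(X_train, y_train, X_data, threshold=0.9):
        clf = GradientBoostingClassifier().fit(X_train, y_train)  # y: signal=1, background=0
        score = clf.predict_proba(X_data)[:, 1]                   # 1D score per event
        return X_data[score > threshold]                          # events entering the measurement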
A common goal in the search for new physics is the determination of sets of New Physics models, typically parametrized by a number of parameters such as masses or couplings, that are either compatible with the observed data or excluded by it, where determining into which category a given model belongs requires an expensive computation of the expected signal. This problem may be abstracted...
The Belle II experiment is an e+e- collider experiment in Japan, which begins its main physics run in early 2019. The clean environment of e+e- collisions together with the unique event topology of Belle II, in which an Υ(4S) particle is produced and subsequently decays to a pair of B mesons, allows a wide range of physics measurements to be performed which are difficult or impossible at...
We investigate the problem of dark matter detection in an emulsion detector. Previously we have shown that it is very challenging but possible to use emulsion films of an OPERA-like detector in the SHiP experiment to separate electromagnetic showers from each other, thus hypothetically separating neutrino events from dark matter. In this study, we have investigated the possibility of using the Target...
Ground-based $\gamma$-ray astronomy relies on reconstructing the primary particles' properties from the measurement of the induced air showers. Currently, template fitting is the state-of-the-art method to reconstruct air showers. CNNs represent a promising means to improve on this method in both accuracy and computational cost. Promoted by the availability of inexpensive hardware and open-source...
In recent years, the astroparticle physics community has successfully adapted supervised learning algorithms for a wide range of tasks, including event reconstruction in cosmic ray observatories[1], photon identification at Cherenkov telescopes[2], and the extraction of gravitational wave signals from time traces[3]. In addition, first unsupervised learning approaches of generative models at...
From a breakthrough innovation, Deep Learning (DL) has grown into a de facto standard technique in the fields of artificial intelligence and computer vision. In particular, Convolutional Neural Networks (CNNs) have been shown to be a powerful DL technique for extracting physics features from images: they were successfully applied to the data reconstruction and analysis of Liquid Argon Time...
PICO is a dark matter experiment using superheated bubble chamber technology. One of the main analysis challenges in PICO is to unambiguously distinguish between background events and nuclear recoil events from possible WIMP scatters. The conventional discriminator, acoustic parameter (AP), utilizes frequency analysis in Fourier space to compute the acoustic power, which is proven to be...
Deep learning architectures in particle physics are often strongly dependent on the order of their input variables. We present a two-stage deep learning architecture consisting of a network for sorting input objects and a subsequent network for data analysis. The sorting network (agent) is trained through reinforcement learning using feedback from the analysis network (environment). A tree...
Accurate particle identification (PID) is one of the most important aspects of the LHCb experiment. Modern machine learning techniques such as deep neural networks are efficiently applied to this problem and are integrated into the LHCb software. In this research, we discuss novel applications of neural network speed-up techniques to achieve faster PID in LHC upgrade conditions. We show that...
Using variational autoencoders trained on known physics processes, we develop a one-sided p-value test to isolate previously unseen event topologies as outlier events. Since the autoencoder training does not depend on any specific new physics signature, the proposed procedure has a weak dependence on underlying assumptions about the nature of new physics. An event selection based on this...
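Schematically (with a placeholder for the per-event autoencoder loss, whose exact definition depends on the model), the one-sided test can be implemented as:

    # One-sided p-value test on a per-event autoencoder loss: the p-value of an
    # observed event is the fraction of reference (known-physics) events whose
    # loss is at least as large; small p-values flag outlier events.
    import numpy as np

    def outlier_pvalues(loss_reference, loss_observed):
        loss_reference = np.sort(np.asarray(loss_reference))
        n = len(loss_reference)
        # count reference losses >= each observed loss
        greater_equal = n - np.searchsorted(loss_reference, loss_observed, side="left")
        return greater_equal / n

    # e.g. flag events with p < 1e-3 as candidates for unseen topologies
    # anomalous = events[outlier_pvalues(ref_losses, obs_losses) < 1e-3]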
We present recent work in deep learning for particle physics and cosmology at NERSC, the US Dept. of Energy mission HPC centre. We will describe activity in new methods and applications; distributed training across HPC resources; and plans for accelerated hardware for deep learning in NERSC-9 (Perlmutter) and beyond.
Some of the HEP methods and applications showcased include conditional...
The next generation of astronomical surveys will revolutionize our understanding of the Universe, raising unprecedented data challenges in the process. One of them is the impossibility of relying on human scanning for the identification of unusual or unpredicted astrophysical objects. Moreover, given that most of the available data will be in the form of photometric observations, such...
Although the standard model of particle physics is successful in describing physics as we know it, it is known to be incomplete. Many models have been developed to extend the standard model, none of which have been experimentally verified. One of the main hurdles in this effort is the dimensionality of these models, yielding problems in analysing, visualising and communicating results. Because...
The High-Luminosity upgrade of LHC (HL-LHC) is expected to deliver a total luminosity of 3000 fb$^{-1}$ to the general purpose experiments. This will allow the measurement of Standard Model processes with unprecedented precision, and will significantly increase the reach of searches for new physics. Higher data rates and increased radiation levels will require substantial upgrades to the...
Universal Quantum Computing may still be a few years away, but we have entered the Noisy Intermediate-Scale Quantum era which ranges from D-Wave commercial Quantum Annealers to a wide selection of gate-based quantum processor prototypes. These provide us with the opportunity to evaluate the potential of quantum computing for HEP applications.
We will present early results from the DOE HEP.QPR...