Conveners
Track 2: Data Analysis - Algorithms and Tools: Parallel Sessions
- Sergei Gleyzer (University of Florida (US))
- Toby Burnett (University of Washington)
- Kyle Stuart Cranmer (New York University (US))
- Axel Naumann (CERN)
Neural networks are going to be used in the pipelined first level trigger of the upgraded flavor physics experiment Belle II at the high luminosity B factory SuperKEKB in Tsukuba, Japan. A luminosity of $\mathcal{L} = 8 \times 10^{35}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ is anticipated, 40 times larger than the world record reached with the predecessor KEKB. Background tracks, with vertices displaced along the...
Electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for signal selection in a wide variety of ATLAS physics analyses to study Standard Model processes and to search for new phenomena. Final states including leptons and photons had, for example, an important role in the discovery and measurement of the Higgs boson. Dedicated triggers are also used...
The first implementation of Machine Learning inside a Level 1 trigger system at the LHC is presented. The Endcap Muon Track Finder at CMS uses Boosted Decision Trees to infer the momentum of muons based on 25 variables. All combinations of variables represented by 2^30 distinct patterns are evaluated using regression BDTs, whose output is stored in 2 GB look-up tables. These BDTs take...
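As a hedged illustration of the general idea (and not the EMTF firmware or its 25-variable setup), the following Python sketch trains a regression BDT on a few quantized inputs and then precomputes its output for every possible input pattern into a flat look-up table; all names, sizes, and the toy target are hypothetical.

```python
# Illustrative sketch (not the EMTF implementation): train a regression BDT on
# quantized inputs, then precompute its output for every input pattern so that
# at run time the prediction is a single look-up-table read.
import itertools
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_bits_per_var = 4                       # each input quantized to 4 bits (toy choice)
n_vars = 3                               # toy example; the real system uses many more
levels = 2 ** n_bits_per_var

# Toy training set: integer-coded inputs and a momentum-like regression target.
X = rng.integers(0, levels, size=(20000, n_vars))
y = 1.0 / (1.0 + X.sum(axis=1))          # stand-in target, e.g. something like 1/pT

bdt = GradientBoostingRegressor(n_estimators=100, max_depth=3)
bdt.fit(X, y)

# Enumerate every possible input pattern and store the BDT prediction in a flat
# array addressed by the concatenated bit pattern, as a firmware LUT would be.
patterns = np.array(list(itertools.product(range(levels), repeat=n_vars)))
lut = bdt.predict(patterns)              # shape: (levels ** n_vars,)

def lut_address(pattern):
    """Map a tuple of quantized inputs to its LUT address (bit concatenation)."""
    addr = 0
    for p in pattern:
        addr = (addr << n_bits_per_var) | int(p)
    return addr

# "Run time": the prediction is one memory read, identical to the BDT output.
example = (3, 7, 12)
assert np.isclose(lut[lut_address(example)], bdt.predict([example])[0])
```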
The Liquid Argon Time Projection Chamber (LArTPC) is an exciting detector technology that is undergoing rapid development. Due to its high density, low diffusion, and excellent time and spatial resolutions, the LArTPC is particularly attractive for applications in neutrino physics and nucleon decay, and is chosen as the detector technology for the future Deep Underground Neutrino Experiment...
At the time when HEP computing needs were mainly fulfilled by mainframes, graphics solutions for event and detector visualization were necessarily hardware- as well as experiment-specific and impossible to use outside of the HEP community. A big move to commodity computing did not precipitate a corresponding move of graphics solutions to industry-standard hardware and software. In this...
The Jiangmen Underground Neutrino Observatory (JUNO) is a multi-purpose neutrino experiment designed to determine the neutrino mass hierarchy and precisely measure oscillation parameters. The experimental site is under a 286 m mountain, and the detector will be at a depth of 480 m. Twenty thousand tons of liquid scintillator (LS) are contained in a spherical container with a radius of 17.7 m as the central detector...
We introduce the first use of deep neural network-based generative modeling for high energy physics (HEP). Our novel Generative Adversarial Network (GAN) architecture is able to cope with the key challenges in HEP images, including sparsity and a large dynamic range. For example, our Location-Aware Generative Adversarial Network learns to produce realistic radiation patterns inside high energy...
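The following is a minimal, generic GAN training sketch in PyTorch, intended only to make the adversarial setup concrete; it is not the Location-Aware GAN architecture, and the toy 8x8 "images", layer sizes, and learning rates are arbitrary assumptions.

```python
# Minimal GAN sketch in PyTorch with toy 8x8 "calorimeter images"; a generic
# illustration of adversarial training, not the architecture described above.
import torch
import torch.nn as nn

latent_dim, img_pixels = 16, 8 * 8

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, img_pixels), nn.ReLU(),   # ReLU output: energies are non-negative
)
discriminator = nn.Sequential(
    nn.Linear(img_pixels, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),                        # raw score; the loss applies the sigmoid
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def training_step(real_images):
    """One adversarial update with a batch of real (toy) shower images."""
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, latent_dim))

    # Discriminator: push real images towards 1 and generated images towards 0.
    opt_d.zero_grad()
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator output 1 for generated images.
    opt_g.zero_grad()
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy "real" images: sparse random deposits, standing in for simulated showers.
real = torch.rand(32, img_pixels) * (torch.rand(32, img_pixels) > 0.9)
print(training_step(real))
```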
Tools such as GEANT can simulate volumetric energy deposition of particles down to certain energy and length scales. However, fine-grained effects such as material imperfections, low-energy charge diffusion, noise, and read-out can be difficult to model exactly and may lead to systematic differences between the simulation and the physical detector. In this work, we introduce a...
The CMS experiment is in the process of designing a completely new tracker for the high-luminosity phase of the LHC. The latest results on the expected tracking performance of CMS will be shown, as well as the latest developments exploiting the new outer tracker possibilities. In fact, in order to allow for a track trigger, the modules of the new outer tracker will produce stubs or vector hits...
There has been considerable recent activity applying deep convolutional neural nets (CNNs) to data from particle physics experiments. Current approaches on ATLAS/CMS have largely focussed on a subset of the calorimeter, and for identifying objects or particular particle types. We explore approaches that use the entire calorimeter, combined with track information, for directly conducting...
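A hedged sketch of one way such a combination could look (not the approach of the study above): a small convolutional branch over a calorimeter image is concatenated with a dense branch over track-level features before classification. All shapes and layer sizes are invented for illustration.

```python
# Sketch of combining a whole-calorimeter image with track-level information:
# a CNN branch for the image and a dense branch for track features, concatenated
# before classification. Shapes are hypothetical, not taken from any experiment.
import torch
import torch.nn as nn

class CaloTrackClassifier(nn.Module):
    def __init__(self, n_track_features=10, n_classes=2):
        super().__init__()
        self.calo_branch = nn.Sequential(            # 1-channel eta-phi energy map
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 32 features per event
        )
        self.track_branch = nn.Sequential(
            nn.Linear(n_track_features, 32), nn.ReLU(),
        )
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, calo_image, track_features):
        return self.head(torch.cat(
            [self.calo_branch(calo_image), self.track_branch(track_features)], dim=1))

model = CaloTrackClassifier()
calo = torch.rand(4, 1, 64, 64)     # batch of toy calorimeter images
tracks = torch.rand(4, 10)          # summary track variables per event
print(model(calo, tracks).shape)    # torch.Size([4, 2])
```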
Faced with physical and energy density limitations on clock speed, contemporary microprocessor designers have increasingly turned to on-chip parallelism for performance gains. Examples include the Intel Xeon Phi, GPGPUs, and similar technologies. Algorithms should accordingly be designed with ample amounts of fine-grained parallelism if they are to realize the full performance of the hardware....
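As a generic illustration of fine-grained, data-parallel design (not the implementation discussed above), the sketch below uses Numba's parallel loops to spread independent per-element work across cores and SIMD lanes.

```python
# Generic illustration of exposing fine-grained parallelism from Python using
# Numba's parallel loops: independent per-element work can be vectorized and
# distributed over threads automatically.
import numpy as np
from numba import njit, prange

@njit(parallel=True, fastmath=True)
def transverse_momentum(px, py):
    """Compute pT for every track; each iteration is independent, so prange lets
    Numba spread them across threads and SIMD lanes."""
    out = np.empty(px.shape[0])
    for i in prange(px.shape[0]):
        out[i] = np.sqrt(px[i] * px[i] + py[i] * py[i])
    return out

px = np.random.rand(1_000_000)
py = np.random.rand(1_000_000)
print(transverse_momentum(px, py)[:5])
```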
Many simultaneous proton-proton collisions occur in each bunch crossing at the Large Hadron Collider (LHC). However, most of the time only one of these collisions is interesting and the rest are a source of noise (pileup). Several recent pileup mitigation techniques are able to significantly reduce the impact of pileup on a wide set of interesting observables. Using state-of-the-art machine...
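To make the idea of machine-learned pileup mitigation concrete, here is a hedged sketch in which a regressor predicts a per-particle "hard-scatter fraction" that is used to reweight particle pT before building an observable; the features, the toy truth model, and the observable are placeholders, not those of any published technique.

```python
# Hedged sketch of a machine-learned, per-particle pileup correction: a regressor
# trained on simulated events predicts, for each particle, the fraction of its pT
# attributable to the hard scatter; observables are built from reweighted particles.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n_particles = 20000

# Toy per-particle features: pT, |eta|, a local-density proxy, charged/neutral flag.
X = np.column_stack([
    rng.exponential(5.0, n_particles),          # pT [GeV]
    rng.uniform(0.0, 4.0, n_particles),         # |eta|
    rng.uniform(0.0, 1.0, n_particles),         # local neutral-energy density proxy
    rng.integers(0, 2, n_particles),            # is_charged
])
# Toy truth: fraction of the particle's pT coming from the hard scatter.
w_true = np.clip(1.0 - 0.5 * X[:, 2] - 0.1 * X[:, 1] + 0.3 * X[:, 3]
                 + rng.normal(0, 0.05, n_particles), 0.0, 1.0)

reg = GradientBoostingRegressor(n_estimators=100, max_depth=3)
reg.fit(X, w_true)

# Apply in an "event": rescale each particle's pT by its predicted weight before
# summing into an observable (here simply the scalar pT sum).
w_pred = np.clip(reg.predict(X[:1000]), 0.0, 1.0)
print("raw HT:", np.sum(X[:1000, 0]), "corrected HT:", np.sum(w_pred * X[:1000, 0]))
```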
Deep learning for jet-tagging and jet calibration has recently been increasingly explored. For jet-flavor tagging CMS's most performant tagger for 2016 data (DeepCSV) was based on a deep neural network. The input was a set of standard tagging variables of pre-selected objects. For 2017 improved algorithms are implemented that start from particle candidates without much preselection, i.e. much...
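A minimal sketch of feeding loosely preselected particle candidates to a deep network (not the DeepCSV/DeepJet architecture): each jet's variable-length candidate list is zero-padded to a fixed size before entering a dense classifier; the candidate count, features, and layer sizes are assumptions.

```python
# Sketch of using particle candidates as network inputs: each jet's candidate
# list is truncated/zero-padded to a fixed size and flattened into one tensor.
import torch
import torch.nn as nn

MAX_CANDIDATES, N_FEATURES = 25, 6      # e.g. pT, eta, phi, impact parameters, ...

def pad_candidates(candidates):
    """candidates: (n_cand, N_FEATURES) tensor -> fixed (MAX_CANDIDATES, N_FEATURES)."""
    padded = torch.zeros(MAX_CANDIDATES, N_FEATURES)
    n = min(candidates.size(0), MAX_CANDIDATES)
    padded[:n] = candidates[:n]
    return padded

tagger = nn.Sequential(
    nn.Flatten(),
    nn.Linear(MAX_CANDIDATES * N_FEATURES, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 3),                   # e.g. b / c / light-flavour scores
)

# Two toy jets with different candidate multiplicities.
jets = [torch.rand(12, N_FEATURES), torch.rand(40, N_FEATURES)]
batch = torch.stack([pad_candidates(j) for j in jets])
print(torch.softmax(tagger(batch), dim=1))
```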
The separation of b-quark initiated jets from those coming from lighter quark flavours (b-tagging) is a fundamental tool for the ATLAS physics program at the CERN Large Hadron Collider. The most powerful b-tagging algorithms combine information from low-level taggers exploiting reconstructed track and vertex information using a multivariate classifier. The potential of modern Machine Learning...
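The combination step can be illustrated with a hedged toy example: synthetic stand-ins for low-level tagger scores are combined by a single multivariate classifier, which outperforms any individual score. Nothing here corresponds to the actual ATLAS taggers or their inputs.

```python
# Toy illustration of combining several low-level tagger outputs with one
# multivariate classifier. The three "low-level" scores below are synthetic
# placeholders, not real impact-parameter / secondary-vertex tagger outputs.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n_jets = 20000
is_b = rng.integers(0, 2, n_jets)

# Toy low-level discriminants, each mildly correlated with the jet flavour.
low_level = np.column_stack([
    is_b * 0.8 + rng.normal(0, 0.6, n_jets),   # "impact-parameter" score
    is_b * 0.5 + rng.normal(0, 0.7, n_jets),   # "secondary-vertex" score
    is_b * 0.3 + rng.normal(0, 0.8, n_jets),   # "soft-lepton" score
])
X_train, X_test, y_train, y_test = train_test_split(low_level, is_b, test_size=0.3)

combiner = GradientBoostingClassifier(n_estimators=200, max_depth=3)
combiner.fit(X_train, y_train)
print("combined AUC:", roc_auc_score(y_test, combiner.predict_proba(X_test)[:, 1]))
for i in range(3):
    print(f"single-score AUC {i}:", roc_auc_score(y_test, X_test[:, i]))
```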
Charged particle reconstruction in dense environments, such as the detectors of the High Luminosity Large Hadron Collider (HL-LHC) is a challenging pattern recognition problem. Traditional tracking algorithms, such as the combinatorial Kalman Filter, have been used with great success in HEP experiments for years. However, these state-of-the-art techniques are inherently sequential and scale...
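For reference, the sequential baseline named above can be made concrete with a minimal linear Kalman filter predict/update step; the two-dimensional constant-velocity state and noise values are toy choices, not a real track model.

```python
# Minimal linear Kalman filter predict/update in NumPy, illustrating the
# inherently sequential structure of the baseline (each update depends on the
# previous state). The state model and noise values are toy choices.
import numpy as np

F = np.array([[1.0, 1.0],     # state transition: position += velocity
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])    # we only measure the position
Q = 1e-4 * np.eye(2)          # process noise
R = np.array([[0.1]])         # measurement noise

def kalman_step(x, P, z):
    """One predict + update, given state x, covariance P, and measurement z."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Filter a short, noisy sequence of "hits" along a straight track.
x, P = np.zeros(2), np.eye(2)
for hit in [0.1, 1.05, 2.2, 2.9, 4.1]:
    x, P = kalman_step(x, P, np.array([hit]))
print("fitted position/velocity:", x)
```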
An essential part of new physics searches at the Large Hadron Collider at CERN involves event classification, or distinguishing signal decays from potentially many background sources. Traditional techniques have relied on reconstructing particle candidates and their physical attributes from raw sensor data. However, such reconstructed data are the result of a potentially lossy process of...
Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was among the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems for distributed data processing, collectively called "Big Data" technologies, have emerged from industry and open source projects to support...
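As a hedged illustration of the declarative, distributed style such toolkits offer (using PySpark here purely as an example, with a hypothetical flat ntuple-like schema):

```python
# Toy illustration of the declarative, distributed style offered by "Big Data"
# toolkits, here with PySpark. The flat (run, event, muon_pt) columns are a
# hypothetical stand-in for an ntuple-like dataset; nothing here reflects a
# specific experiment's integration of these tools.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("toy-analysis").getOrCreate()

events = spark.createDataFrame(
    [(1, 1, 25.3), (1, 2, 47.1), (1, 3, 8.2), (2, 1, 61.0), (2, 2, 33.5)],
    schema=["run", "event", "muon_pt"],
)

# A selection plus aggregation expressed declaratively; Spark decides how to
# parallelize and distribute the work.
result = (events
          .filter(F.col("muon_pt") > 20.0)
          .groupBy("run")
          .agg(F.count(F.lit(1)).alias("n_selected"),
               F.avg("muon_pt").alias("mean_pt")))
result.show()
```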
The GooFit package provides physicists a simple, familiar syntax for manipulating probability density functions and performing fits, but is highly optimized for data analysis on NVIDIA GPUs and multithreaded CPU backends. GooFit is being updated to version 2.0, bringing a host of new features. A completely revamped and redesigned build system makes GooFit easier to install, develop with, and...
There are numerous approaches to building analysis applications across the high-energy physics community. Among them are Python-based, or at least Python-driven, analysis workflows. We aim to ease the adoption of a Python-based analysis toolkit by making it easier for non-expert users to gain access to Python tools for scientific analysis. Experimental software distributions and individual...
Geant4 is the leading detector simulation toolkit used in high energy physics to design detectors and to optimize calibration and reconstruction software. It employs a set of carefully validated physics models to simulate interactions of particles with matter across a wide range of interaction energies. These models, especially the hadronic ones, rely largely on directly measured...
ROOT https://root.cern is evolving along several new paths. At the same time it is reconsidering existing parts. This presentation will try to predict where ROOT will be in three years from now: the main themes of development and where we are already now, the big open questions as well as some of the questions that we didn't even ask yet. The oral presentation will cover the new graphics and...
The bright future of particle physics at the Energy and Intensity frontiers poses exciting challenges to the scientific software community. The traditional strategies for processing and analysing data are evolving in order to cope with the ever increasing complexity and size of the datasets, and in order to (i)...
Many machine learning algorithms produce computationally complex models, and further improvements in the quality of such models usually come at the cost of slower application times. However, these high-quality models are often needed under conditions of limited resources (memory or CPU time). This article discusses how to trade the quality of the model for the speed of its...
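One generic way to make this trade-off concrete (not necessarily the methods of the article) is to truncate a boosted model to fewer trees at application time, as in the sketch below.

```python
# Generic illustration of the quality-versus-speed trade-off for a boosted model:
# evaluating fewer trees is faster but less accurate. This only makes the
# trade-off concrete; it does not reproduce the article's techniques.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(n_estimators=300, max_depth=3).fit(X_train, y_train)

# staged_predict_proba yields the prediction after 1, 2, ..., N trees, so the same
# trained model can be truncated to any tree budget at application time.
for n_trees, proba in enumerate(model.staged_predict_proba(X_test), start=1):
    if n_trees in (10, 50, 300):
        print(n_trees, "trees, AUC =", round(roc_auc_score(y_test, proba[:, 1]), 4))
```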
The Belle II experiment is expected to start taking data in early 2018. Precision measurements of rare decays are a key part of the Belle II physics program and machine learning algorithms have played an important role in the measurement of small signals in high energy physics over the past several years. The authors report on the application of deep learning to the analysis of the B to K*...
Liquid argon time projection chambers (LArTPCs) are an innovative technology used in neutrino physics measurements that can also be utilized in establishing limits on several partial lifetimes for proton and neutron decay. Current analyses suffer from low efficiencies and purities that arise from the misidentification of nucleon decay final states as background processes and vice-versa....
Starting with Run II, successive development projects for the Large Hadron Collider will steadily increase the nominal luminosity, with the ultimate goal of reaching a peak luminosity of $5 \cdot 10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ for the ATLAS and CMS experiments in the High Luminosity LHC (HL-LHC) upgrade. This rise in luminosity will directly result in an increased number of simultaneous proton...
Reconstruction and identification in the calorimeters of modern High Energy Physics experiments is a complicated task. Solutions are usually driven by a priori knowledge about the expected properties of reconstructed objects. Such an approach is also used to distinguish single photons in the electromagnetic calorimeter of the LHCb detector at the LHC from overlapping photons produced from high momentum...
MicroBooNE is a liquid argon time projection chamber (LArTPC) neutrino experiment that is currently running in the Booster Neutrino Beam at Fermilab. LArTPC technology allows for high-resolution, three-dimensional representations of neutrino interactions. A wide variety of software tools for automated reconstruction and selection of particle tracks in LArTPCs are actively being developed...
Latest developments in many research fields indicate that deep learning methods have the potential to significantly improve physics analyses. They not only enhance the performance of existing algorithms but also pave the way for new measurement techniques that are not possible with conventional methods. As the computation is highly resource-intensive, both dedicated hardware and software are...
By colliding protons and examining the particles emitted from the collisions, the Large Hadron Collider aims to study the interactions of quarks and gluons at the highest energies accessible in a controlled experimental way. In such collisions, the types of interactions that occur may extend beyond those encompassed by the Standard Model of particle physics. Such interactions typically occur...
The ALPHA experiment at CERN is designed to produce, trap and study antihydrogen, which is the antimatter counterpart of the hydrogen atom. Since hydrogen is one of the best-studied physical systems, both theoretically and experimentally, experiments on antihydrogen permit a precise direct comparison between matter and antimatter. Our basic technique consists of driving an antihydrogen...
We study the ability of different deep neural network architectures to learn various relativistic invariants and other commonly-used variables, such as the transverse momentum of a system of particles, from the four-vectors of objects in an event. This information can help guide the optimal design of networks for solving regression problems, such as trying to infer the masses of unstable...
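A hedged sketch of this kind of study: four-vectors are generated, an invariant mass and the system pT are computed analytically as regression targets, and a small network is fitted to the raw components; the network size and training setup are arbitrary choices.

```python
# Sketch of regressing a relativistic invariant from raw four-vector components:
# build toy four-vectors, compute the two-particle invariant mass (and system pT)
# analytically, and check how well a small network can learn it.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_events = 30000

# Two massless particles per event: (E, px, py, pz) with E = |p|.
p3 = rng.normal(0.0, 20.0, size=(n_events, 2, 3))
E = np.linalg.norm(p3, axis=2, keepdims=True)
fourvec = np.concatenate([E, p3], axis=2)               # shape (n_events, 2, 4)

# Targets computed analytically from the four-vectors.
total = fourvec.sum(axis=1)                              # (E, px, py, pz) of the pair
m_inv = np.sqrt(np.clip(total[:, 0]**2 - (total[:, 1:]**2).sum(axis=1), 0, None))
pt = np.hypot(total[:, 1], total[:, 2])                  # alternative regression target

X = fourvec.reshape(n_events, 8)                         # raw components as inputs
X_train, X_test, y_train, y_test = train_test_split(X, m_inv, random_state=0)

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200)
mlp.fit(X_train, y_train)
print("relative error on m_inv:",
      np.mean(np.abs(mlp.predict(X_test) - y_test) / np.maximum(y_test, 1e-6)))
```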
The LHC data analysis software used in order to derive and publish experimental results is an important asset that is necessary to preserve in order to fully exploit the scientific potential of a given measurement. Among others, important use cases of analysis preservation are the reproducibility of the original results and the reusability of the analysis procedure in the context of new...
Deep Convolutional Neural Networks (CNNs) have been widely used in the field of computer vision to solve complex problems in image recognition and analysis. Recently, we have applied a Deep Convolutional Visual Network (CVN), to identify neutrino events in the NOvA experiment. NOvA is a long baseline neutrino experiment whose main goal is the measurement of neutrino oscillations. It relies on...