We present a customized neural network architecture for both slim and fat jet tagging. It is based on the idea of keeping the concept of physics objects, such as particle-flow candidates, as a core element of the network architecture. The deep learning algorithm works for most of the common jet classes, i.e. b, c, uds and gluon jets for slim jets and W, Z, H, QCD and top classes for fat jets. The...
At the HL-LHC, the seven-fold increase in multiplicity with respect to 2018 conditions poses a severe challenge to ATLAS and CMS tracking. Both experiments are revamping their tracking detectors and optimizing their software. But are there new algorithms developed outside HEP that could be brought in: for example MCTS, LSTM, clustering, CNN, geometric deep learning and more?
We organize on...
The collider will steadily increase its nominal luminosity, with the ultimate goal of reaching a peak luminosity of $5 \times 10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ for the ATLAS and CMS experiments, planned for the High Luminosity LHC (HL-LHC) upgrade. This rise in luminosity will directly result in an increased number of simultaneous proton collisions (pileup), up to 200, which will pose new challenges for the CMS...
This is a merger of three individual contributions:
- https://indico.cern.ch/event/668017/contributions/2947026/
- https://indico.cern.ch/event/668017/contributions/2947027/
- https://indico.cern.ch/event/668017/contributions/2947028/
Simulating detector response for Monte Carlo-generated collisions is a key component of every high-energy physics experiment. The methods currently used for this purpose provide high-fidelity results, but their precision comes at the price of high computational cost. In this work, we present a proof-of-concept solution for simulating the responses of detector clusters to particle...
In this contribution, we present a method for tuning perturbative parameters in Monte Carlo simulation using a classifier loss in high dimensions. We use an LSTM trained on the radiation pattern inside jets to learn the parameters of the final state shower in the Pythia Monte Carlo generator. This represents a step forward compared to unidimensional distributional template-matching methods.
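The core idea of classifier-based tuning can be illustrated with a toy: a classifier is trained to separate generated samples from reference data, and the generator parameter is chosen where the classifier loses discriminating power. The sketch below is a minimal, hypothetical stand-in (a 1D Gaussian observable and a simple threshold classifier, not an LSTM on jet radiation patterns); the names `sample` and `classifier_score` are illustrative, not from the original work.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(theta, n=20000):
    # toy stand-in for a shower-parameter-dependent observable
    return rng.normal(theta, 1.0, n)

def classifier_score(a, b):
    # threshold classifier at the midpoint of the two sample means;
    # accuracy drops to ~0.5 when the samples become indistinguishable
    t = 0.5 * (a.mean() + b.mean())
    acc = 0.5 * ((a > t).mean() + (b <= t).mean())
    return max(acc, 1.0 - acc)

data = sample(1.3)                           # "data" with unknown parameter
grid = np.linspace(0.0, 2.5, 26)             # scan of candidate parameters
scores = [classifier_score(sample(t), data) for t in grid]
best = grid[int(np.argmin(scores))]          # tuned parameter estimate
print(best)
```

In the actual method the threshold classifier is replaced by a high-dimensional network (here, an LSTM over the in-jet radiation pattern), which is what lifts the approach beyond one-dimensional template matching.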
Machine Learning techniques have been used in different applications by the HEP community: in this talk, we discuss the case of detector simulation. The number of simulated events needed in the future by the LHC experiments and their High Luminosity upgrades is increasing dramatically, requiring new fast-simulation solutions. We will describe an R&D activity, aimed at providing a...
Applications of machine learning tools to problems of physical interest are often criticized for producing sensitivity at the expense of transparency. In this talk, I explore a procedure for identifying combinations of variables -- aided by physical intuition -- that can discriminate signal from background. Weights are introduced to smooth away the features in a given variable or set of variables. New networks...
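The reweighting step can be sketched concretely: per-event weights proportional to the inverse of a variable's histogram density make that variable's distribution flat, so a network retrained on the weighted sample can no longer exploit it. The snippet below is a minimal numpy illustration on a hypothetical exponential variable; the binning and variable are assumptions, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(1.0, 80000)          # hypothetical discriminating variable
x = x[x < 4.0]                           # restrict to the histogram range

# per-event weights = inverse of the variable's estimated density,
# so the weighted distribution of x becomes flat
hist, edges = np.histogram(x, bins=40, range=(0.0, 4.0), density=True)
idx = np.minimum(np.digitize(x, edges) - 1, 39)
w = 1.0 / hist[idx]

# cross-check: the weighted histogram is uniform (density 1/4 over [0, 4))
flat, _ = np.histogram(x, bins=40, range=(0.0, 4.0), weights=w, density=True)
print(flat.min(), flat.max())
```

Comparing classifier performance before and after this flattening quantifies how much discrimination the smoothed-away variable carried.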
The use of neural networks in physics analyses poses new challenges for the estimation of systematic uncertainties. Since the key to a proper estimation of uncertainties is the precise understanding of the algorithm, novel methods for the detailed study of the trained neural network are valuable.
This talk presents an approach to identify those characteristics of the neural network inputs that...
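One simple way to characterise which inputs a trained network actually uses is to rank them by the magnitude of the network's input gradients over a sample. The sketch below uses a hypothetical fixed two-layer network with numerical derivatives; the network, its weights, and the deliberately disconnected third input are all illustrative assumptions, not the method's actual model.

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical "trained" two-layer network f(x); weights fixed at random,
# with input 2 deliberately disconnected so the network cannot use it
W1 = rng.normal(size=(16, 3))
W1[:, 2] = 0.0
b1 = rng.normal(size=16)
W2 = rng.normal(size=16)

def f(x):
    return np.tanh(W1 @ x + b1) @ W2

# rank inputs by mean absolute numerical gradient over a sample:
# inputs the network barely depends on receive small scores
X = rng.normal(size=(200, 3))
eps = 1e-5
scores = np.zeros(3)
for x in X:
    for i in range(3):
        e = np.zeros(3); e[i] = eps
        scores[i] += abs(f(x + e) - f(x - e)) / (2 * eps)
scores /= len(X)
print(scores)
```

The disconnected input gets a score of exactly zero, while the used inputs score well above it; on a real tagger the same ranking highlights which input characteristics drive the response and therefore deserve dedicated uncertainty studies.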
The aim of the studies presented is to improve the performance of jet flavour tagging on real data while still exploiting a simulated dataset for the learning of the main classification task. In the presentation we explore “off the shelf” domain adaptation techniques as well as customised additions to them. The latter improves the calibration of the tagger, potentially leading to smaller...
Particle identification (PID) plays a crucial role in LHCb analyses. Combining information from LHCb subdetectors allows one to distinguish between various species of long-lived charged and neutral particles. PID performance directly affects the sensitivity of most LHCb measurements. Advanced multivariate approaches are used at LHCb to obtain the best PID performance and control systematic...
In this presentation we will detail the evolution of the DeepJet python environment. Initially envisaged to support the development of the namesake jet flavour tagger in CMS, DeepJet has grown to encompass multiple purposes within the collaboration. The presentation will describe the major features the environment sports: simple out-of-memory training with a multi-threaded approach to maximally...
The analysis of top-quark pair associated Higgs boson production enables a direct measurement of the top-Higgs Yukawa coupling. In ttH (H→bb) analyses, multiple event categories are commonly used in order to simultaneously constrain signal and background contributions during a fit to data. A typical approach is to categorize events according to both their jet and b-tag multiplicities. The...
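The categorization step described above amounts to a simple mapping from (jet multiplicity, b-tag multiplicity) to an analysis region. The function below is a hypothetical sketch with illustrative thresholds; real ttH(bb) analyses define their own category boundaries.

```python
def category(n_jets: int, n_btags: int) -> str:
    """Assign an event to a (jet, b-tag) multiplicity category.

    Thresholds here are illustrative: at least 4 jets and 2 b-tags,
    with inclusive overflow bins at >=6 jets and >=4 b-tags.
    """
    if n_jets < 4 or n_btags < 2:
        return "uncategorized"           # below analysis acceptance
    j = f"{n_jets}j" if n_jets < 6 else ">=6j"
    b = f"{n_btags}b" if n_btags < 4 else ">=4b"
    return f"{j}_{b}"

print(category(5, 3))    # -> "5j_3b"
print(category(7, 5))    # -> ">=6j_>=4b"
```

Fitting signal and background simultaneously across all such categories is what constrains the backgrounds in situ during the fit to data.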
High energy collider experiments produce several petabytes of data every year. Given the magnitude and complexity of the raw data, machine learning algorithms provide the best available platform to transform and analyse these data, yielding valuable insights into the Standard Model and theories beyond it. These collider experiments produce both quark and gluon initiated...
Vidyo contribution
Based on the natural tree-like structure of sequential jet clustering, recursive neural networks (RecNNs) embed the jet clustering history recursively, as in natural language processing. We explore the performance of RecNNs in quark/gluon discrimination. The results show that RecNNs outperform the baseline BDT by a few percent in gluon rejection at the working point of...
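The recursive embedding itself is compact: leaves carry particle features, and each internal clustering node combines its children's embeddings through a shared transformation. The sketch below is a minimal, untrained numpy version with hypothetical dimensions and randomly initialised weights, meant only to show the recursion over a clustering tree.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 8                                          # embedding dimension (assumed)
W = rng.normal(scale=0.1, size=(D, 2 * D))     # child-combination weights
U = rng.normal(scale=0.1, size=(D, 4))         # leaf projection: 4 features

def embed(node):
    """Recursively embed a clustering tree: leaves are particle feature
    vectors, internal nodes are (left, right) child pairs."""
    if isinstance(node, tuple):
        h = np.concatenate([embed(node[0]), embed(node[1])])
        return np.tanh(W @ h)
    return np.tanh(U @ np.asarray(node, dtype=float))

# toy jet: ((p1, p2), p3) mimics a sequential-clustering history
jet = (([1.0, 0.2, -0.1, 0.5], [0.8, -0.3, 0.2, 0.4]), [0.1, 0.0, 0.1, 0.2])
h_jet = embed(jet)
print(h_jet.shape)
```

In a real tagger, `h_jet` would feed a classification head, and `W` and `U` would be trained end-to-end through the recursion by backpropagation.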
Complex machine learning tools, such as deep neural networks and gradient boosting algorithms, are increasingly being used to construct powerful discriminative features for High Energy Physics analyses. These methods are typically trained with simulated or auxiliary data samples by optimising some classification or regression surrogate objective. The learned feature representations are then...
Vidyo contribution
We present a technique to perform classification of decays that exhibit decay chains involving a variable number of particles, which include a broad class of $B$ meson decays sensitive to new physics. The utility of such decays as a probe of the Standard Model is dependent upon accurate determination of the decay rate, which is challenged by the combinatorial background...
Data collection rates in high energy physics (HEP), particularly at the Large Hadron Collider (LHC), are a continuing challenge and require large amounts of computing power to handle. For example, at LHCb an event rate of 1 MHz is processed in a software-based trigger. The purpose of this trigger is to reduce the output data rate to manageable levels, which amounts to a reduction from 60...
The increased instantaneous luminosity at the HL-LHC will raise the computing requirements for event reconstruction and analysis for the current LHC-based experiments, hence limiting the resources available for simulating particles traversing matter. Improvements to the performance of state-of-the-art simulation frameworks such as Geant4 are proceeding, but are unlikely to fully compensate for...
Developing and building an analysis in high energy particle physics requires a large amount of simulated events. Simulations at the LHC are usually complex and computationally intensive due to sophisticated detector architectures. In this context, Generative Adversarial Networks (GANs) have recently caught a wide interest. GANs can learn to generate complex data distributions and produce...
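The adversarial setup underlying a GAN-based fast simulation can be stated in a few lines: a discriminator is trained to separate detector samples from generated ones, while the generator is trained to fool it. The snippet below evaluates the two standard losses for one step on a hypothetical 1D toy (a linear generator and logistic discriminator); it is a sketch of the objectives, not a full training loop or a realistic detector model.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# toy "detector response": real samples from the target distribution
real = rng.normal(2.0, 0.5, 1024)

# hypothetical linear generator mapping latent noise z to samples
a, b = 1.0, 0.0
z = rng.normal(0.0, 1.0, 1024)
fake = a * z + b

# hypothetical logistic discriminator D(x) = sigmoid(w*x + c)
w, c = 1.0, -1.0
d_real = sigmoid(w * real + c)
d_fake = sigmoid(w * fake + c)

# standard GAN losses: D maximises log D(x) + log(1 - D(G(z))),
# G minimises -log D(G(z)) (the non-saturating form)
d_loss = -(np.log(d_real).mean() + np.log(1.0 - d_fake).mean())
g_loss = -np.log(d_fake).mean()
print(d_loss, g_loss)
```

Training alternates gradient steps on `d_loss` (for `w`, `c`) and `g_loss` (for `a`, `b`); once converged, sampling the generator replaces the expensive full simulation.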
The LHCb experiment at CERN operates a high precision and robust tracking system to reach its physics goals, including precise measurements of CP-violation phenomena in the heavy flavour quark sector and searches for New Physics beyond the Standard Model. Since Run2, the experiment has put in place a new trigger strategy with a real-time reconstruction, alignment and calibration, imposing...
Machine learning has been an attractive topic in the high-energy physics field for many years; for example, machine learning algorithms have been devoted to the reconstruction of particle tracks and jets in high energy physics experiments. EOS is an open-source parallel distributed file system. It has been generally used in large-scale cluster computing for both physics and user use cases at IHEP, like...
Leveraging our previous work on developing DNN-based classification models for Higgs events [1], we turn to CNN-based classification models for muon events. Using Intel Knights Landing (KNL) processors, we present performance metrics for training convolutional neural networks (CNNs) on multiple KNL computing nodes for the task of muon identification (i.e. "high $p_T$" or "low $p_T$"). This work is...