There has been significant interest and development in the use of graph neural networks (GNNs) for jet tagging applications. These generally provide better accuracy than CNN-based and energy-flow algorithms by exploiting a range of GNN mechanisms, such as dynamic graph construction, equivariance, attention, and large parameterizations. In this work, we present the first apples-to-apples exploration...
Particle collider experiments generate huge volumes of complex data, and a mix of experience and tenacity is usually required to understand it at the detector and reconstruction level. Event displays provide a useful visual representation of both raw and reconstructed data that can be used to accelerate this learning process towards physics results. They are also used to verify expected...
We use convolutional neural networks (CNNs) to analyze monoscopic and stereoscopic images of extensive air showers registered by Cherenkov telescopes of the TAIGA experiment. The networks are trained and evaluated on Monte-Carlo simulated images to identify the type of the primary particle and to estimate the energy of the gamma rays. We compare the performance of the networks trained on...
The inner tracking system of the CMS experiment, which comprises Silicon Pixel and Silicon Strip detectors, is designed to provide a precise measurement of the momentum of charged particles and to reconstruct the primary and secondary vertices. The movements of the different substructures of the tracker detectors, driven by the operating conditions during data taking, make it necessary to regularly...
The reconstruction of electrons and photons in CMS depends on topological clustering of the energy deposited by an incident particle in different crystals of the electromagnetic calorimeter (ECAL).
These clusters are formed by aggregating neighbouring crystals according to the expected topology of an electromagnetic shower in the ECAL. The presence of upstream material (beampipe, tracker...
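As an illustration of the aggregation idea (not the actual CMS ECAL algorithm, whose topological criteria are more refined), a toy flood-fill clustering over a 2D grid of crystal energies might look like the following; the seed and cell thresholds are hypothetical:

```python
import numpy as np

def cluster_crystals(energies, seed_thr=1.0, cell_thr=0.1):
    """Toy topological clustering on a 2D grid of crystal energies.

    Crystals above seed_thr start clusters; neighbouring crystals above
    cell_thr are aggregated by flood fill. Illustrative only -- the real
    ECAL clustering follows expected shower topologies, not plain flood fill.
    """
    visited = np.zeros_like(energies, dtype=bool)
    clusters = []
    # Process seeds in decreasing energy order
    seeds = sorted(map(tuple, np.argwhere(energies > seed_thr)),
                   key=lambda ij: -energies[ij])
    for seed in seeds:
        if visited[seed]:
            continue
        stack, members = [seed], []
        while stack:
            ij = stack.pop()
            if visited[ij] or energies[ij] < cell_thr:
                continue
            visited[ij] = True
            members.append(ij)
            i, j = ij
            # Aggregate the four direct neighbours, respecting grid bounds
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < energies.shape[0] and 0 <= nj < energies.shape[1]:
                    stack.append((ni, nj))
        clusters.append(members)
    return clusters
```

Two well-separated deposits on the grid then yield two clusters, with sub-threshold crystals left unassigned.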
The higher LHC luminosity expected in Run 3 (2022+) and the consequently larger number of simultaneous proton-proton collisions (pileup) per event pose significant challenges for CMS event reconstruction. This is particularly important for event filtering at the CMS High Level Trigger (HLT), where complex reconstruction algorithms must be executed within a strict time budget.
This problem...
In recent years, a correspondence has been established between the appropriate asymptotics of deep neural networks (DNNs), including convolutional ones (CNNs), and the machine learning methods based on Gaussian processes (GPs). The ultimate goal of establishing such interrelations is to achieve a better theoretical understanding of various methods of machine learning (ML) and their...
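The correspondence can be checked numerically in the simplest setting: for a single wide ReLU hidden layer with Gaussian weights, the covariance of the network outputs over random initialisations matches the analytic arc-cosine (order-1) NNGP kernel. The widths, variances, and inputs below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

d, width, n_nets = 3, 500, 5000
sigma_w, sigma_v = 1.0, 1.0

x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([0.6, 0.8, 0.0])

# Empirical covariance of the scalar outputs over many random networks
outs = np.empty((n_nets, 2))
for k in range(n_nets):
    W = rng.normal(0.0, sigma_w / np.sqrt(d), size=(width, d))
    v = rng.normal(0.0, sigma_v / np.sqrt(width), size=width)
    outs[k] = [v @ relu(W @ x1), v @ relu(W @ x2)]
emp_cov = outs.T @ outs / n_nets  # outputs are zero-mean by construction

# Analytic NNGP kernel (arc-cosine, order 1) for one ReLU hidden layer
def nngp_kernel(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    cos_t = np.clip(a @ b / (na * nb), -1.0, 1.0)
    t = np.arccos(cos_t)
    return (sigma_w**2 * sigma_v**2 / (2 * np.pi * d)) * na * nb * (
        np.sin(t) + (np.pi - t) * cos_t)

ana = np.array([[nngp_kernel(x1, x1), nngp_kernel(x1, x2)],
                [nngp_kernel(x2, x1), nngp_kernel(x2, x2)]])
```

For one hidden layer the match is exact in expectation at any width, so the residual difference is pure Monte Carlo noise over the ensemble of networks.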
As the CMS detector is getting ready for data-taking in 2021 and beyond, the detector is expected to deliver an ever-increasing amount of data. To ensure that the data recorded from the detector has the best quality possible for physics analyses, the CMS Collaboration has dedicated Data Quality Monitoring (DQM) and Data Certification (DC) working groups. These working groups are made of human...
The Lipschitz constant of the map between the input and output space represented by a neural network is a natural metric by which the robustness of the model can be measured. We present a new method to constrain the Lipschitz constant of dense deep learning models that can also be generalized to other architectures. The method relies on a simple weight normalization scheme during training...
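One standard way to bound the Lipschitz constant of a dense network with 1-Lipschitz activations is to control the product of the layers' spectral norms. The sketch below rescales each weight matrix to an equal per-layer budget; it illustrates the general idea of such a weight normalization scheme, not necessarily the exact method of the paper:

```python
import numpy as np

def constrain_lipschitz(weights, lip_budget=1.0):
    """Rescale dense-layer weight matrices so that the product of their
    spectral norms (an upper bound on the network's Lipschitz constant
    for 1-Lipschitz activations such as ReLU) does not exceed lip_budget.

    Illustrative sketch: each of the L layers gets an equal per-layer
    budget lip_budget**(1/L); applied after every training update.
    """
    per_layer = lip_budget ** (1.0 / len(weights))
    out = []
    for W in weights:
        sigma = np.linalg.norm(W, ord=2)      # largest singular value
        scale = min(1.0, per_layer / sigma)   # only shrink, never inflate
        out.append(W * scale)
    return out
```

Because each layer's norm is clipped rather than fixed, layers already inside the budget are left untouched, which keeps the constraint from interfering with training more than necessary.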
We present the first application of scalable deep learning with a high-performance computer (HPC) to physics analysis using CMS simulation data of 13 TeV LHC proton-proton collisions. We build a convolutional neural network (CNN) model which takes low-level information as images considering the geometry of the CMS detector. The CNN model is implemented to discriminate R-parity violating...
The advent of deep learning has yielded powerful tools to automatically compute gradients of computations. This is because “training a neural network” equates to iteratively updating its parameters using gradient descent to find the minimum of a loss function. Deep learning is then a subset of a broader paradigm: a workflow with free parameters that is end-to-end optimisable, provided one can...
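The core mechanic described above, automatically computed gradients driving iterative parameter updates, can be sketched with a minimal forward-mode autodiff (dual numbers) and plain gradient descent; the Dual class, loss, and learning rate here are illustrative and not tied to any particular framework:

```python
class Dual:
    """Forward-mode automatic differentiation with dual numbers:
    carry (value, derivative) through every arithmetic operation."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def grad(f, x):
    """Derivative of f at x, obtained by seeding the dual part with 1."""
    return f(Dual(x, 1.0)).dot

# End-to-end optimisation of a free parameter w for loss(w) = (3w - 6)^2
loss = lambda w: (w * 3 - 6) * (w * 3 - 6)
w = 0.0
for _ in range(100):
    w -= 0.05 * grad(loss, w)   # gradient-descent update
# w has converged to the minimiser w = 2
```

The same loop structure applies to any differentiable workflow: replace the toy loss with a physics objective and the scalar parameter with the full parameter set.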
A major challenge of the high-luminosity upgrade of the CERN LHC is to single out the primary interaction vertex of the hard scattering process from the expected 200 pileup interactions that will occur in each bunch crossing. To meet this challenge, the upgrade of the CMS experiment comprises a complete replacement of the silicon tracker that will allow for the first time to perform the...
The Large Hadron Collider (LHC) will enter a new era for data acquisition by 2026 within the High-Luminosity Large Hadron Collider (HL-LHC) program, where the LHC will increase the number of proton-proton collisions to unprecedented levels. This increase will imply a factor of 10 in luminosity compared to the current values, with an impact on the way the experimental data is stored and...
The wide angular distribution of the incoming cosmic-ray muons, in terms of either the incident angle or the azimuthal angle, is a challenging trait that leads to a drastic particle loss in the course of parametric computations from the GEANT4 simulations, since the tomographic configurations as well as the target geometries also influence the processable number of the detected particles apart from the...
In this talk we present a novel method to reconstruct the kinematics of neutral-current deep inelastic scattering (DIS) using a deep neural network (DNN). Unlike traditional methods, it exploits the full kinematic information of both the scattered electron and the hadronic final state, and it accounts for QED radiation by identifying events with radiated photons and event-level momentum...
The triggerless readout of data corresponding to a 30 MHz event rate at the upgraded LHCb experiment together with a software-only High Level Trigger will enable the highest possible flexibility for trigger selections. During the first stage (HLT1), track reconstruction and vertex fitting for charged particles enable a broad and efficient selection process to reduce the event rate to 1 MHz....
The emerging applications of cosmic-ray muon tomography lead to a significant rise in the utilization of cosmic particle generators, e.g. CRY, CORSIKA, or CMSCGEN, where fundamental parameters such as the energy spectrum and the angular distribution of the generated muons are represented in continuous form, routinely governed implicitly by probability density functions over...
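Sampling from such continuous, PDF-governed distributions is typically done by inverse-transform sampling. A hedged sketch for the commonly assumed sea-level zenith-angle distribution p(θ) ∝ cos²θ follows; the grid size and distribution choice are illustrative, not the parameters of CRY, CORSIKA, or CMSCGEN:

```python
import numpy as np

def sample_zenith(n, rng=None, n_grid=10_000):
    """Draw zenith angles theta in [0, pi/2] from p(theta) ∝ cos^2(theta),
    a common sea-level approximation for cosmic-ray muons, using numerical
    inverse-transform sampling (tabulated CDF inverted by interpolation)."""
    rng = rng or np.random.default_rng()
    theta = np.linspace(0.0, np.pi / 2, n_grid)
    pdf = np.cos(theta) ** 2
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]                      # normalise the CDF to [0, 1]
    u = rng.random(n)                   # uniform variates
    return np.interp(u, cdf, theta)     # map uniforms through the inverse CDF
```

The same tabulate-and-invert recipe applies to any one-dimensional spectrum given only its density in continuous form, which is exactly the situation the generators above present.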