State-of-the-art prediction accuracy in jet tagging is currently achieved by modern geometric deep learning architectures that incorporate Lorentz-group invariance. The resulting parameterizations are computationally costly and complex, and therefore lack interpretability. To tackle this issue, we propose Boost Invariant Polynomials (BIPs), a framework to construct highly efficient...
Given the high computational cost of classical simulations, machine-learned generative models can be extremely useful in particle physics and elsewhere. They become especially attractive when a surrogate model can learn the underlying distribution efficiently enough that a generated sample outperforms a training sample of limited size. This kind of GANplification has been observed for...
We propose Classifying Anomalies THrough Outer Density Estimation (CATHODE): a novel, completely data-driven and model-agnostic approach to searching for resonant new physics with anomalous jet substructure at the LHC.
By training a conditional normalizing flow on kinematic and substructure variables in a sideband region, we obtain an approximation of their probability densities. We then...
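A minimal sketch of this sideband-train, signal-region-sample idea, assuming a toy two-feature setup and a small RealNVP-style conditional flow in PyTorch (the features, masses, and network sizes are illustrative assumptions, not the analysis code):

```python
import torch
import torch.nn as nn

class CondAffineCoupling(nn.Module):
    """Affine coupling layer whose scale/shift also depend on the condition m."""
    def __init__(self, dim=2, cond_dim=1, hidden=64, flip=False):
        super().__init__()
        assert dim % 2 == 0
        self.half, self.flip = dim // 2, flip
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * self.half),
        )

    def forward(self, x, m):                              # data -> latent
        a, b = x.chunk(2, dim=1)
        if self.flip:
            a, b = b, a
        s, t = self.net(torch.cat([a, m], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)                                  # keep scales well behaved
        zb = b * torch.exp(s) + t
        z = torch.cat([zb, a], dim=1) if self.flip else torch.cat([a, zb], dim=1)
        return z, s.sum(dim=1)

    def inverse(self, z, m):                               # latent -> data (sampling)
        a, zb = z.chunk(2, dim=1)
        if self.flip:
            zb, a = a, zb
        s, t = self.net(torch.cat([a, m], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)
        b = (zb - t) * torch.exp(-s)
        return torch.cat([b, a], dim=1) if self.flip else torch.cat([a, b], dim=1)

class ConditionalFlow(nn.Module):
    def __init__(self, dim=2, cond_dim=1, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [CondAffineCoupling(dim, cond_dim, flip=(i % 2 == 1)) for i in range(n_layers)]
        )
        self.base = torch.distributions.Normal(0.0, 1.0)
        self.dim = dim

    def log_prob(self, x, m):
        logdet = torch.zeros(x.shape[0])
        for layer in self.layers:
            x, ld = layer(x, m)
            logdet = logdet + ld
        return self.base.log_prob(x).sum(dim=1) + logdet

    def sample(self, m):
        z = self.base.sample((m.shape[0], self.dim))
        for layer in reversed(self.layers):
            z = layer.inverse(z, m)
        return z

# Train on sideband events only: maximize log p(features | mass), then sample
# "background-like" features at signal-region mass values.
flow = ConditionalFlow()
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)

m_sb = torch.rand(4096, 1) * 2 + 1.0            # toy sideband masses
x_sb = torch.randn(4096, 2) * 0.3 + m_sb        # toy substructure features

for step in range(2000):
    opt.zero_grad()
    loss = -flow.log_prob(x_sb, m_sb).mean()
    loss.backward()
    opt.step()

m_sr = torch.full((1000, 1), 3.2)               # signal-region masses
x_bg = flow.sample(m_sr)                        # interpolated background sample
```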
The performance of constituent-based jet taggers for boosted top quarks reconstructed from Unified Flow Object jet inputs is presented. Several taggers that use the full kinematic information of the jet constituents are tested and compared to a tagger that relies on high-level summary quantities, similar to the taggers used by ATLAS in Runs 1 and 2.
We investigate the pair production of Right-Handed Neutrinos (RHNs) via a $B-L$ $Z'$ boson and present sensitivity studies of the active-sterile neutrino mixing ($|V_{\mu N}|$) at the High-Luminosity run of the LHC (HL-LHC) and at a future $pp$ collider (FCC-hh). We focus on RHN states with masses of $10-70$ GeV, which naturally lead to displaced vertices for small $|V_{\mu N}|$. Being...
The large data rates at the LHC make it impossible to store every observed interaction, so an online trigger system is required to select relevant collisions. We propose a complementary approach in which, rather than compressing individual events, we compress the entire data set at once. We use a normalizing flow as a deep generative model to learn the probability density of the data...
Simulation is a key component of modern high-energy physics experiments. However, producing simulated data with sufficient detail and in sufficient quantities places a significant strain on the available computing resources. With the increased simulation demands of the upcoming high-luminosity phase of the LHC and of future colliders expected to become a major bottleneck, computationally...
Flavour tagging is a crucial component of the LHC physics program. The performance of the flavour-tagging algorithms is such that the statistical precision of the simulated samples is diluted when flavour tagging is applied, in particular to events with many jets. Truth-flavour tagging is based on weighting jets according to their probability of being tagged and is an alternative approach that...
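An illustrative sketch of the weighting idea behind truth tagging, assuming a simple pT-binned efficiency map (the binning, efficiencies, and helper names are hypothetical, not the ATLAS implementation): instead of discarding events whose jets fail the tagger, every jet keeps its tagging probability and the event receives the corresponding weight.

```python
import numpy as np

# assumed per-flavour tagging-efficiency maps, binned in jet pT [GeV]
pt_bins = np.array([20, 50, 100, 200, 1000])
eff_map = {
    5: np.array([0.60, 0.72, 0.77, 0.70]),      # b jets
    4: np.array([0.15, 0.20, 0.22, 0.18]),      # c jets
    0: np.array([0.010, 0.012, 0.015, 0.020]),  # light jets
}

def tag_probability(jet_pt, jet_flavour):
    """Probability that a jet of this pT and truth flavour passes the tagger."""
    i = np.clip(np.digitize(jet_pt, pt_bins) - 1, 0, len(pt_bins) - 2)
    return eff_map[jet_flavour][i]

def truth_tag_weight(jet_pts, jet_flavours, n_required=2):
    """Event weight for requiring at least n_required tagged jets,
    summing the probabilities of all tag configurations."""
    p = np.array([tag_probability(pt, fl) for pt, fl in zip(jet_pts, jet_flavours)])
    # probability of exactly k tags via the Poisson-binomial recursion
    probs = np.zeros(len(p) + 1)
    probs[0] = 1.0
    for pi in p:
        probs[1:] = probs[1:] * (1 - pi) + probs[:-1] * pi
        probs[0] *= (1 - pi)
    return probs[n_required:].sum()

# example: one event with three jets (pT in GeV, truth flavours b, light, c)
print(truth_tag_weight([80.0, 45.0, 150.0], [5, 0, 4], n_required=2))
```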
The Boosted Event Shape Tagger (BEST) is a boosted-jet tagger that classifies large-radius jets as originating from a Higgs boson, W boson, Z boson, top quark, bottom quark, or QCD. In BEST, the jet constituents are boosted along the jet axis under seven different mass hypotheses. In each frame, a series of boosted event-shape variables is calculated. These variables, along with jet kinematic information, are used as inputs to...
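A toy sketch of the per-hypothesis boosting step, assuming numpy four-vectors and sphericity as the example event shape (the hypothesis list, observable, and toy constituents are illustrative assumptions):

```python
import numpy as np

MASS_HYPOTHESES = {"W": 80.4, "Z": 91.2, "H": 125.0, "top": 172.5}  # GeV

def boost(four_vecs, beta):
    """Lorentz-boost an array of four-vectors (E, px, py, pz) by velocity beta."""
    b2 = beta @ beta
    gamma = 1.0 / np.sqrt(1.0 - b2)
    E, p = four_vecs[:, 0], four_vecs[:, 1:]
    bp = p @ beta
    E_new = gamma * (E - bp)
    p_new = p + ((gamma - 1.0) * bp / b2 - gamma * E)[:, None] * beta
    return np.column_stack([E_new, p_new])

def sphericity(p):
    """Normalized sphericity of the constituent 3-momenta in the boosted frame."""
    S = (p[:, :, None] * p[:, None, :]).sum(axis=0) / (p ** 2).sum()
    lam = np.sort(np.linalg.eigvalsh(S))
    return 1.5 * (lam[0] + lam[1])

def best_inputs(constituents, jet_p3):
    """One event-shape value per mass hypothesis."""
    feats = {}
    for name, m in MASS_HYPOTHESES.items():
        E_hyp = np.sqrt(jet_p3 @ jet_p3 + m ** 2)
        beta = jet_p3 / E_hyp                    # boost that brings the jet to rest
        boosted = boost(constituents, beta)
        feats[name] = sphericity(boosted[:, 1:])
    return feats

# toy usage: a handful of massless constituents and the summed jet momentum
consts_p3 = np.random.default_rng(1).normal(scale=50.0, size=(10, 3)) + np.array([300.0, 0.0, 0.0])
consts = np.column_stack([np.linalg.norm(consts_p3, axis=1), consts_p3])
print(best_inputs(consts, consts_p3.sum(axis=0)))
```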
In high-energy particle physics, complex Monte Carlo simulations are needed to connect the theory to measurable quantities. Often, the significant computational cost of these programs becomes a bottleneck in physics analyses. In this contribution, we evaluate an approach based on a Deep Neural Network to reweight simulations to different models or model parameters, using the full kinematic...
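One common realisation of such DNN-based reweighting is the classifier likelihood-ratio trick; whether it matches this contribution's exact method is an assumption, and the toy data and network settings below are illustrative only:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
nominal = rng.normal(0.0, 1.0, size=(20_000, 3))   # events from the nominal model
target = rng.normal(0.3, 1.1, size=(20_000, 3))    # events from the target model

# train a classifier to separate the two samples
X = np.vstack([nominal, target])
y = np.concatenate([np.zeros(len(nominal)), np.ones(len(target))])
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=50).fit(X, y)

# for equal-size samples, p/(1-p) estimates the density ratio
# p_target(x)/p_nominal(x), i.e. the per-event weight
p = clf.predict_proba(nominal)[:, 1]
weights = p / (1.0 - p)
```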
In this talk, I will present a differential-equation solver based on machine-learning methods and apply it, as a proof of concept, to evolution equations in QCD such as the DGLAP equation. Moreover, I will use this method to study medium-induced radiative energy loss by solving the time-dependent Schrödinger equation (TDSE) in the light-cone path integral (LCPI) approach, developed...
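A minimal sketch of the underlying idea, assuming a physics-informed-network style residual loss and a toy ODE standing in for the actual QCD evolution equations (trial-solution form, network, and hyperparameters are illustrative):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))

def u(t):
    # trial solution u(t) = 1 + t * net(t) enforces the initial condition u(0) = 1
    return 1.0 + t * net(t)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(3000):
    t = torch.rand(256, 1, requires_grad=True)           # collocation points in [0, 1]
    ut = u(t)
    dudt = torch.autograd.grad(ut.sum(), t, create_graph=True)[0]
    loss = ((dudt + ut) ** 2).mean()                      # residual of du/dt = -u
    opt.zero_grad(); loss.backward(); opt.step()

print(u(torch.tensor([[1.0]])).item(), "vs exact", torch.exp(torch.tensor(-1.0)).item())
```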
Reconstructing the type and energy of isolated pions in the ATLAS calorimeters is a key step in hadronic reconstruction. The baseline methods for local hadronic calibration were optimized early in the lifetime of the ATLAS experiment. Recently, image-based deep learning techniques have demonstrated significant performance improvements over these traditional techniques. This poster...
The vast data-collecting capabilities of current and future high-energy collider experiments bring an increasing demand for computationally efficient simulations. Generative machine learning models allow fast event generation, yet so far they are largely constrained to fixed data and detector geometries.
We introduce a novel generative machine learning setup for the generation of permutation-invariant...
A search for heavy resonances Y decaying into a Standard Model Higgs boson (H) and a new boson (X) is performed with proton-proton collision data collected by the ATLAS detector at the CERN Large Hadron Collider. The channel in which the Higgs boson decays into bb and the X into light quarks is considered, resulting in a fully hadronic final state. A two-dimensional phase space of XH mass versus X...
A search is made for a vector-like T quark decaying into a Higgs boson and a top quark in 13 TeV proton-proton collisions using the ATLAS detector at the Large Hadron Collider, with a data sample corresponding to an integrated luminosity of 139 fb−1. The all-hadronic decay modes H→bb̄ and t→bW→bqq̄′ are reconstructed as large-radius jets and identified using tagging algorithms. Improvements in...
The reconstruction and calibration of hadronic final states is an extremely challenging experimental aspect of measurements and searches at the LHC. This talk summarizes the latest ATLAS results on the reconstruction and calibration of anti-kt R=0.4 jets. New approaches to jet inputs better utilize relationships between calorimeter and tracking information to significantly improve the...
The large volume of data collected at the LHC makes it challenging to maintain existing trigger schemes and save data for offline processing. As a result, it is becoming increasingly important to execute algorithms with greater selection capability online, using low-latency devices with high parallelization. The implementation of deep learning algorithms on FPGAs could be a winning strategy...