Amplitude-level evolution has become a new theoretical paradigm for analyzing parton shower algorithms, which are at the heart of the multi-purpose event generator simulations used at particle collider experiments. It can also be implemented as a numerical algorithm in its own right to perform resummation of non-global observables beyond the leading colour approximation, leading to a new kind of...
Problematic I/O patterns are a major cause of low-efficiency HEP jobs. When a computing cluster is partially occupied by jobs with problematic I/O patterns, the overall CPU efficiency drops dramatically. In a cluster with thousands of users, locating the source of an anomalous workload is not an easy task. Automatic anomaly detection of I/O behavior can largely alleviate the...
In this talk we present a neural-network-based model to emulate matrix elements. This model improves on existing methods by taking advantage of the known factorisation properties of matrix elements to separate out the divergent regions. In doing so, the neural network learns about the factorisation property in singular limits, meaning we can control the behaviour of simulated matrix elements...
We present a machine-learning based strategy to detect data departures from a given reference model, with no prior bias on the nature of the new physics responsible for the discrepancy. The main idea behind this method is to build the likelihood-ratio hypothesis test by directly translating the problem of maximizing a likelihood-ratio into the minimization of a loss function. A neural network...
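A minimal sketch of the recasting described above, assuming an extended-likelihood formulation in which a network output f(x) parametrizes the log-ratio of the data and reference densities (function and variable names are placeholders, not the authors' code):

    import numpy as np

    def likelihood_ratio_loss(f_data, f_ref, w_ref):
        """Loss whose minimum equals minus the maximum log-likelihood ratio.

        f_data : network outputs f(x) evaluated on observed events
        f_ref  : network outputs f(x) evaluated on reference (simulated) events
        w_ref  : weights normalising the reference sample to the expected yield
        """
        # extended likelihood ratio recast as a loss to be minimised
        return np.sum(w_ref * (np.exp(f_ref) - 1.0)) - np.sum(f_data)

    # The test statistic is then t = -2 * (minimum of the loss), to be compared
    # with its distribution under the reference-only hypothesis (e.g. from toys).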
Future HEP experiments will have ever higher read-out rates. It is therefore essential to explore new hardware paradigms for large-scale computations. In this work we consider the Optical Processing Unit (OPU) from [LightOn][1], an optical device that computes, in a fast analog way, the multiplication of an input vector of size 1 million by a fixed 1 million x 1 million random matrix,...
We present a decisive milestone in the challenging event reconstruction of the CMS High Granularity Calorimeter (HGCAL): the deployment to the official CMS software of the GPU version of the clustering algorithm (CLUE). The direct GPU linkage of CLUE to the preceding energy-deposit calibration step is thus made possible, avoiding data transfers between host and device and further extending the...
Photometric data-driven classification of supernovae is one of the fundamental problems in astronomy. Recent studies have demonstrated the superior quality of solutions based on various machine learning models. These models learn to classify supernova types using their light curves as inputs. Preprocessing of these curves is a crucial step that significantly affects the final quality. In this...
The demand for precision predictions in the field of high energy physics has increased tremendously over the recent years. Its importance is visible in the light of current experimental efforts to test the predictive power of the Standard Model of particle physics (SM) to a never before seen accuracy. Thus, advanced computer software is a key technology to enable phenomenological computations...
Over the last ten years, the popularity of Machine Learning (ML) has grown exponentially in all scientific fields, including particle physics. Industry has also developed new powerful tools that, imported into academia, could revolutionise research. One recent industry development that has not yet come to the attention of the particle physics community is Collaborative Learning (CL), a...
Scattering amplitudes in perturbative quantum field theory exhibit a rich structure of zeros, poles and branch cuts which are best understood in complexified momentum space. It has been recently shown that leveraging this information can significantly simplify both analytical reconstruction and final expressions for the rational coefficients of transcendental functions appearing in...
The Worldwide LHC Computing Grid (WLCG) is the infrastructure enabling the storage and processing of the large amount of data generated by the LHC experiments, and in particular the ALICE experiment. With the foreseen increase in the computing requirements of the future High-Luminosity LHC experiments, a data placement strategy which increases the efficiency of the WLCG computing...
Programmers using the C++ programming language are increasingly taught to manage memory implicitly through containers provided by the C++ standard library. However, many heterogeneous programming platforms require explicit allocation and deallocation of memory, which is often discouraged in “best practice” C++ programming, and this discrepancy in memory management strategies can be daunting...
In this talk I will present REvolver, a C++ library for renormalization group evolution and automatic flavor matching of the QCD coupling and quark masses, as well as precise conversion between various quark mass renormalization schemes. The library systematically accounts for the renormalization group evolution of low-scale short-distance masses which depend linearly on the renormalization...
The Self-Organizing Map (SOM) is a widely used neural network for data analysis, dimension reduction and clustering. It has yet to find use in high energy particle physics. This paper discusses two applications of SOMs in particle physics. First, we were able to obtain high separation of rare processes in regions of the dimensionally reduced representation. Second, we obtained Monte Carlo...
In the domain of high-energy physics (HEP), query languages in general and SQL in particular have found limited acceptance. This is surprising since HEP data analysis matches the SQL model well: the data is fully structured and queried using mostly standard operators. To gain insight into why this is the case, we perform a comprehensive analysis of six diverse, general-purpose data processing...
pySecDec is a tool for Monte Carlo integration of multiloop Feynman integrals (or parametric integrals in general), using the sector decomposition strategy. Its latest release contains two major features: the ability to expand integrals in kinematic limits using the expansion-by-regions approach, and the ability to optimize the integration of weighted sums of integrals, maximizing the obtained...
We investigate supervised and unsupervised quantum machine learning algorithms in the context of typical data analyses at the LHC. To deal with constraints on the problem size, dictated by limitations of the quantum hardware, we concatenate the quantum algorithms with the encoder of a classical autoencoder used for dimensionality reduction. We show results for a quantum classifier and a quantum...
Non-perturbative QED is used to predict beam backgrounds at the interaction point of colliders, in calculations of Schwinger pair creation and in precision QED tests with ultra-intense lasers. In order to predict these phenomena, custom-built Monte Carlo event generators based on a suitable non-perturbative theory have to be developed. One such suitable theory uses the Furry Interaction...
From 2022 onward, the upgraded LHCb experiment will use a triggerless readout system collecting data at an event rate of 30 MHz. A software-only High Level Trigger will enable unprecedented flexibility for trigger selections. During the first stage (HLT1), a subset of the full offline track reconstruction for charged particles is run to select particles of interest based on single or...
We introduce the differentiable simulator MadJax, an implementation of the general-purpose matrix element generator MadGraph integrated within the Jax differentiable programming framework in Python. Integration is performed during automated matrix element code generation and subsequently enables automatic differentiation through leading order matrix element calculations. MadJax thus...
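As a toy illustration of what such differentiability enables (this is not MadJax itself; the squared amplitude below is only a stand-in for a real leading-order matrix element):

    import jax
    import jax.numpy as jnp

    def m2_toy(cos_theta, g):
        # toy |M|^2 ~ g^4 (1 + cos^2 theta), a placeholder for a real LO amplitude
        return g**4 * (1.0 + cos_theta**2)

    # automatic differentiation with respect to a phase-space variable and a coupling
    dM2_dcos = jax.grad(m2_toy, argnums=0)(0.3, 0.31)
    dM2_dg   = jax.grad(m2_toy, argnums=1)(0.3, 0.31)
    print(dM2_dcos, dM2_dg)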
We present the software framework underlying the NNPDF4.0 global determination of parton distribution functions (PDFs). The code is released under an open source licence and is accompanied by extensive documentation and examples. The code base is composed of a PDF fitting package, tools to handle experimental data and to efficiently compare it to theoretical predictions, and a versatile...
At the start of the upcoming LHC Run-3, CMS will deploy a heterogeneous High Level Trigger farm composed of x86 CPUs and NVIDIA GPUs. In order to guarantee that the HLT can run on machines without any GPU accelerators - for example as part of the large scale Monte Carlo production running on the grid, or when individual developers need to optimise specific triggers - the HLT reconstruction has...
The increasing luminosities of future data taking at the Large Hadron Collider and next-generation collider experiments require an unprecedented number of simulated events to be produced. Such large-scale productions demand a significant amount of valuable computing resources. This calls for new approaches to event generation and to the simulation of detector responses. In this talk, we...
Generative models (GM) are powerful tools to help validate theories by reducing the computation time of Monte Carlo (MC) simulations. GMs can learn expensive MC calculations and generalize to similar situations. In this work, we propose comparing a classical generative adversarial network (GAN) approach with a Born machine, both in its discrete (QCBM) and continuous (CVBM) forms, while...
After the Phase II Upgrade of the LHC, expected for the period 2025-26, the average number of collisions per bunch crossing at the LHC will increase from the Run-2 average value of 36 to a maximum of 200 pile-up proton-proton interactions per bunch crossing. The ATLAS detector will also undergo a major upgrade programme to be able to operate in such harsh conditions, with the...
We present Qibo, a new open-source framework for fast evaluation of quantum circuits and adiabatic evolution which takes full advantage of hardware accelerators, quantum hardware calibration and control, and a large codebase of algorithms for applications in HEP and beyond. The growing interest in quantum computing and the recent developments of quantum hardware devices motivate the development...
AtlFast3 is the next generation of high-precision fast simulation in ATLAS. It is being deployed by the collaboration and will replace AtlFastII, the fast simulation tool that was successfully used until now. AtlFast3 combines two fast calorimeter simulation tools: a parameterization-based approach and a machine-learning-based tool exploiting Generative Adversarial Networks (GANs). AtlFast3...
We compute the coefficients of the perturbative expansions of the plaquette, and of the self-energy of static sources in the triplet and octet representations, up to very high orders in perturbation theory. We use numerical stochastic perturbation theory and lattice regularization. We explore whether the results obtained comply with expectations from renormalon dominance, and what they may say...
The z-vertex track trigger estimates the collision origin in the Belle II experiment using neural networks to reduce the background. The main part is a pre-trained multilayer perceptron. The task of this perceptron is to estimate the z-vertex of the collision to suppress background from outside the interaction point. For this, a low latency real-time FPGA implementation is needed. We present...
SND@LHC is a newly approved detector under construction at the LHC, aimed at studying the interactions of neutrinos of all flavours produced by proton-proton collisions at the LHC. The energy range under study, from a few hundred MeV up to about 5 TeV, is currently unexplored. In particular, electron neutrino and tau neutrino cross sections are unknown in that energy range, whereas muon neutrino...
Modern machine learning methods offer great potential for increasing the efficiency of Monte Carlo event generators. We present the latest developments in the context of the event generation framework SHERPA. These include phase space sampling using normalizing flows and a new unweighting procedure based on neural network surrogates for the full matrix elements. We discuss corresponding...
Several online and offline applications in high-energy physics have benefitted from running on graphics processing units (GPUs), taking advantage of their processing model. To date, however, general HEP particle transport simulation is not one of them, due to difficulties in mapping the complexity of its components and workflow to the GPU’s massive parallelism features. Deep code stacks, with...
Automatic Differentiation (AD) is instrumental for science and industry. It is a tool to evaluate the derivative of a function specified through a computer program. The range of AD application domain spans from Machine Learning to Robotics to High Energy Physics. Computing gradients with the help of AD is guaranteed to be more precise than the numerical alternative and have at most a constant...
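A minimal illustration of the idea behind AD, here forward mode implemented with dual numbers (real AD tools are far more general and efficient):

    class Dual:
        """A number carrying a value and its derivative (forward-mode AD)."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.der + other.der)
        __radd__ = __add__
        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # product rule applied exactly rather than by finite differences
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)
        __rmul__ = __mul__

    def f(x):
        return 3 * x * x + 2 * x + 1

    x = Dual(2.0, 1.0)          # seed dx/dx = 1
    print(f(x).val, f(x).der)   # 17.0 and the exact derivative 14.0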
Normalizing Flows (NFs) are emerging as a powerful class of generative models, as they not only allow for efficient sampling but also deliver density estimation by construction. They are of great potential use in High Energy Physics (HEP), where we unavoidably deal with complex, high-dimensional data and probability distributions are part of everyday work. However, in order to fully leverage the...
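The property referred to here is the change-of-variables formula (standard notation, not specific to this contribution): for an invertible map $f$ taking data $x$ to a latent variable $z = f(x)$ with simple base density $p_Z$,
$$ \log p_X(x) = \log p_Z\big(f(x)\big) + \log\left|\det \frac{\partial f(x)}{\partial x}\right| , $$
so density estimation comes for free, while sampling proceeds by drawing $z \sim p_Z$ and applying $x = f^{-1}(z)$.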
The LUXE experiment (LASER Und XFEL Experiment) is a new experiment in planning at DESY Hamburg that will study Quantum Electrodynamics (QED) at the strong-field frontier. In this regime, QED is non-perturbative. This manifests itself in the creation of physical electron-positron pairs from the QED vacuum. LUXE intends to measure the positron production rate in this unprecedented regime by...
We present results from a stand-alone simulation of single Coulomb scattering of electrons, implemented entirely on an FPGA architecture and compared with an identical simulation on a standard CPU. FPGA architectures offer unprecedented speed-up capability for Monte Carlo simulations, albeit with the caveats of lengthy development cycles and resource limitations, particularly in terms of...
The Super Tau Charm Facility (STCF) is a high-luminosity electron–positron collider proposed in China for the study of charm and tau physics. The Offline Software of the Super Tau Charm Facility (OSCAR) is designed and developed based on SNiPER, a lightweight common framework for HEP experiments. Several state-of-the-art software packages and tools from the HEP community are adopted, such as the Detector...
Phenomenological studies of high-multiplicity scattering processes at collider experiments present a substantial theoretical challenge and are increasingly important ingredients in experimental measurements. We investigate the use of neural networks to approximate matrix elements for these processes, studying the case of loop-induced diphoton production through gluon fusion. We train neural...
To sustain the harsher conditions of the high-luminosity LHC, the CMS collaboration is designing a novel endcap calorimeter system. The new calorimeter will predominantly use silicon sensors to achieve sufficient radiation tolerance and will maintain highly-granular information in the readout to help mitigate the effects of pileup. In regions characterized by lower radiation levels, small...
The JUNO experiment is being built mainly to determine the neutrino mass hierarchy by detecting neutrinos generated in the Yangjiang and Taishan nuclear plants in southern China. The detector will record 2 PB of raw data every year, but each day it can only collect about 60 neutrino events scattered among a huge number of background events. The selection of extremely sparse neutrino events brings a big challenge...
In this talk, I present the computation of the two-loop helicity amplitudes for Higgs boson production in association with a bottom quark pair. I give an overview of the method and describe how computational bottlenecks can be overcome by using finite field reconstruction to obtain analytic expressions from numerical evaluations. I also show how the method of differential equations allows us...
During the LHC Run 3 the ALICE online computing farm will process up to 50 times more Pb-Pb events per second than in Run 2. The implied computing resource scaling requires a shift in the approach that comprises the extensive usage of Graphics Processing Units (GPU) for the processing. We will give an overview of the state of the art for the data reconstruction on GPUs in ALICE, with...
We propose a novel method for the elimination of negative Monte Carlo event weights. The method is process-agnostic, independent of any analysis, and preserves all physical observables. We demonstrate the overall performance and systematic improvement with increasing event sample size, based on predictions for the production of a W boson with two jets calculated at next-to-leading order...
The GeoModel toolkit is an open-source suite of standalone, lightweight tools to describe, visualize, test, and debug detector descriptions and geometries for HEP standalone studies and experiments. GeoModel has been designed with independence and responsiveness in mind and offers a development environment free of other large HEP tools and frameworks, and with...
A detailed geometry description is essential to any high quality track reconstruction application. In current C++ based track reconstruction software libraries this is often achieved by an object oriented, polymorphic geometry description that implements different shapes and objects by extending a common base class. Such a design, however, has been shown to be problematic when attempting to...
We present results for Higgs boson pair production in gluon fusion, including both NLO (two-loop) QCD corrections with full top quark mass dependence and anomalous couplings related to operators describing effects of physics beyond the Standard Model. The latter can be realized in non-linear (HEFT) or linear (SMEFT) Effective Field Theory frameworks. We show results for both and discuss...
In view of the null results (so far) in the numerous channel-by-channel searches for new particles at the LHC, it becomes increasingly relevant to change perspective and attempt a more global approach to finding out where BSM physics may hide. To this end, we developed a novel statistical learning algorithm that is capable of identifying potential dispersed signals in the slew of published LHC...
The performance of I/O intensive applications is largely determined by the organization of data and the associated insertion/extraction techniques. In this paper we present the design and implementation of an application that is targeted at managing data received (up to ~150 Gb/s payload throughput) into host DRAM, buffering data for several seconds, matched with the DRAM size, before being...
The analysis of high-frequency financial trading data faces similar problems as High Energy Physics (HEP) analysis. The data is noisy, irregular in shape, and large in size. Recent research on the intra-day behaviour of financial markets shows a lack of tools specialized for finance data, and describes this problem as a computational burden. In contrast to HEP data, finance data consists of...
We present two applications of declarative interfaces for HEP data analysis that allow users to avoid writing event loops, which simplifies code and enables performance improvements to be decoupled from analysis development. One example is FuncADL, an analysis description language inspired by functional programming and developed using Python as a host language. In addition to providing a declarative,...
FeynCalc is esteemed by many particle theorists as a very useful tool for tackling symbolic Feynman diagram calculations with a great amount of transparency and flexibility. While the program enjoys an excellent reputation when it comes to tree-level and 1-loop calculations, the usefulness of FeynCalc in multi-loop projects is often doubted by practitioners. In this talk I will...
The High Luminosity upgrade to the LHC, which aims for a ten-fold increase in the luminosity of proton-proton collisions at an energy of 14 TeV, is expected to start operation in 2028/29, and will deliver an unprecedented volume of scientific data at the multi-exabyte scale. This amount of data has to be stored and the corresponding storage system must ensure fast and reliable data delivery...
The ALICE experiment at the CERN LHC (Large Hadron Collider) is undertaking a major upgrade during the LHC Long Shutdown 2 in 2019-2021, which includes a new computing system called O2 (Online-Offline). The raw data input from the ALICE detectors will increase a hundredfold, up to 3.5 TB/s. By reconstructing the data online, it will be possible to compress the data stream down to 100 GB/s...
I provide a perspective on the development of quantum computing for data science, including a dive into state-of-the-art for both hardware and algorithms and the potential for quantum machine learning.
Query languages for High Energy Physics (HEP) are an ever present topic within the field. A query language that can efficiently represent the nested data structures that encode the statistical and physical meaning of HEP data will help analysts by ensuring their code is more clear and pertinent. As the result of a multi-year effort to develop an in-memory columnar representation of high energy...
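For illustration, nested event data of the kind at stake here can be represented columnarly and queried without explicit event loops; this sketch uses the Awkward Array library with made-up jet transverse momenta (the query language itself is the subject of the contribution):

    import awkward as ak

    # one record per event, with a variable-length list of jet pTs (toy values)
    events = ak.Array([
        {"jet_pt": [45.2, 23.1]},
        {"jet_pt": []},
        {"jet_pt": [120.5, 60.3, 15.0]},
    ])

    # query-like selection: keep jets above 30 GeV, then count them per event
    good_jets = events.jet_pt[events.jet_pt > 30.0]
    print(ak.num(good_jets))   # [2, 0, 2]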
We present the mixed QCD-EW two-loop virtual amplitudes for the neutral current Drell-Yan production. The evaluation of the two-loop amplitudes is one of the bottlenecks for the complete calculation of the NNLO mixed QCD-EW corrections. We present the computational details, especially the evaluation of all the relevant two-loop Feynman integrals using analytical and semi-analytical methods. We...
Awkward Array 0.x was written entirely in Python, and Awkward Array 1.x was a fresh rewrite with a C++ core and a Python interface. Ironically, the Awkward Array 2.x project is translating most of that core back into Python (leaving the interface untouched). This is because we discovered surprising and subtle issues in Python-C++ integration that can be avoided with a more minimal coupling: we...
The AI for Experimental Controls project at Jefferson Lab is developing an AI system to control and calibrate a large drift chamber system in near-real time. The AI system will monitor environmental variables and beam conditions to recommend new high voltage settings that maintain consistent dE/dx gain and optimal resolution throughout the experiment. At present, calibrations are performed...
We present an application of major new features of the program pySecDec, which calculates parametric integrals, in particular multi-loop integrals, numerically. One important new feature is the ability to integrate weighted sums of integrals in a way that is optimised to reach a given accuracy goal on the sums rather than on the individual integrals; another one is the option...
The Jiangmen Underground Neutrino Observatory (JUNO), located in southern China, will be the world's largest liquid scintillator (LS) detector. Equipped with 20 kton of LS, 17623 20-inch PMTs and 25600 3-inch PMTs, JUNO will provide a unique apparatus to probe the mysteries of neutrinos, particularly the neutrino mass ordering puzzle. One of the challenges for JUNO is the high precision...
For the last seven years, Accelogic has pioneered and perfected a radically new theory of numerical computing, codenamed "Compressive Computing", which has a profound impact on real-world computer science [1]. At the core of this new theory is the discovery of one of its fundamental theorems, which states that, under very general conditions, the vast majority (typically between 70% and 80%) of...
Recently, graph neural networks (GNNs) have been successfully used for a variety of reconstruction problems in HEP. In this work, we develop and evaluate an end-to-end C++ implementation for inferencing a charged particle tracking pipeline based on GNNs. The pipeline steps include data encoding, graph building, edge filtering, GNN and track labeling and it runs on both GPUs and CPUs. The ONNX...
Particle identification is one of the most fundamental tools in various particle physics experiments. For the BESIII experiment at BEPCII, the realization of numerous physics goals heavily relies on advanced particle identification algorithms. In recent years, the emergence of quantum machine learning could potentially arm particle physics experiments with a powerful new toolbox. In this work,...
A high-precision calculation of lepton magnetic moments requires an evaluation of QED Feynman diagrams with up to five independent loops.
These calculations remain important:
1) the 5-loop contributions with lepton loops to the electron g-2 have still not been double-checked (and are potentially relevant for experiments);
2) there is a discrepancy between different calculations of the 5-loop...
An algorithm for spinor amplitudes with massive particles is implemented in the SANC computer system framework. The procedure for simplifying expressions with spinor products is based on the little-group technique in six-dimensional space-time. Amplitudes for the bremsstrahlung processes $e^+e^- \to (e^+e^-/\mu^+\mu^-/HZ/Z\gamma/\gamma\gamma) + \gamma$ are obtained in gauge-covariant form...
The particle-flow (PF) algorithm at CMS combines information across different detector subsystems to reconstruct a global particle-level picture of the event. At a fundamental level, tracks are extrapolated to the calorimeters and the muon system, and combined with energy deposits to reconstruct charged and neutral hadron candidates, as well as electron, photon and muon candidates. In...
Neutrinos are particles that interact rarely, so identifying them requires large detectors which produce lots of data. Processing this data with the computing power available is becoming more difficult as the detectors increase in size to reach their physics goals. Liquid argon time projection chamber (LArTPC) neutrino experiments are expected to grow in the next decade to have 100 times more...
In recent work we computed 4-loop integrals for self-energy diagrams with 11 massive internal lines. Presently we perform numerical integration and regularization for diagrams with 8 to 11 lines, while considering massive and massless cases. For dimensional regularization, a sequence of integrals is computed depending on a parameter ($\varepsilon$) that is incorporated via the space-time...
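For orientation, the setup sketched here is the usual one for such numerical approaches (our notation, not necessarily the authors'): the space-time dimension is continued to $D = 4 - 2\varepsilon$, the integral is evaluated numerically for a sequence of values $\varepsilon_1, \varepsilon_2, \dots$, and the Laurent coefficients in
$$ I(\varepsilon) = \sum_{k \ge -k_{\max}} c_k\,\varepsilon^{k} $$
are obtained by extrapolation from the computed $I(\varepsilon_i)$.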
We present an application of anomaly detection techniques based on deep recurrent autoencoders to the problem of detecting gravitational wave signals in laser interferometers. Trained on noise data, this class of algorithms could detect signals using an unsupervised strategy, i.e., without targeting a specific kind of source. We develop a custom architecture to analyze the data from two...
Over the past decades, nuclear physics experiments have seen a drastic increase in complexity. With the arrival of second-generation radioactive ion beam facilities all over the world, the race to explore more and more exotic nuclei is raging. The low intensity of RI beams requires more complex setups, covering larger solid angles and detecting a wider variety of charged and neutral particles....
Artificial Neural Networks in High Energy Physics: introduction and goals
Nowadays, High Energy Physics (HEP) analyses generally take advantage of Machine Learning techniques to optimize the discrimination between signal and background, preserving as much signal as possible. Running a classical cut-based selection would imply a severe reduction of both signal and...
NA61/SHINE is a high-energy physics experiment operating at the SPS accelerator at CERN. The physics programme of the experiment was recently extended, requiring a major upgrade of the detector setup. The main goal of the upgrade is to increase the event flow rate from 80 Hz to 1 kHz by exchanging the read-out electronics of the NA61/SHINE main tracking detectors (Time Projection Chambers -...
CYGNO is developing a gaseous Time Projection Chamber (TPC) for directional dark matter searches, to be hosted at the Laboratori Nazionali del Gran Sasso (LNGS), Italy. CYGNO uses a He:CF4 gas mixture at atmospheric pressure and relies on a stack of Gas Electron Multipliers (GEMs) for charge amplification. Light is produced by the electron avalanche thanks to the CF4 scintillation properties and is...
In the absence of new physics signals and in the presence of a plethora of new physics scenarios that could hide in the copiously produced LHC collision events, unbiased event reconstruction and classification methods have become a major research focus of the high-energy physics community. Unsupervised machine learning methods, often used as anomaly-detection methods, are trained on Standard...
There has been significant interest and development in the use of graph neural networks (GNNs) for jet tagging applications. These generally provide better accuracy than CNN and energy flow algorithms by exploiting a range of GNN mechanisms, such as dynamic graph construction, equivariance, attention, and large parameterizations. In this work, we present the first apples-to-apples exploration...
An innovative approach to particle identification (PID) analyses employing machine learning techniques and its application to a physics case from the fixed-target programme at the LHCb experiment at CERN are presented. In general, a PID classifier is built by combining the response of specialized subdetectors, exploiting different techniques to guarantee redundancy and a wide kinematic...
Because the cross section for dark matter production is very small compared to that of Standard Model (SM) processes, a huge amount of simulation is required [1]. Hence, optimizing Central Processing Unit (CPU) time is crucial to increase the efficiency of dark matter research in HEP. In this work, the CPU time was studied using MadGraph5 as the simulation toolkit for dark matter studies at e+e- colliders. The...
Surrogate modeling and data-model convergence are important in any field utilizing probabilistic modeling, including High Energy Physics and Nuclear Physics. However, demonstrating that the model produces samples from the same underlying distribution as the true source can be problematic if the data is many-dimensional. The 1-D and multi-dimensional Kolmogorov-Smirnov test (ddKS) is a...
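For orientation, the familiar 1-D two-sample Kolmogorov-Smirnov test that ddKS generalizes can be run with SciPy on toy samples (ddKS itself is a separate implementation):

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    true_sample      = rng.normal(0.0, 1.0, size=10_000)   # "data"
    surrogate_sample = rng.normal(0.1, 1.0, size=10_000)   # surrogate-model output

    stat, p_value = ks_2samp(true_sample, surrogate_sample)
    print(stat, p_value)   # a large statistic / small p-value flags a mismatch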
The Large Hadron Collider’s third run poses new and interesting problems that all experiments have to tackle in order to fully exploit the benefits provided by the new architecture, such as the increase in the amount of data to be recorded. As part of the new developments that are taking place in the ALICE experiment, payloads that use more than a single processing...
Scale factors are commonly used in HEP to improve shape agreement between distributions of data and simulation. We present a generalized deep-learning-based architecture for producing shape-changing scale factors, investigated in the context of bottom-quark jet-tagging algorithms within the CMS experiment. The method utilizes an adversarial approach with three networks forming the central...
The study of the conversion decay of the omega meson into the $\pi^{0}e^{+} e^{-}$ state was performed with the CMD-3 detector at the VEPP-2000 electron-positron collider in Novosibirsk. The main physical background to the process under study is the radiative decay $\omega \to \pi^{0} \gamma$, where the monochromatic photon converts in the material in front of the detector. A deep neural network was...
In the near future, many new high energy physics (HEP) experiments with challenging data volume are coming into operations or are planned in IHEP, China. The DIRAC-based distributed computing system has been set up to support these experiments. To get a better utilization of available distributed computing resources, it's important to provide experimental users with handy tools for the...
Analysis of the CMD-3 detector data: searching for low-energy electron-positron annihilation into $KK\pi$ and $KK\pi\pi^0$
A. A. Uskov
Budker Institute of Nuclear Physics, Siberian Branch of the Russian Academy of Sciences
We explored the process $e^+e^- \to KK\pi$ with the CMD-3 detector at the electron-positron collider VEPP-2000. The data amassed by the CMD-3 detector in the...
We use convolutional neural networks (CNNs) to analyze monoscopic and stereoscopic images of extensive air showers registered by Cherenkov telescopes of the TAIGA experiment. The networks are trained and evaluated on Monte-Carlo simulated images to identify the type of the primary particle and to estimate the energy of the gamma rays. We compare the performance of the networks trained on...
High energy physics (HEP) is moving towards experiments with extremely high statistics and super-large-scale simulations of theories such as the Standard Model. In order to handle the challenge of rapidly increasing data volumes, distributed computing and storage frameworks from the Big Data area, like Hadoop and Spark, make computations easy to scale out. While the in-memory RDD-based programming model assumes...
The inner tracking system of the CMS experiment, which comprises silicon pixel and silicon strip detectors, is designed to provide a precise measurement of the momentum of charged particles and to reconstruct the primary and secondary vertices. The movements of the different substructures of the tracker detectors, driven by the operating conditions during data taking, make it necessary to regularly...
In the past decade, Data and Analysis Preservation (DAP) has gained increased prominence within the effort of major High Energy and Nuclear Physics (HEP/NP) experiments, driven by the policies of the funding agencies as well as the realization of the benefits brought by DAP to the science output of many projects in the field. It is a complex domain which, in addition to the archival of...
HENP experiments are preparing for the HL-LHC era, which will bring an unprecedented volume of scientific data. This data will need to be stored and processed by the collaborations, but the expected resource growth is nowhere near the extrapolated requirements of existing models, both in storage volume and compute power. In this report, we focus on building a prototype of a distributed data processing and...
Solenoidal Tracker at RHIC (STAR) is a multipurpose experiment at the Relativistic Heavy Ion Collider (RHIC) with the primary goal to study formation and properties of the quark-gluon plasma. STAR is an international collaboration of member institutions and laboratories from around the world. Yearly data-taking period produces PBytes of raw data collected by the experiment. STAR primarily uses...
The reconstruction of electrons and photons in CMS depends on topological clustering of the energy deposited by an incident particle in different crystals of the electromagnetic calorimeter (ECAL). These clusters are formed by aggregating neighbouring crystals according to the expected topology of an electromagnetic shower in the ECAL. The presence of upstream material (beampipe, tracker...
The Jiangmen Underground Neutrino Observatory (JUNO) is designed to determine the neutrino mass ordering and precisely measure oscillation parameters. It is under construction at a depth of 700 m underground and comprises a central detector, a water Cherenkov detector and a top tracker. The central detector is designed to detect anti-neutrinos with an energy resolution of 3% at 1 MeV, using a 20...
The high accuracy of detector simulation is crucial for modern particle physics experiments. However, this accuracy comes with a high computational cost, which will be exacerbated by the large datasets and complex detector upgrades associated with next-generation facilities such as the High Luminosity LHC. We explore the viability of regression-based machine learning (ML) approaches using...
The installation and maintenance of scientific software for research in experimental, phenomenological, and theoretical High Energy Physics (HEP) requires a considerable amount of time and expertise. While many tools are available to make the task of installation and maintenance much easier, many of these tools themselves require maintenance, have little documentation and very few...
The higher LHC luminosity expected in Run 3 (2022+) and the consequently larger number of simultaneous proton-proton collisions (pileup) per event pose significant challenges for CMS event reconstruction. This is particularly important for event filtering at the CMS High Level Trigger (HLT), where complex reconstruction algorithms must be executed within a strict time budget. This problem...
Collecting, storing and processing experimental data is an integral part of modern high-energy physics experiments. Various experiment databases and the corresponding information systems related to their use and support play an important role and, in many ways, combine online and offline data processing. One of them, the Configuration Database, is an essential part of a complex of information...
The Joint Institute for Nuclear Research has several large computing facilities: Tier1 and Tier2 grid clusters, the Govorun supercomputer, a cloud, and the LHEP computing cluster. Each of them has different access protocols, authentication and authorization procedures, and data access methods. With the help of the DIRAC Interware, we were able to integrate all these resources to provide uniform access to all...
The declarative approach to data analysis provides high-level abstractions for users to operate on their datasets in a much more ergonomic fashion compared to imperative interfaces. ROOT offers such a tool with RDataFrame, which creates a computation graph with the operations issued by the user and executes it lazily only when the final results are queried. It has always been oriented towards...
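A minimal illustration of the lazy, declarative model (the tree, file and branch names below are placeholders):

    import ROOT

    # build the computation graph; no event loop runs yet
    df = ROOT.RDataFrame("Events", "data.root")
    h = (df.Filter("nMuon >= 2")
           .Define("pt_lead", "Muon_pt[0]")
           .Histo1D(("pt_lead", "Leading muon pT", 50, 0.0, 200.0), "pt_lead"))

    # the event loop is triggered only when the result is actually needed
    h.GetValue().Draw()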
In astrophysics, the search for sources of the highest-energy cosmic rays continues. For further progress, not only ever better observatories but also ever more realistic numerical simulations are needed. We present here a novel approach to charged particle propagation that finds its application in simulations of particle propagation in jets of active galactic nuclei, possible sources of...
Kernel methods represent an elegant and mathematically sound approach to nonparametric learning, but so far could hardly be used in large scale problems, since naïve implementations scale poorly with data size. Recent improvements have shown the benefits of a number of algorithmic ideas, combining optimization, numerical linear algebra and random projections. These, combined with (multi-)GPU...
The CMS software stack (CMSSW) is built on a nightly basis for multiple hardware architectures and compilers, in order to benefit from diverse platforms. In practice, however, only x86_64 is used in production, and it is supported by design by the workload management tools in charge of delivering production and analysis jobs to the distributed computing infrastructure. Profiting from an INFN...
Foreseen increasing demand for simulations of particle transport through detectors in High Energy Physics motivated the search for faster alternatives to Monte Carlo based simulations. Deep learning approaches provide promising results in terms of speed up and accuracy, among which generative adversarial networks (GANs) appear to be the most successful in reproducing realistic detector data....
This talk summarizes the various storage options that we implemented for the CMSWEB cluster in Kubernetes infrastructure. All CMSWEB services require storage for logs, while some services also require storage for data. We also provide a feasibility analysis of various storage options and describe the pros/cons of each technique from the perspective of the CMSWEB cluster and its users. In the...
The Liquid Argon Time Projection Chamber (LArTPC) technology is widely used in high energy physics experiments, including the upcoming Deep Underground Neutrino Experiment (DUNE). Accurately simulating LArTPC detector responses is essential for analysis algorithm development and physics model interpretations. But because of the highly diverse event topologies that can occur in LArTPCs,...
NICA (Nuclotron-based Ion Collider fAcility) is a new accelerator complex, which is under construction at the Joint Institute for Nuclear Research in Dubna to study the properties of dense baryonic matter. The experiments of the NICA project have already generated and obtained substantial volumes of event data, and it is expected that the overall number of stored events will increase from the...
The main computing and storage facility of INFN (Italian Institute for Nuclear Physics), running at CNAF, hosts and manages tens of petabytes of data produced by the LHC (Large Hadron Collider) experiments at CERN and by other scientific collaborations in which INFN is involved. The majority of these data are stored on tape resources of different technologies. All the tape drives can be used for...
Understanding the predictions of a machine learning model can be as important as achieving high performance, especially in critical application domains such as health care, cybersecurity, or financial services, among others. In scientific domains, model interpretation can enhance the model's performance and help build trust in its use on real data and for knowledge discovery....
Particle accelerators are an important tool to study the fundamental properties of elementary particles. Currently the highest energy accelerator is the LHC at CERN, in Geneva, Switzerland. Each of its four major detectors, such as the CMS detector, produces dozens of Petabytes of data per year to be analyzed by a large international collaboration. The processing is carried out on the...
One of the largest strains on computational resources in the field of high energy physics are Monte Carlo simulations. Given that this already high computational cost is expected to increase in the high-precision era of the LHC and at future colliders, fast surrogate simulators are urgently needed. Generative machine learning models offer a promising way to provide such a fast simulation by...
Over the next decade, the ATLAS experiment will be required to operate in an increasingly harsh collision environment. To maintain physics performance, the ATLAS experiment will undergo a series of upgrades during major shutdowns. A key goal of these upgrades is to improve the capacity and flexibility of the detector readout system. To this end, the Front-End Link eXchange (FELIX) system was...
A difficult aspect of cyber security is the ability to achieve automated real time intrusion prevention across various sets of systems. To this extent, several companies are offering comprehensive solutions that leverage an “accuracy of scale” and moving much of the intelligence and detection on the Cloud, relying on an ever-growing set of data and analytics to increase decision accuracy....
This talk introduces and shows the simulated performance of two FPGA-based techniques to improve fast track finding in the ATLAS trigger. A fast hardware based track trigger is being developed in ATLAS for the High Luminosity upgrade of the Large Hadron Collider (HL-LHC), the goal of which is to provide the high-level trigger with full-scan tracking at 100 kHz in the high pile-up conditions of...
We present a package for the simulation of DM (Dark Matter) particles in fixed-target experiments. The most convenient way to perform this simulation (and the only possible way in the case of a beam dump) is within the framework of a Monte Carlo program performing the particle tracing in the experimental setup. The Geant4 toolkit framework was chosen as the most popular and versatile...
High energy physics experiments rely heavily on simulated data for physics analyses. However, running detailed simulation models requires a tremendous amount of computing resources. New approaches to speed up detector simulation are therefore needed. The generation of calorimeter responses is often the most expensive component of the simulation chain for HEP experiments. It has...
The detailed detector simulation models are vital for the successful operation of modern high-energy physics experiments. In most cases, such detailed models require a significant amount of computing resources to run. Often this may not be afforded and less resource-intensive approaches are desired. In this work, we demonstrate the applicability of Generative Adversarial Networks (GAN) as the...
In recent years fully-parametric fast simulation methods based on generative models have been proposed for a variety of high-energy physics detectors. By their nature, the quality of data-driven models degrades in the regions of the phase space where the data are sparse. Since machine-learning models are hard to analyze from the physical principles, the commonly used testing procedures are...
Modern calorimeters for High Energy Physics (HEP) have very fine transverse and longitudinal segmentation to manage high incoming flux and improve particle identification capabilities. Compared to older calorimeter designs, this change alone alters the extraction of the number and energy of incident particles on the device from a simple Gaussian-template clustering problem to a highly...
Fast turnaround times for LHC physics analyses are essential for scientific success. The ability to quickly perform optimizations and consolidation studies is critical. At the same time, computing demands and complexities are rising with the upcoming data-taking periods and new technologies, such as deep learning. We present a show-case of the HH->bbWW analysis at the CMS experiment, where we...
Particle tracking is a challenging pattern recognition task in experimental particle physics. Traditional algorithms based on Kalman filters show desirable performance in finding tracks originating from collision points. However, for displaced tracks, dedicated tunings are often required in order to reach sensible performance as the quality of the seed for the Kalman filter has a direct impact...
The Exa.TrkX project presents a graph neural network (GNN) technique for low-level reconstruction of neutrino interactions in a Liquid Argon Time Projection Chamber (LArTPC). GNNs are still a relatively novel technique, and have shown great promise for similar reconstruction tasks at the LHC. Graphs describing particle interactions are formed by treating each detector hit as a node, with edges...
The ATLAS Technical Coordination Expert System is a knowledge-based application describing and simulating the ATLAS infrastructure, its components, and their relationships, in order to facilitate the sharing of knowledge, improve the communication among experts, and foresee potential consequences of interventions and failures. The developed software is key for planning ahead of the future...
The ABCD method is a common background estimation method used by many physics searches in particle collider experiments and involves defining four regions based on two uncorrelated observables. The regions are defined such that there is a search region, where most signal events are expected to be, and three control regions. A likelihood-based version of the ABCD method, also referred to as the...
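For reference, in the simplest (non-likelihood) form of the method the two uncorrelated observables define the signal region A and the control regions B, C and D, and the background in A is estimated as
$$ N_A^{\mathrm{bkg}} \approx \frac{N_B\,N_C}{N_D}\,, $$
while the likelihood-based variant mentioned here encodes this relation as a constraint in a simultaneous fit of all four regions.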
In recent years, a correspondence has been established between the appropriate asymptotics of deep neural networks (DNNs), including convolutional ones (CNNs), and the machine learning methods based on Gaussian processes (GPs). The ultimate goal of establishing such interrelations is to achieve a better theoretical understanding of various methods of machine learning (ML) and their...
We investigate the possibility of using Deep Learning algorithms for jet identification in the L1 trigger at HL-LHC. We perform a survey of architectures (MLP, CNN, Graph Networks) and benchmark their performance and resource consumption on FPGAs using a QKeras+hls4ml compression-aware training procedure. We use the HLS4ML jet dataset to compare the results obtained in this study to previous...
As the CMS detector is getting ready for data-taking in 2021 and beyond, the detector is expected to deliver an ever-increasing amount of data. To ensure that the data recorded from the detector has the best quality possible for physics analyses, CMS Collaboration has dedicated Data Quality Monitoring (DQM) and Data Certification (DC) working groups. These working groups are made of human...
Modeling network data traffic is the most important task in the design and construction of new network centers and campus networks. The results of the analysis of models can be applied in the reorganization of existing centers and in the configuration of data routing protocols based on the use of links. The paper shows how constant monitoring of the main directions of data transfer allows...
The Lipschitz constant of the map between the input and output space represented by a neural network is a natural metric by which the robustness of the model can be measured. We present a new method to constrain the Lipschitz constant of dense deep learning models that can also be generalized to other architectures. The method relies on a simple weight normalization scheme during training...
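A minimal sketch of a weight-normalization scheme of this kind for a dense network with 1-Lipschitz activations (the choice of norm and the per-layer budget below are illustrative assumptions, not necessarily those of the presented method):

    import numpy as np

    def lipschitz_rescale(weights, lam):
        """Rescale each layer so the network's Lipschitz constant is at most lam.

        weights : list of per-layer weight matrices of a dense network
        lam     : target bound, shared equally as a per-layer budget lam**(1/L)
        """
        budget = lam ** (1.0 / len(weights))
        rescaled = []
        for W in weights:
            norm = np.linalg.norm(W, ord=2)   # spectral norm of the layer
            rescaled.append(W if norm <= budget else W * (budget / norm))
        return rescaled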
We present the first application of scalable deep learning with a high-performance computer (HPC) to physics analysis, using CMS simulation data for 13 TeV LHC proton-proton collisions. We build a convolutional neural network (CNN) model which takes low-level information as images, considering the geometry of the CMS detector. The CNN model is implemented to discriminate R-parity violating...
Tau leptons are used in a range of important ATLAS physics analyses, including the measurement of the SM Higgs boson coupling to fermions, searches for Higgs boson partners, and heavy resonances decaying into pairs of tau leptons. Events for these analyses are provided by a number of single and di-tau triggers including event topological requirements or the requirement of additional objects at...
The advent of deep learning has yielded powerful tools to automatically compute gradients of computations. This is because “training a neural network” equates to iteratively updating its parameters using gradient descent to find the minimum of a loss function. Deep learning is then a subset of a broader paradigm: a workflow with free parameters that is end-to-end optimisable, provided one can...
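In its simplest form, the paradigm is the following toy loop: any workflow whose figure of merit is a differentiable function of its free parameters can be optimised end-to-end (here a quadratic stand-in objective with JAX-provided gradients):

    import jax
    import jax.numpy as jnp

    def objective(params):
        # stand-in for any differentiable figure of merit of a workflow
        return jnp.sum((params - jnp.array([1.0, -2.0])) ** 2)

    grad_fn = jax.grad(objective)
    params = jnp.zeros(2)
    for _ in range(100):
        params = params - 0.1 * grad_fn(params)   # plain gradient descent
    print(params)   # converges towards [1, -2]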
A major challenge of the high-luminosity upgrade of the CERN LHC is to single out the primary interaction vertex of the hard scattering process from the expected 200 pileup interactions that will occur each bunch crossing. To meet this challenge, the upgrade of the CMS experiment comprises a complete replacement of the silicon tracker that will allow for the first time to perform the...
The LHCb detector is undergoing a comprehensive upgrade for data taking in the LHC’s Run 3, which is scheduled to begin in 2022. The new Run 3 detector has a different, upgraded geometry and uses new tools for its description, namely DD4hep and ROOT. Besides, the visualization technologies have evolved quite a lot since Run 1, with the introduction of ubiquitous web based solutions or...
The Mu2e experiment at Fermilab searches for the charged-lepton flavor violating neutrino-less conversion of a negative muon into an electron in the field of an aluminum nucleus. If no events are observed, in three years of running Mu2e will improve the previous upper limit by four orders of magnitude in search sensitivity. Mu2e’s Trigger and Data Acquisition System (TDAQ) uses otsdaq...
Within the FAIR Phase-0 program, the algorithms of the FLES (First-Level Event Selection) package developed for the CBM experiment (FAIR/GSI, Germany) are adapted for online and offline processing in the STAR experiment (BNL, USA). Long-lived charged particles are reconstructed in the TPC detector using the CA track finder algorithm based on the Cellular Automaton. The search for...
Containerisation is an elementary tool for sharing IT resources: It is more light-weight than full virtualisation, but offers comparable isolation. We argue that for many use-cases which are typically approached with standard containerisation tools, less than full isolation is sufficient: Sometimes, only networking or only storage or both need to be different from their native, unisolated...
There has been significant development recently in generative models for accelerating LHC simulations. Work on simulating jets has primarily used image-based representations, which tend to be sparse and of limited resolution. We advocate for the more natural 'particle cloud' representation of jets, i.e. as a set of particles in momentum space, and discuss four physics- and...
Learning the hierarchy of graphs is relevant in a variety of domains, as they are commonly used to express the chronological interactions in data structures. One application is in Flavor Physics, as the natural representation of a particle decay process is a rooted tree graph. Analyzing collision events involving missing particles or neutrinos requires knowledge of the full decay tree....
The wide angular distribution of the incoming cosmic-ray muons, with respect to either the incident angle or the azimuthal angle, is a challenging trait that leads to a drastic particle loss in the course of parametric computations from the GEANT4 simulations, since the tomographic configurations as well as the target geometries also influence the processable number of detected particles, apart from the...
The Belle II experiment is located at the asymmetric SuperKEKB $e^+ e^-$ collider in Tsukuba, Japan. The Belle II electromagnetic calorimeter (ECL) is designed to measure the energy deposited by charged and neutral particles. It also provides important contributions to the particle identification system. Identification of low-momenta muons and pions in the ECL is crucial if they do not reach...
HEP experiments heavily rely on the production and the storage of large datasets of simulated events. At the LHC, simulation workflows require about half of the available computing resources of a typical experiment. With the foreseen High Luminosity LHC upgrade, data volume and complexity are going to increase faster than the expected improvements in computing infrastructure. Speeding up the...
Identification of hadronic decays of highly Lorentz-boosted W/Z/Higgs bosons and top quarks provides powerful handles to a wide range of new physics searches and Standard Model measurements at the LHC. In this talk, we present ParticleNeXt, a new graph neural network (GNN) architecture tailored for jet tagging. With the introduction of novel components such as pairwise features, attentive...
Heterogeneous Computing will play a fundamental role in the CMS reconstruction to face the challenges that will be posed by the HL-LHC phase. Several computing architectures and vendors are currently available to build a heterogeneous computing farm for the CMS experiment. However, specialized implementations for each of these architectures are not sustainable in terms of development,...
The CernVM File System (CernVM-FS) is a global read-only POSIX file system that provides scalable and reliable software distribution to numerous scientific collaborations. It gives access to more than a billion binary files of experiment application software stacks and operating system containers to end user devices, grids, clouds, and supercomputers. CernVM-FS is asymmetric by construction....
In this talk, we present the novel implementation of a non-differentiable metric approximation with a corresponding loss-scheduling based on the minimization of a figure-of-merit related function typical of particle physics (the so-called Punzi figure of merit). We call this new loss-scheduling a "Punzi-loss function" and the neural network that minimizes it a "Punzi-net". We tested the...
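For context, the Punzi figure of merit targeted by this loss is commonly written as
$$ \mathrm{FOM} = \frac{\varepsilon_{\mathrm{sig}}}{a/2 + \sqrt{B}}\,, $$
with $\varepsilon_{\mathrm{sig}}$ the signal efficiency, $B$ the expected background yield and $a$ the desired significance in standard deviations; the Punzi-loss builds a differentiable approximation of this quantity so that it can drive the training.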
The pyrate framework provides a dynamic, versatile, and memory-efficient approach to data format transformations, object reconstruction and data analysis in particle physics. The framework is implemented in the Python programming language, allowing easy access to the scientific Python package ecosystem and commodity big data technologies. Developed within the context of the SABRE experiment...
Histogramming for Python has been transformed by the Scikit-HEP family of libraries, starting with boost-histogram, a core library for high-performance Pythonic histogram creation and manipulation based on the Boost C++ libraries. This was extended by Hist with plotting, analysis-friendly shortcuts, and much more. UHI is a specification that allows histogramming and plotting libraries,...
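A minimal usage sketch of the core library (the values filled here are synthetic; Hist and UHI add plotting shortcuts and cross-library protocols on top of the same objects):

    import numpy as np
    import boost_histogram as bh

    h = bh.Histogram(bh.axis.Regular(50, 0.0, 10.0))    # one regular 1D axis
    h.fill(np.random.exponential(2.0, size=100_000))    # vectorised fill with toy data
    counts = h.view()                                   # bin contents as a NumPy view
    edges = h.axes[0].edges                             # axis metadata, also exposed via UHI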
In this talk we present a novel method to reconstruct the kinematics of neutral-current deep inelastic scattering (DIS) using a deep neural network (DNN). Unlike traditional methods, it exploits the full kinematic information of both the scattered electron and the hadronic final state, and it accounts for QED radiation by identifying events with radiated photons and event-level momentum...
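For context, the classical electron-only reconstruction that such a DNN generalises can be written in a few lines of vectorised NumPy (HERA-style convention with the polar angle measured from the proton beam direction; beam particle masses neglected):

    import numpy as np

    def electron_method(E_e, E_p, E_scat, theta_scat):
        """Beam energies E_e, E_p and the scattered electron's energy and polar angle
        (arrays over events); returns Bjorken x, inelasticity y and Q^2."""
        s = 4.0 * E_e * E_p                                   # squared CM energy, massless beams
        Q2 = 2.0 * E_e * E_scat * (1.0 + np.cos(theta_scat))
        y = 1.0 - (E_scat / (2.0 * E_e)) * (1.0 - np.cos(theta_scat))
        x = Q2 / (s * y)
        return x, y, Q2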
We present a new version of the Monte Carlo event generator ReneSANCe. The generator takes into account complete one-loop electroweak (EW) corrections, QED corrections in leading log approximation (LLA) and some higher order QED and EW corrections to processes at $e^+ e^-$ colliders with finite particle masses and arbitrary polarizations of initial particles. ReneSANCe effectively operates in...
The volume of data processed by the Large Hadron Collider experiments demands sophisticated selection rules typically based on machine learning algorithms. One of the shortcomings of these approaches is their profound sensitivity to the biases in training samples. In the case of particle identification (PID), this might lead to degradation of the efficiency for some decays on validation due to...
Recent developments in software to address challenges in the High-Luminosity LHC (HL-LHC) era allow novel approaches when interacting with the data and performing physics analysis. We employed software components primarily from IRIS-HEP to construct an analysis workflow of an ongoing ATLAS Run-2 physics analysis in the python ecosystem. The software components in the analysis workflow include...
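A schematic of the columnar style such a workflow relies on, using uproot, awkward-array and hist from the Scikit-HEP stack; the file, tree and branch names below are hypothetical, and the actual analysis uses further IRIS-HEP components not shown here:

    import uproot
    import awkward as ak
    import hist

    # File, tree and branch names are placeholders.
    events = uproot.open("analysis.root:nominal").arrays(["jet_pt", "met"])
    selected = events[ak.num(events.jet_pt) >= 2]               # keep events with >= 2 jets
    h = hist.Hist.new.Reg(40, 0.0, 400.0, name="met").Double()  # MET histogram
    h.fill(met=selected.met)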
In an earlier work [1], we introduced dual-Parameterized Quantum Circuit (PQC) Generative Adversarial Networks (GAN), an advanced prototype of quantum GAN, which consists of a classical discriminator and two quantum generators that take the form of PQCs. We have shown that the model can imitate calorimeter outputs in High-Energy Physics (HEP), interpreted as reduced-size pixelated images. But the...
ServiceX is a cloud-native distributed application that transforms data into columnar formats in the python ecosystem and the ROOT framework. Along with the transformation, it applies filtering and thinning operations to reduce the data load sent to the client. ServiceX, designed for easy deployment to a Kubernetes cluster, runs near the data, scanning TBs of data to send GBs to a client or...
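As a conceptual illustration only (this is not the ServiceX client API), the filtering and thinning steps amount to columnar selections of the following kind, shown here with awkward-array and made-up column names:

    import awkward as ak

    # Made-up columns standing in for transformed experiment data
    events = ak.Array({"el_pt":  [35.2, 8.1, 22.0],
                       "el_eta": [0.3, 1.9, -2.1],
                       "met":    [42.0, 11.5, 97.3]})
    events = events[events.met > 20.0]        # filtering: drop whole events
    thinned = events[["el_pt", "met"]]        # thinning: keep only the requested columns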
The Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC) is undertaking a Phase II upgrade program to face the harsh conditions imposed by the High Luminosity LHC (HL-LHC). This program comprises the installation of a new timing layer to measure the time of minimum ionizing particles (MIPs) with a time resolution of 30-40 ps. The time information of the tracks from this new...
DUNE is a cutting edge experiment aiming to study neutrinos in detail, with a
special focus on the flavor oscillation mechanism. ProtoDUNE-SP (the prototype
of the DUNE Far detector Single Phase TPC), has been built and operated at CERN
and a full suite of reconstruction tools has been developed. Pandora is a
multi-algorithm framework that implements reconstruction tools: a large number...
Deep neural networks are rapidly gaining popularity in physics research. While python-based deep learning frameworks for training models in GPU environments develop and mature, a good solution that allows easy integration of trained-model inference into conventional C++ and CPU-based scientific computing workflows has been lacking.
We report the latest development in ROOT/TMVA that aims to...
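As context for this hand-off problem, one common route is to export the trained model to an interchange format such as ONNX, which C++-side inference tools can then consume; a minimal sketch with PyTorch's exporter and a hypothetical toy model (not the ROOT/TMVA workflow itself):

    import torch
    import torch.nn as nn

    # Hypothetical toy model; in practice this would be the trained physics model.
    model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
    model.eval()
    dummy = torch.randn(1, 16)                # example input fixing the expected shape
    torch.onnx.export(model, dummy, "model.onnx",
                      input_names=["features"], output_names=["score"])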
Deep Learning (DL) methods and Computer Vision are becoming important tools for event reconstruction in particle physics detectors. In this work, we report on the use of Submanifold Sparse Convolutional Neural Networks (SparseNet) for the classification of track and shower hits from a DUNE prototype liquid-argon detector at CERN (ProtoDUNE). By taking advantage of the three-dimensional nature...
Lattice quantum chromodynamics (lattice QCD) is the non-perturbative definition of QCD from first principles and can be systematically improved; at the same time, it is one of the most important high-performance computing applications in high energy physics. The physics research of lattice QCD has benefited enormously from the development of computer hardware and algorithms, and particle...
The triggerless readout of data corresponding to a 30 MHz event rate at the upgraded LHCb experiment together with a software-only High Level Trigger will enable the highest possible flexibility for trigger selections. During the first stage (HLT1), track reconstruction and vertex fitting for charged particles enable a broad and efficient selection process to reduce the event rate to 1 MHz....
In classical deep learning, a number of studies have proven that noise plays a crucial role in the training of neural networks. Artificial noise is often injected in order to make the model more robust, faster converging, and stable. Meanwhile, quantum computing, a completely new paradigm of computation, is characterized by statistical uncertainty from its probabilistic nature. Furthermore,...
We present a specialised layer for generative modeling of LHC events with generative adversarial networks. We use Lorentz boosts, rotations, momentum and energy conservation to build a network cell that generates a 2-body particle decay. This cell is stacked consecutively in order to model two-staged decays, respecting the symmetries across the decay chain. We allow for modifications of the...
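A plain NumPy sketch of the physics such a decay cell encodes (this is the kinematics only, not the network layer itself; the boost convention and the function signature are ours):

    import numpy as np

    def two_body_decay(P_parent, m1, m2, cos_theta, phi):
        """P_parent: parent four-momentum [E, px, py, pz]; (cos_theta, phi): direction of
        daughter 1 in the parent rest frame. Returns the two daughter four-momenta."""
        E = float(P_parent[0])
        p3 = np.asarray(P_parent[1:], dtype=float)
        M = np.sqrt(E**2 - p3 @ p3)                      # parent invariant mass
        # Rest-frame kinematics fixed by energy-momentum conservation
        E1 = (M**2 + m1**2 - m2**2) / (2.0 * M)
        pstar = np.sqrt(max(E1**2 - m1**2, 0.0))
        sin_theta = np.sqrt(1.0 - cos_theta**2)
        pvec = pstar * np.array([sin_theta * np.cos(phi),
                                 sin_theta * np.sin(phi),
                                 cos_theta])
        # Lorentz boost from the parent rest frame to the lab frame
        beta = p3 / E
        gamma = E / M
        b2 = beta @ beta

        def boost(Estar, pstar_vec):
            if b2 < 1e-12:                               # parent already at rest
                return np.concatenate([[Estar], pstar_vec])
            bp = beta @ pstar_vec
            E_lab = gamma * (Estar + bp)
            p_lab = pstar_vec + ((gamma - 1.0) * bp / b2 + gamma * Estar) * beta
            return np.concatenate([[E_lab], p_lab])

        return boost(E1, pvec), boost(M - E1, -pvec)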
The ATLAS experiment at the Large Hadron Collider (LHC) relies heavily on simulated data, requiring the production of billions of Monte Carlo (MC)-based proton-proton collisions for every run period. As such, the simulation of collisions (events) is the single biggest CPU resource consumer for the experiment. ATLAS's finite computing resources are at odds with the expected conditions during...
A unique experiment was conducted by the STAR Collaboration in 2018 to investigate differences between collisions of nuclear isobars, a potential key to unraveling one of the physics mysteries in our field: why the universe is made predominantly of matter. Enhancing the credibility of findings was deemed to hinge on blinding analyzers from knowing which dataset they were examining,...
The emerging applications of cosmic-ray muon tomography lead to a significant rise in the utilization of cosmic particle generators, e.g. CRY, CORSIKA, or CMSCGEN, where fundamental parameters such as the energy spectrum and the angular distribution of the generated muons are represented in continuous form, routinely governed implicitly by probability density functions over...
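As a toy illustration of sampling from such a continuous probability density (not the talk's method), the zenith angle can be drawn from the commonly quoted sea-level cos^2(theta) intensity by rejection sampling; solid-angle and detector-acceptance weights are deliberately omitted:

    import numpy as np

    def sample_zenith(n, rng=None):
        """Draw n zenith angles in [0, pi/2] with a probability density ~ cos^2(theta)."""
        rng = np.random.default_rng() if rng is None else rng
        out = np.empty(0)
        while out.size < n:
            theta = rng.uniform(0.0, np.pi / 2.0, size=2 * n)            # proposals
            keep = rng.uniform(0.0, 1.0, size=theta.size) < np.cos(theta) ** 2
            out = np.concatenate([out, theta[keep]])
        return out[:n]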
We investigate the application of object condensation to particle tracking at the LHC. Designed with calorimeter clustering in mind and successfully employed for high-granularity calorimeter reconstruction at the HL-LHC, object condensation is a generic clustering method that could be applied to many problems within and outside HEP. Using the TrackML challenge dataset, we train a tracking...
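A much-simplified sketch of the attractive/repulsive potential terms that characterise object condensation, for a single event; the notation follows the original formulation only loosely, the beta-classification term is omitted, and the hyperparameters are placeholders:

    import torch

    def condensation_potentials(x, beta, obj_id, q_min=0.1):
        """x: (N, D) learned cluster coordinates; beta: (N,) condensation strengths in [0, 1);
        obj_id: (N,) integer truth label of each hit (-1 for noise)."""
        q = torch.atanh(beta.clamp(max=0.999)) ** 2 + q_min     # per-hit charge
        loss = x.new_zeros(())
        for k in obj_id[obj_id >= 0].unique():
            members = obj_id == k
            alpha = torch.argmax(beta * members)                # condensation point of object k
            d = torch.norm(x - x[alpha], dim=1)
            attractive = d ** 2 * members                       # pull members towards alpha
            repulsive = torch.relu(1.0 - d) * ~members          # push everything else away
            loss = loss + (q * q[alpha] * (attractive + repulsive)).mean()
        return loss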
In this contribution we will show an innovative approach based on Bayesian networks and linear algebra that provides a solid and complete solution to the problem of the detector response and the related systematic effects. As a case study, we will consider Dark Matter (DM) direct detection searches. In fact, in the past decades, a huge experimental effort has been developed to ...
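As a minimal linear-algebra illustration of the detector-response problem (not the Bayesian-network model of the talk), folding a true spectrum through a response matrix and inverting it with Bayes' theorem looks as follows, with made-up numbers:

    import numpy as np

    # R[i, j] = P(observed bin i | true bin j); columns sum to one (made-up numbers)
    R = np.array([[0.80, 0.15, 0.02],
                  [0.15, 0.70, 0.18],
                  [0.05, 0.15, 0.80]])
    true_spectrum = np.array([1000.0, 400.0, 50.0])
    expected_observed = R @ true_spectrum                 # folding: what the detector sees

    prior = true_spectrum / true_spectrum.sum()
    posterior = (R * prior) / (R @ prior)[:, None]        # Bayes: P(true j | observed i)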
Many HEP analyses are adopting the concept of vectorised computing, often making them increasingly performant and resource-efficient.
While a variety of computing steps can be vectorised directly, some calculations are challenging to implement.
One of these is the analytical neutrino reconstruction, which involves a fit that naturally varies between events.
We show a vectorised...
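A minimal vectorised sketch of one common analytical neutrino reconstruction, the W-mass constraint solved as a quadratic in the neutrino p_z across all events at once (massless lepton assumed; this simplification is ours and the fit discussed above is more general):

    import numpy as np

    def neutrino_pz(lep_px, lep_py, lep_pz, lep_E, met_px, met_py, m_w=80.4):
        """All inputs are arrays over events; returns both solutions of the quadratic."""
        pt_lep2 = lep_px**2 + lep_py**2
        a = 0.5 * m_w**2 + lep_px * met_px + lep_py * met_py
        disc = a**2 - pt_lep2 * (met_px**2 + met_py**2)
        disc = np.clip(disc, 0.0, None)                   # complex solutions -> real part
        root = lep_E * np.sqrt(disc)
        return (a * lep_pz + root) / pt_lep2, (a * lep_pz - root) / pt_lep2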
The Jiangmen Underground Neutrino Observatory (JUNO), currently under construction in the south of China, is the largest Liquid Scintillator (LS) detector in the world. JUNO is a multipurpose neutrino experiment designed to determine neutrino mass ordering, precisely measure oscillation parameters, and study solar neutrinos, supernova neutrinos, geo-neutrinos and atmospheric neutrinos. The...