The physics output of modern experimental HEP collaborations hinges not only on the quality of their software but also on the ability of collaborators to make the best possible use of it.
With the COVID-19 pandemic making in-person training impossible, the training paradigm at Belle II shifted towards guided self-study.
To that end, the study material was rebuilt from...
A study of the conversion decay of the omega meson into the $\pi^{0}e^{+}e^{-}$ final state was performed with the CMD-3 detector at the VEPP-2000 electron-positron collider in Novosibirsk. The main physical background to the process under study is the radiative decay $\omega \to \pi^{0}\gamma$, in which the monochromatic photon converts in the material in front of the detector. A deep neural network was...
Analysis of the CMD-3 detector data: searching for low-energy electron-positron annihilation into $KK\pi$ and $KK\pi\pi^0$
A. A. Uskov.
Budker Institute of Nuclear Physics, Siberian Branch of the Russian Academy of Sciences.
We explored the process $e^+e^- \to KK\pi$ with the CMD-3 detector at the electron-positron collider VEPP-2000. The data amassed by the CMD-3 detector in the...
In astrophysics, the search for the sources of the highest-energy cosmic rays continues. Further progress requires not only ever better observatories but also ever more realistic numerical simulations. We present here a novel approach to charged particle propagation that finds application in simulations of particle propagation in jets of active galactic nuclei, possible sources of...
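For context, a conventional propagation step for a charged particle in a magnetic field is often written as a Boris push. The following is a minimal NumPy sketch under assumed field values and step size, not the novel approach presented here:

```python
import numpy as np

def boris_push(x, v, q_over_m, E, B, dt):
    """One Boris step: half electric kick, magnetic rotation, half kick, drift."""
    v_minus = v + 0.5 * q_over_m * E * dt          # first half electric kick
    t = 0.5 * q_over_m * B * dt                    # rotation vector from B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)        # velocity rotated in B
    v_new = v_plus + 0.5 * q_over_m * E * dt       # second half electric kick
    return x + v_new * dt, v_new                   # drifted position, new velocity

# Illustrative values (assumptions): proton-like particle in a weak uniform field
x = np.zeros(3)
v = np.array([1.0e5, 0.0, 0.0])                    # m/s
E = np.zeros(3)                                    # V/m
B = np.array([0.0, 0.0, 1.0e-9])                   # tesla
for _ in range(1000):
    x, v = boris_push(x, v, q_over_m=9.58e7, E=E, B=B, dt=1.0)
```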
Kernel methods represent an elegant and mathematically sound approach to nonparametric learning, but so far could hardly be used for large-scale problems, since naive implementations scale poorly with data size. Recent improvements have shown the benefits of a number of algorithmic ideas, combining optimization, numerical linear algebra, and random projections. These, combined with (multi-)GPU...
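To make the random-projection idea concrete, here is a minimal sketch using scikit-learn's Nyström approximation, so that a linear solver can stand in for a full kernel machine; the data and hyperparameters are illustrative assumptions:

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))                  # synthetic features
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=10_000)

# Project into a low-rank feature space approximating the RBF kernel;
# cost grows with n_components, not with the square of the sample count.
feature_map = Nystroem(kernel="rbf", gamma=0.1, n_components=300, random_state=0)
X_feat = feature_map.fit_transform(X)

# A linear model on the projected features approximates kernel ridge regression.
model = Ridge(alpha=1.0).fit(X_feat, y)
print(model.score(X_feat, y))
```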
Detailed detector simulation models are vital for the successful operation of modern high-energy physics experiments. In most cases, such detailed models require a significant amount of computing resources to run. Often this cannot be afforded, and less resource-intensive approaches are desired. In this work, we demonstrate the applicability of Generative Adversarial Networks (GANs) as the...
Fast turnaround times for LHC physics analyses are essential for scientific success. The ability to quickly perform optimizations and consolidation studies is critical. At the same time, computing demands and complexities are rising with the upcoming data-taking periods and new technologies such as deep learning.
We present a showcase of the $HH \to bbWW$ analysis at the CMS experiment, where we...
The Exa.TrkX project presents a graph neural network (GNN) technique for low-level reconstruction of neutrino interactions in a Liquid Argon Time Projection Chamber (LArTPC). GNNs are still a relatively novel technique, and have shown great promise for similar reconstruction tasks at the LHC. Graphs describing particle interactions are formed by treating each detector hit as a node, with edges...
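A minimal sketch of the hit-graph construction described above, assuming a simple nearest-neighbour edge criterion (real pipelines use detector-specific geometric selections):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)
hits = rng.uniform(size=(1000, 3))         # (x, y, z) of each detector hit = one node

# Edge candidates: connect each hit to its k nearest spatial neighbours.
tree = cKDTree(hits)
dist, idx = tree.query(hits, k=5)          # idx[:, 0] is the hit itself

edges = []
for i, neighbours in enumerate(idx):
    for j in neighbours[1:]:               # skip the self-match
        edges.append((i, j))
edge_index = np.array(edges).T             # 2 x n_edges, GNN-style edge list
print(edge_index.shape)
```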
As part of CERN-GEANT-PRACE-SKA collaboration and in the context of EGI-ACE (Advanced Computing for the European Open Science Cloud ) collaborators are working towards enabling
efficient HPC use for Big Data sciences. Approaching HPC site with High Throughput
Computing (HTC) workloads presents unique challenges in areas concerning data
ingress/egress, use of shared storage systems, and...
The ABCD method is a common background estimation method used by many physics searches in particle collider experiments and involves defining four regions based on two uncorrelated observables. The regions are defined such that there is a search region, where most signal events are expected to be, and three control regions. A likelihood-based version of the ABCD method, also referred to as the...
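For reference, with two uncorrelated observables the background prediction in the search region A follows from the control regions B, C, and D as $N_A = N_B N_C / N_D$. A minimal sketch with illustrative counts:

```python
# ABCD background estimate: four regions defined by cuts on two
# uncorrelated observables; A is the search region.
def abcd_estimate(n_b, n_c, n_d):
    """Predicted background in region A, with a naive Poisson uncertainty."""
    n_a = n_b * n_c / n_d
    rel_err = (1 / n_b + 1 / n_c + 1 / n_d) ** 0.5  # relative errors in quadrature
    return n_a, n_a * rel_err

# Illustrative control-region counts
pred, err = abcd_estimate(n_b=120.0, n_c=450.0, n_d=900.0)
print(f"expected background in A: {pred:.1f} +/- {err:.1f}")
```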
Baikal-GVD is a large-scale underwater neutrino telescope currently under construction in Lake Baikal. The experiment is aimed at the study of high-energy cosmic neutrinos and the search for their sources. The principal component of the telescope is a three-dimensional array of optical modules (OMs), which register Cherenkov light associated with neutrino-induced particles. The OMs...
Within the FAIR Phase-0 program, the algorithms of the FLES (First-Level Event Selection) package, developed for the CBM experiment (FAIR/GSI, Germany), are adapted for online and offline processing in the STAR experiment (BNL, USA).
Long-lived charged particles are reconstructed in the TPC detector using the Cellular Automaton (CA) track finder algorithm. The search for...
Learning the hierarchy of graphs is relevant in a variety of domains, as graphs are commonly used to express chronological interactions in data. One application is in Flavor Physics, as the natural representation of a particle decay process is a rooted tree graph.
Analyzing collision events involving missing particles or neutrinos requires knowledge of the full decay tree....
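A minimal sketch of the rooted-tree representation, using networkx and an illustrative decay chain:

```python
import networkx as nx

# A decay process as a rooted tree: each node is a particle,
# each directed edge points from mother to daughter.
decay = nx.DiGraph()
decay.add_edges_from([
    ("B0", "D-"), ("B0", "pi+"),                    # B0 -> D- pi+
    ("D-", "K+"), ("D-", "pi-"), ("D-", "pi-_2"),   # D- -> K+ pi- pi-
])

root = next(n for n, d in decay.in_degree() if d == 0)
print(root)                                 # "B0", the head of the decay tree
print(list(nx.descendants(decay, "D-")))    # daughters of the intermediate D-
```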
Histogramming for Python has been transformed by the Scikit-HEP family of libraries, starting with boost-histogram, a core library for high-performance Pythonic histogram creation and manipulation based on the Boost C++ libraries. This was extended by Hist with plotting, analysis-friendly shortcuts, and much more, and UHI is a specification that allows histogramming and plotting libraries,...
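A minimal sketch of the boost-histogram workflow; the axes and data are illustrative:

```python
import numpy as np
import boost_histogram as bh

rng = np.random.default_rng(1)

# A 2D histogram: regular binning in pT, categorical binning by lepton flavour.
hist = bh.Histogram(
    bh.axis.Regular(50, 0, 100, metadata="pT [GeV]"),
    bh.axis.IntCategory([11, 13], metadata="lepton PDG ID"),
)
hist.fill(rng.exponential(20, size=10_000),
          rng.choice([11, 13], size=10_000))

# Slice and project with UHI-style indexing.
electrons = hist[:, bh.loc(11)]            # 1D histogram of the electron channel
print(electrons.sum())
```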
ServiceX is a cloud-native distributed application that transforms data into columnar formats in the Python ecosystem and the ROOT framework. Along with the transformation, it applies filtering and thinning operations to reduce the data load sent to the client. ServiceX, designed for easy deployment to a Kubernetes cluster, runs near the data, scanning TBs of data to send GBs to a client or...
Deep neural networks are rapidly gaining popularity in physics research. While Python-based deep learning frameworks for training models in GPU environments develop and mature, a good solution that allows easy integration of trained-model inference into conventional C++ and CPU-based scientific computing workflows seems lacking.
We report the latest development in ROOT/TMVA that aims to...
The great success of the Tracking Machine Learning Challenges (TrackML), conducted in two phases (an accuracy phase from April to August 2018 and a throughput phase from September to November 2018), has proven the need for an easily accessible yet challenging dataset for algorithm design and further R&D. The released TrackML dataset is to date heavily used by several research groups at the forefront of...
A unique experiment was conducted by the STAR Collaboration in 2018 to investigate differences between collisions of nuclear isobars, a potential key to unraveling one of the physics mysteries in our field: why the universe is made predominantly of matter. Enhancing the credibility of the findings was deemed to hinge on blinding analyzers to which dataset they were examining,...
Many HEP analyses are adopting the concept of vectorised computing, which often makes them more performant and resource-efficient.
While a variety of computing steps can be vectorised directly, some calculations are challenging to express this way.
One of these is analytical neutrino reconstruction, which involves fitting that naturally varies between events.
We show a vectorised...
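As an illustration of the kind of calculation involved, here is a minimal NumPy sketch of the classic analytical step: solving the W-mass constraint for the neutrino longitudinal momentum across all events at once. The inputs are synthetic, and the actual per-event fit presented above is more involved:

```python
import numpy as np

M_W = 80.4  # GeV, W boson mass used as the constraint

def solve_pz_nu(px_l, py_l, pz_l, e_l, met_x, met_y):
    """Vectorised neutrino pz from m_W^2 = (p_lep + p_nu)^2, all events at once."""
    pt_l2 = px_l**2 + py_l**2
    mu = 0.5 * M_W**2 + px_l * met_x + py_l * met_y   # mass term + pT(l).pT(nu)
    a = mu * pz_l / pt_l2
    disc = a**2 - (e_l**2 * (met_x**2 + met_y**2) - mu**2) / pt_l2
    # Events with complex solutions: clip the discriminant, keeping the real part.
    root = np.sqrt(np.clip(disc, 0, None))
    return a - root, a + root                          # both quadratic solutions

# Synthetic per-event inputs (GeV); in practice these come from the analysis ntuple.
n = 100_000
rng = np.random.default_rng(7)
px_l, py_l, pz_l = (rng.normal(0, 30, n) for _ in range(3))
e_l = np.sqrt(px_l**2 + py_l**2 + pz_l**2)             # massless-lepton approximation
met_x, met_y = rng.normal(0, 30, n), rng.normal(0, 30, n)
pz1, pz2 = solve_pz_nu(px_l, py_l, pz_l, e_l, met_x, met_y)
```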