Computing centres, including those used to process High-Energy Physics data and simulations, are increasingly providing significant fractions of their computing resources using hardware architectures other than x86 CPUs, with GPUs being a commonly available alternative. GPUs can provide excellent computational performance at a good price point for tasks that can be suitably parallelized....
Analysis of HEP data is an iterative process in which the results of one step often inform the next. In an exploratory analysis, it is common to perform one computation on a collection of events, then view the results (often with histograms) to decide what to try next. Awkward Array is a Scikit-HEP Python package that enables data analysis with array-at-a-time operations to implement cuts as...
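As a rough illustration of the array-at-a-time style mentioned above, here is a minimal sketch using the public Awkward Array API; the field names (pt, eta) and the thresholds are illustrative, not from the abstract.

    # Minimal sketch of an array-at-a-time cut with Awkward Array.
    import awkward as ak

    # Jagged collection: a variable number of muons per event.
    events = ak.Array([
        {"muons": [{"pt": 35.2, "eta": 0.4}, {"pt": 11.8, "eta": -1.9}]},
        {"muons": []},
        {"muons": [{"pt": 52.0, "eta": 2.1}]},
    ])

    # One vectorized expression applies the cut to every muon in every event.
    good = events.muons[events.muons.pt > 20.0]

    # Event-level selection: keep events with at least one passing muon.
    selected = events[ak.num(good) > 0]
    print(ak.num(good).tolist())   # [1, 0, 1]

No explicit loop over events is written; the cut is a single expression, which is what makes the explore-plot-refine cycle fast.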
In the near future, many new high-energy physics (HEP) experiments with challenging data volumes will come into operation or are planned at IHEP, China. A DIRAC-based distributed computing system has been set up to support these experiments. To make better use of the available distributed computing resources, it is important to provide experiment users with convenient tools for the...
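For context, a minimal sketch of how a user job is typically submitted through the DIRAC Python API; the script name, job name and the classic Script.parseCommandLine() initialization pattern are assumptions about the local setup, not details from the abstract.

    # Minimal sketch: submitting a job via the DIRAC API (assumes a
    # configured DIRAC client and a valid proxy).
    from DIRAC.Core.Base import Script
    Script.parseCommandLine(ignoreErrors=True)

    from DIRAC.Interfaces.API.Dirac import Dirac
    from DIRAC.Interfaces.API.Job import Job

    job = Job()
    job.setName("demo_job")               # hypothetical job name
    job.setExecutable("run_analysis.sh")  # hypothetical user script
    job.setCPUTime(3600)                  # requested CPU time in seconds

    result = Dirac().submitJob(job)
    if result["OK"]:
        print("Submitted job ID:", result["Value"])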
The use of Ring Imaging Cherenkov detectors (RICH) offers a powerful technique for identifying the particle species in particle physics. These detectors produce 2D images formed by rings of individual photons superimposed on a background of photon rings from other particles.
RICH particle identification (PID) is essential to the LHCb experiment at CERN. While the current PID algorithm...
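As background (standard RICH optics, stated here for context rather than taken from this contribution): a particle of velocity \beta = v/c traversing a radiator of refractive index n emits Cherenkov photons at the angle

    \cos\theta_c = \frac{1}{n\beta}, \qquad r \simeq f \tan\theta_c ,

where r is the resulting ring radius for an effective focal length f. Combining the measured radius with the momentum from the tracker determines \beta and hence the mass hypothesis, which is the basis of the PID.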
The Jiangmen Underground Neutrino Observatory (JUNO) is designed to determine the neutrino mass ordering and precisely measure oscillation parameters. It is under construction 700 m underground and comprises a central detector, a water Cherenkov detector and a top tracker. The central detector is designed to detect anti-neutrinos with an energy resolution of 3% at 1 MeV, using a 20...
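The quoted resolution is usually read with photostatistics scaling in mind; under that common parametrization (an assumption here, not stated in the abstract),

    \frac{\sigma_E}{E} \simeq \frac{3\%}{\sqrt{E/\mathrm{MeV}}} ,

so the 3% figure at 1 MeV fixes the stochastic term of the energy response.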
A geometry management system (GMS) is designed for the Offline Software
of the Super Tau Charm Facility (STCF) in China. Based on the eXtensible Markup Language
(XML) and the Detector Description Toolkit for High Energy Physics Experiments (DD4hep),
the system provides a consistent detector-geometry description for different offline applications,
such as simulation, reconstruction and...
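As a rough illustration of how a DD4hep-based description is consumed from Python, the sketch below loads a compact XML file into a single in-memory detector description. The entry points (dd4hep.Detector.getInstance, fromXML) follow recent DD4hep releases and are an assumption here, as is the file name.

    # Minimal sketch: load a compact XML description with DD4hep's
    # Python bindings (API names assumed from recent DD4hep releases).
    import dd4hep

    description = dd4hep.Detector.getInstance()
    description.fromXML("compact/STCF.xml")   # hypothetical compact file

The same in-memory description can then feed simulation and reconstruction alike, which is what keeps the different offline applications geometrically consistent.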
Modern datacenters need distributed filesystems to provide user applications with access to data stored on a large number of nodes. The ability to mount a distributed filesystem and leverage its native application programming interfaces in a Docker container, combined with the advanced orchestration features provided by Kubernetes, can improve flexibility in installing, monitoring and...
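To make the Docker side of this concrete, a minimal sketch using the Docker SDK for Python: a host-mounted distributed-filesystem path is bound inside a container. The image name and paths are illustrative assumptions.

    # Minimal sketch: bind a host-mounted distributed-FS path into a container.
    import docker

    client = docker.from_env()
    output = client.containers.run(
        "python:3.11-slim",
        command=["ls", "/data"],
        volumes={"/mnt/distfs": {"bind": "/data", "mode": "ro"}},  # hypothetical mount
        remove=True,
    )
    print(output.decode())

In a Kubernetes deployment the same mount would instead be expressed declaratively as a volume in the pod specification, with the orchestrator handling placement and monitoring.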
The CMS software stack (CMSSW) is built on a nightly basis for multiple hardware architectures and compilers, in order to benefit from the diverse platforms. In practice, however, only x86_64 is used in production, and it is the architecture supported by design by the workload management tools in charge of delivering production and analysis jobs to the distributed computing infrastructure.
Profiting from an INFN...
In the present work, the possibility of exploiting EOS, an open-source storage software solution for multi-PB storage management at the CERN Large Hadron Collider, has been investigated in order to deploy a distributed filesystem over a storage backend provided by Ceph, an open-source software platform capable of exposing data through interfaces for object, block and POSIX-compliant storage.
The work...
One of the largest strains on computational resources in the field of high-energy physics is Monte Carlo simulation. Given that this already high computational cost is expected to increase in the high-precision era of the LHC and at future colliders, fast surrogate simulators are urgently needed. Generative machine learning models offer a promising way to provide such a fast simulation by...
High energy physics experiments rely heavily on simulated data for physics analyses. However, running detailed simulation models requires a tremendous amount of computing resources. New approaches to speed up detector simulation are therefore needed.
Generation of calorimeter responses is often the most expensive component of the simulation chain for HEP experiments.
It has...
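To illustrate the general shape of such a generative surrogate (a minimal sketch; the architecture, latent size and 16x16 cell grid are assumptions, not the model of either contribution above):

    # Minimal sketch: a generator maps latent noise plus incident energy
    # to a grid of calorimeter cell energies.
    import torch
    import torch.nn as nn

    LATENT, CELLS = 64, 16 * 16  # hypothetical latent size and cell grid

    class Generator(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(LATENT + 1, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
                nn.Linear(256, CELLS), nn.ReLU(),  # non-negative energies
            )

        def forward(self, z, energy):
            # Condition on incident energy so one model covers the full range.
            return self.net(torch.cat([z, energy], dim=1))

    gen = Generator()
    z = torch.randn(8, LATENT)        # batch of 8 latent samples
    e = torch.full((8, 1), 10.0)      # e.g. 10 GeV incident energy
    showers = gen(z, e)               # (8, 256) cell-energy images

Sampling from such a network replaces the per-shower particle transport, which is where the speed-up comes from.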
In this contribution, we apply deep learning object detection techniques based on convolutional blocks to the jet identification and reconstruction problem encountered at the CERN Large Hadron Collider. Particles reconstructed through the Particle Flow algorithm can be represented as an image composed of calorimeter and tracker cells and used as input to a Single Shot Detection network. The algorithm,...
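A minimal sketch of the image-formation step described above: particle-flow candidates binned into a two-channel (calorimeter, tracker) eta-phi grid. The grid size, eta range and channel split are illustrative assumptions.

    # Minimal sketch: histogram PF candidates into a (2, 64, 64) image.
    import numpy as np

    ETA_BINS, PHI_BINS = 64, 64

    def to_image(eta, phi, energy, is_track):
        img = np.zeros((2, ETA_BINS, PHI_BINS), dtype=np.float32)
        ie = np.clip(((eta + 2.5) / 5.0 * ETA_BINS).astype(int), 0, ETA_BINS - 1)
        ip = np.clip(((phi + np.pi) / (2 * np.pi) * PHI_BINS).astype(int), 0, PHI_BINS - 1)
        np.add.at(img[0], (ie[~is_track], ip[~is_track]), energy[~is_track])  # calo
        np.add.at(img[1], (ie[is_track], ip[is_track]), energy[is_track])     # tracker
        return img

    rng = np.random.default_rng(0)  # toy candidates for demonstration
    img = to_image(rng.uniform(-2.5, 2.5, 100), rng.uniform(-np.pi, np.pi, 100),
                   rng.exponential(5.0, 100).astype(np.float32), rng.random(100) < 0.5)
    print(img.shape)  # (2, 64, 64)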
The LHCb detector is undergoing a comprehensive upgrade for data taking in the LHC’s Run 3, which is scheduled to begin in 2022. The new Run 3 detector has a different, upgraded geometry and uses new tools for its description, namely DD4hep and ROOT. Moreover, visualization technologies have evolved considerably since Run 1, with the introduction of ubiquitous web-based solutions or...
Containerisation is an elementary tool for sharing IT resources: it is more lightweight than full virtualisation but offers comparable isolation. We argue that for many use cases that are typically approached with standard containerisation tools, less than full isolation is sufficient: sometimes only networking, only storage, or both need to be different from their native, unisolated...
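A minimal sketch of this "less than full isolation" idea on Linux: detach only the network namespace via unshare(2) and run a command otherwise natively. CLONE_NEWNET is the standard kernel constant; the call requires CAP_SYS_ADMIN (e.g. root or a user namespace).

    # Minimal sketch: network-only isolation, everything else shared.
    import ctypes
    import os

    CLONE_NEWNET = 0x40000000
    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    if libc.unshare(CLONE_NEWNET) != 0:
        raise OSError(ctypes.get_errno(), "unshare(CLONE_NEWNET) failed")

    # Same filesystem, same users, same PIDs -- only the network differs:
    # the process now sees just a down loopback interface.
    os.execvp("ip", ["ip", "addr"])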
Searching for anomalous objects from beyond-the-Standard-Model (BSM) signatures is an important mission of the LHC experiments. Recently, new particles at the sub-GeV scale have received increasing attention. Light pseudo-scalars such as axion-like particles (ALPs) and light scalars such as the dark Higgs are proposed by many BSM models and can be taken as mediators of some sub-GeV dark matter...
In this talk, we present a novel implementation of an approximation to a non-differentiable metric, with a corresponding loss scheduling, based on the minimization of a function related to a figure of merit typical of particle physics (the so-called Punzi figure of merit). We call this new loss scheduling a "Punzi-loss function" and the neural network that minimizes it a "Punzi-net". We tested the...
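For reference, the Punzi figure of merit is commonly written, for a selection t with signal efficiency \varepsilon(t), expected background B(t) and a target significance of a standard deviations, as

    \mathrm{FOM}(t) = \frac{\varepsilon(t)}{a/2 + \sqrt{B(t)}} .

Maximizing this quantity does not require assuming a signal cross section, which is what makes it attractive as a training objective; its non-differentiability (through the counting of B) is what the approximation above addresses.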
DUNE is a cutting-edge experiment aiming to study neutrinos in detail, with a
special focus on the flavor oscillation mechanism. ProtoDUNE-SP (the prototype
of the DUNE Far Detector Single-Phase TPC) has been built and operated at CERN,
and a full suite of reconstruction tools has been developed. Pandora is a
multi-algorithm framework that implements reconstruction tools: a large number...
Deep Learning (DL) methods and Computer Vision are becoming important tools for event reconstruction in particle physics detectors. In this work, we report on the use of Submanifold Sparse Convolutional Neural Networks (SparseNet) for the classification of track and shower hits from a DUNE prototype liquid-argon detector at CERN (ProtoDUNE). By taking advantage of the three-dimensional nature...
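A minimal sketch of the sparse-tensor idea behind this hit classification, here expressed with the MinkowskiEngine library (an assumption: the work's SparseNet is a submanifold sparse CNN; with stride 1, MinkowskiEngine convolutions likewise evaluate only at the occupied voxels). Channel counts and the 128-voxel grid are illustrative.

    # Minimal sketch: 3D sparse convolutions over occupied voxels only.
    import torch
    import MinkowskiEngine as ME

    net = torch.nn.Sequential(
        ME.MinkowskiConvolution(1, 16, kernel_size=3, dimension=3),
        ME.MinkowskiReLU(),
        ME.MinkowskiConvolution(16, 32, kernel_size=3, dimension=3),
        ME.MinkowskiReLU(),
        ME.MinkowskiGlobalPooling(),
        ME.MinkowskiLinear(32, 2),   # e.g. track vs. shower score
    )

    # Only occupied voxels are stored: (batch, x, y, z) integer coordinates
    # plus one feature (e.g. deposited charge) per hit.
    coords = torch.randint(0, 128, (1000, 4), dtype=torch.int32)
    coords[:, 0] = 0                 # single event in the batch
    feats = torch.rand(1000, 1)
    x = ME.SparseTensor(feats, coordinates=coords)
    logits = net(x)
    print(logits.F.shape)            # (1, 2) class scores

Because empty voxels are never materialized, memory and compute scale with the number of hits rather than with the detector volume.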
In High Energy Physics (HEP) experiments, it is useful for physics analysis and outreach if the event display software can provide rich visualization effects. Unity is professional software for 3D modeling and animation production. GDML format files are commonly used for detector description in HEP experiments. In this work, we present a method for automating the import of GDML...
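A minimal sketch of the first step such an automated import needs: since GDML is plain XML, the standard library suffices to enumerate the solids and logical volumes before converting them to meshes. The file name is illustrative, and this is only the parsing stage, not the Unity conversion itself.

    # Minimal sketch: enumerate solids and volumes in a GDML file.
    import xml.etree.ElementTree as ET

    root = ET.parse("detector.gdml").getroot()   # hypothetical GDML file

    for solid in root.find("solids"):
        print("solid:", solid.tag, solid.get("name"))

    for volume in root.find("structure").findall("volume"):
        solidref = volume.find("solidref").get("ref")
        print("volume:", volume.get("name"), "->", solidref)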
In this contribution we will show an innovative approach, based on Bayesian networks and linear algebra, that provides a solid and complete solution to the problem of the detector response and the related systematic effects. As a case study, we will consider the Dark Matter (DM) direct detection searches. In fact, in the past decades, a huge experimental effort has been devoted to ...
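To make the linear-algebra viewpoint concrete: the detector response acts as a matrix R that folds a true spectrum into an observed one. The 3-bin response below is purely illustrative, not from this contribution.

    # Minimal sketch: detector response as a matrix acting on a spectrum.
    import numpy as np

    # Rows: observed bins; columns: true bins. Off-diagonal entries model
    # migration between neighbouring energy bins.
    R = np.array([
        [0.8, 0.1, 0.0],
        [0.2, 0.8, 0.2],
        [0.0, 0.1, 0.8],
    ])

    true_spectrum = np.array([100.0, 50.0, 10.0])
    observed = R @ true_spectrum          # what the detector would record

    # Naive inversion exists but amplifies statistical fluctuations,
    # which is why probabilistic treatments such as Bayesian networks
    # are attractive for the unfolding problem.
    print(np.linalg.solve(R, observed))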