Conveners
Accelerators: Tue PM
- Felice Pantaleo (CERN)
- Simon George (Royal Holloway, University of London)
Accelerators: Wed PM
- Stewart Martin-Haugh (Science and Technology Facilities Council STFC (GB))
- Dorothea Vom Bruch (Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France)
High energy physics has a constant demand for random number generators (RNGs) with high statistical quality. In this paper, we present ROOT's implementation of the RANLUX++ generator. We discuss the choice of relying only on standard C++ for portability reasons. Building on an initial implementation, we describe a set of optimizations to increase generator speed. This allows us to reach...
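The abstract does not show code, but the core low-level operation behind a RANLUX++-style LCG with a very large modulus is extended-precision multiplication of 64-bit limbs. The sketch below is our illustration, not ROOT's code (the helper name mul64x64 is ours); it shows how a full 64x64-to-128-bit product can be written in standard C++ alone, the kind of portable building block such an implementation needs.

```cpp
#include <cstdint>

// Hypothetical helper: full 64x64 -> 128 bit product using only
// standard C++ (no __int128, no compiler intrinsics), by splitting
// each operand into 32-bit halves.
void mul64x64(std::uint64_t a, std::uint64_t b,
              std::uint64_t &hi, std::uint64_t &lo) {
  const std::uint64_t aLo = a & 0xFFFFFFFFu, aHi = a >> 32;
  const std::uint64_t bLo = b & 0xFFFFFFFFu, bHi = b >> 32;

  const std::uint64_t ll = aLo * bLo;  // contributes to bits   0..63
  const std::uint64_t lh = aLo * bHi;  // contributes to bits  32..95
  const std::uint64_t hl = aHi * bLo;  // contributes to bits  32..95
  const std::uint64_t hh = aHi * bHi;  // contributes to bits  64..127

  // Accumulate the middle terms and track the carry into the high word.
  const std::uint64_t mid = (ll >> 32) + (lh & 0xFFFFFFFFu) + (hl & 0xFFFFFFFFu);
  lo = (ll & 0xFFFFFFFFu) | (mid << 32);
  hi = hh + (lh >> 32) + (hl >> 32) + (mid >> 32);
}
```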
The HIBEAM/NNBAR program is a proposed two-stage experiment for the European Spallation Source focusing on searches for baryon number violation via processes in which neutrons convert to anti-neutrons. This paper outlines the computing and detector simulation framework for the HIBEAM/NNBAR program. The simulation is based on predictions of neutron flux and neutronics together with signal and...
The management of separate memory spaces of CPUs and GPUs brings an additional burden to the development of software for GPUs. To help with this, CUDA unified memory provides a single address space that can be accessed from both the CPU and the GPU. The automatic data transfer mechanism is based on page faults generated by memory accesses. This mechanism has a performance cost that can be with...
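As a concrete illustration of the mechanism described here (a generic sketch, not code from this contribution), the snippet below allocates unified memory with cudaMallocManaged, touches it on both host and device, and uses cudaMemPrefetchAsync to migrate pages explicitly instead of relying on demand paging triggered by faults.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *x, int n, float factor) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) x[i] *= factor;  // device access may fault pages onto the GPU
}

int main() {
  const int n = 1 << 20;
  float *x = nullptr;
  cudaMallocManaged(&x, n * sizeof(float));  // single pointer, valid on CPU and GPU

  for (int i = 0; i < n; ++i) x[i] = 1.0f;   // CPU access through the same pointer

  // Optional: prefetch to the GPU up front to avoid page-fault overhead.
  int device = 0;
  cudaGetDevice(&device);
  cudaMemPrefetchAsync(x, n * sizeof(float), device);

  scale<<<(n + 255) / 256, 256>>>(x, n, 2.0f);
  cudaDeviceSynchronize();  // also makes the managed data visible to the CPU again

  std::printf("x[0] = %f\n", x[0]);
  cudaFree(x);
  return 0;
}
```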
Programming for a diverse set of compute accelerators in addition to the CPU is a challenge. Maintaining separate source code for each architecture would require significant effort, and developing new algorithms would be daunting if the work had to be repeated many times. Fortunately, several portability technologies are on the market, such as Alpaka, Kokkos, and SYCL. These technologies aim to...
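To make the appeal of such technologies concrete (our minimal sketch, not taken from this contribution), the Kokkos fragment below expresses a per-element kernel once in portable C++; the same source can then be compiled against a CUDA, HIP, SYCL, or OpenMP backend.

```cpp
#include <Kokkos_Core.hpp>

int main(int argc, char *argv[]) {
  Kokkos::initialize(argc, argv);
  {
    const int n = 1 << 20;
    // Views are allocated in the default execution space's memory.
    Kokkos::View<float *> x("x", n), y("y", n);

    // One source-level kernel, runnable on any enabled backend.
    Kokkos::parallel_for("axpy", n, KOKKOS_LAMBDA(const int i) {
      y(i) = 2.0f * x(i) + y(i);
    });
    Kokkos::fence();  // wait for the asynchronous kernel to finish
  }
  Kokkos::finalize();
  return 0;
}
```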
We present the porting to heterogeneous architectures of the algorithm that applies linear transformations to raw energy deposits in the CMS High Granularity Calorimeter (HGCAL). This is the first heterogeneous algorithm to be fully integrated with HGCAL’s reconstruction chain. After introducing the latter and giving a brief description of the structural components of HGCAL relevant for...
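Schematically, a per-channel linear transformation of raw deposits is an embarrassingly parallel kernel, which is what makes it a natural first candidate for a GPU port. The kernel below is a hedged sketch of that general pattern only; the array names and the gain/pedestal form are our assumptions, not the CMS HGCAL data formats.

```cpp
#include <cuda_runtime.h>

// Hypothetical per-channel calibration: energy = gain * adc - pedestal.
// One thread per channel; arrays and constants are illustrative only.
__global__ void calibrate(const float *adc, const float *gain,
                          const float *pedestal, float *energy, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) energy[i] = gain[i] * adc[i] - pedestal[i];
}
```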
ALICE will significantly increase its Pb–Pb data taking rate from the 1 kHz of triggered readout in Run 2 to 50 kHz of continuous readout for LHC Run 3. Updated tracking detectors are installed for Run 3 and a new two-phase computing strategy is employed. In the first synchronous phase during the data taking, the raw data is compressed for storage to an on-site disk buffer and the...
Opticks is an open source project that accelerates optical photon simulation by integrating NVIDIA GPU ray tracing, accessed via NVIDIA OptiX, with Geant4 toolkit-based simulations. A single NVIDIA Turing architecture GPU has been measured to provide optical photon simulation speedup factors exceeding 1500 times single-threaded Geant4 with a full JUNO analytic GPU geometry automatically...
The LZ collaboration aims to directly detect dark matter by using a liquid xenon Time Projection Chamber (TPC). In order to probe the dark matter signal, observed signals are compared with simulations that model the detector response. The most computationally expensive aspect of these simulations is the propagation of photons in the detector’s sensitive volume. For this reason, we propose to...
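Photon propagation parallelizes naturally over photons, which is what makes it a promising GPU target. The toy kernel below is our illustration only, not the LZ simulation: it assigns one thread per photon and samples an exponential free path with cuRAND before advancing the position; all names and the physics are simplified stand-ins.

```cpp
#include <curand_kernel.h>

// Toy illustration: one thread per photon, sampling an exponential
// step length (attenuation length lambda) and advancing the position.
__global__ void propagate(float3 *pos, const float3 *dir, int n,
                          float lambda, unsigned long long seed) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i >= n) return;

  curandState state;
  curand_init(seed, i, 0, &state);

  // Sample s ~ Exp(1/lambda): s = -lambda * ln(u), with u in (0, 1].
  float s = -lambda * logf(curand_uniform(&state));
  pos[i].x += s * dir[i].x;
  pos[i].y += s * dir[i].y;
  pos[i].z += s * dir[i].z;
}
```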
In these proceedings, we present MadFlow, a new framework for the automation of Monte Carlo (MC) simulation on graphics processing units (GPUs) for particle physics processes. In order to automate MC simulation for a generic number of processes, we design a program that lets the user simulate custom processes through the MG5_aMC@NLO framework. The pipeline includes a...
Celeritas is a new computational transport code designed for high-performance simulation of high-energy physics detectors. This work describes some of its current capabilities and the design choices that enable the rapid development of efficient on-device physics. The abstractions that underpin the code design facilitate low-level performance tweaks that require no changes to the...
The increasing number of high-performance computing centers around the globe is providing physicists and other researchers with access to heterogeneous systems, comprising multiple central processing units and graphics processing units per node, across various platforms. However, domain scientists more often than not have limited resources, such that writing multiple...
Modern experiments in high energy physics analyze millions of events recorded in particle detectors to select the events of interest and measure physics parameters. These data can often be stored as tabular data in files containing detector information and reconstructed quantities. Current techniques for event selection in these files lack the scalability needed for high performance...
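As a sketch of the tabular layout this refers to (our generic example, not the system described in the contribution), event selection over columnar data reduces to applying a predicate across parallel arrays and collecting the indices of passing events; the column names and cuts below are hypothetical.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Generic columnar event selection: each column is a parallel array,
// and a selection is a single pass producing the passing event indices.
std::vector<std::size_t> select(const std::vector<float> &pt,
                                const std::vector<float> &eta,
                                float ptMin, float etaMax) {
  std::vector<std::size_t> passing;
  for (std::size_t i = 0; i < pt.size(); ++i)
    if (pt[i] > ptMin && std::abs(eta[i]) < etaMax)
      passing.push_back(i);
  return passing;
}
```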