Conveners
Track 1: Computing Technology for Physics Research
- Chang-Seong Moon (Kyungpook National University (KR))
- Patricia Mendez Lorenzo (CERN)
- Gordon Watts (University of Washington (US))
- Marilena Bandieramonte (University of Pittsburgh (US))
- Matthew Feickert (Univ. Illinois at Urbana Champaign (US))
Description
This track covers topics related to the enabling technologies that shape how we do physics analysis and research.
More information on the scientific programme: https://indico.cern.ch/event/855454/program
Lu Wang (Computing Center, Institute of High Energy Physics, CAS) | 29/11/2021, 17:20 | Oral
Problematic I/O patterns are the major cause of low-efficiency HEP jobs. When a computing cluster is partially occupied by jobs with problematic I/O patterns, the overall CPU efficiency drops dramatically. In a cluster with thousands of users, locating the source of an anomalous workload is not an easy task. Automatic anomaly detection of I/O behavior can largely alleviate the...
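The abstract leaves the detection method unspecified; as a minimal illustration of automatic I/O anomaly detection, a simple z-score rule over per-job I/O rates might look like the sketch below (job names, rates, and the threshold are invented, not from the talk):

```python
# Minimal sketch: flag jobs whose I/O rate deviates strongly from the
# cluster-wide distribution. The real system in the talk is more
# sophisticated; this only illustrates the idea.
from statistics import mean, stdev

def flag_anomalous_jobs(io_rates, threshold=3.0):
    """io_rates: {job_id: ops_per_second}; returns ids beyond `threshold` sigmas."""
    values = list(io_rates.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [job for job, rate in io_rates.items()
            if abs(rate - mu) / sigma > threshold]

rates = {f"job{i}": 100.0 for i in range(20)}
rates["job_bad"] = 100000.0  # a pathological I/O pattern
print(flag_anomalous_jobs(rates))  # ['job_bad']
```

In practice one would monitor several metrics (read sizes, open/close frequency, seek patterns) rather than a single rate, but the flagging logic generalizes directly.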
David Rousseau (IJCLab-Orsay) | 29/11/2021, 17:40 | Oral
Future HEP experiments will have ever-higher read-out rates. It is therefore essential to explore new hardware paradigms for large-scale computations. In this work we consider the Optical Processing Unit (OPU) from [LightOn][1], an optical device that computes, in a fast analog way, the multiplication of an input vector of size 1 million by a 1 million x 1 million fixed random matrix,...
Bruno Alves (LIP Laboratorio de Instrumentacao e Fisica Experimental de Part) | 29/11/2021, 18:00 | Oral
We present a decisive milestone in the challenging event reconstruction of the CMS High Granularity Calorimeter (HGCAL): the deployment to the official CMS software of the GPU version of the clustering algorithm (CLUE). The direct GPU linkage of CLUE to the preceding energy deposits calibration step is thus made possible, avoiding data transfers between host and device, further extending the...
Dr Sofia Vallecorsa (CERN) | 29/11/2021, 18:20 | Oral
The Worldwide LHC Computing Grid (WLCG) is the infrastructure enabling the storage and processing of the large amount of data generated by the LHC experiments, in particular the ALICE experiment. With the foreseen increase in the computing requirements of the future High-Luminosity LHC experiments, a data placement strategy which increases the efficiency of the WLCG computing...
Stephen Nicholas Swatman (University of Amsterdam (NL)) | 29/11/2021, 18:40 | Oral
Programmers using the C++ programming language are increasingly taught to manage memory implicitly through containers provided by the C++ standard library. However, many heterogeneous programming platforms require explicit allocation and deallocation of memory, which is often discouraged in “best practice” C++ programming, and this discrepancy in memory management strategies can be daunting...
Ingo Müller (ETH Zurich) | 29/11/2021, 19:00 | Oral
In the domain of high-energy physics (HEP), query languages in general and SQL in particular have found limited acceptance. This is surprising since HEP data analysis matches the SQL model well: the data is fully structured and queried using mostly standard operators. To gain insights on why this is the case, we perform a comprehensive analysis of six diverse, general-purpose data processing...
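To make the premise concrete, here is a small, invented example of the kind of fully structured HEP-style selection that maps naturally onto standard SQL operators, using Python's built-in SQLite (schema, data, and cut values are illustrative, not taken from the study):

```python
# Illustrative only: a structured HEP selection expressed in SQL,
# run against an in-memory SQLite database.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE muons (event_id INTEGER, pt REAL, eta REAL)")
con.executemany("INSERT INTO muons VALUES (?, ?, ?)", [
    (1, 25.0, 0.5), (1, 30.0, -1.2),
    (2, 8.0, 2.1),  (3, 45.0, 0.1),
])
# Standard operators express the cut: events with at least one muon
# above 20 GeV inside an assumed acceptance of |eta| < 2.5.
rows = con.execute(
    "SELECT DISTINCT event_id FROM muons "
    "WHERE pt > 20 AND ABS(eta) < 2.5 ORDER BY event_id"
).fetchall()
print([r[0] for r in rows])  # [1, 3]
```

The declarative form leaves the engine free to choose the execution strategy, which is one of the trade-offs the contribution examines.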
Christina Agapopoulou (Centre National de la Recherche Scientifique (FR)) | 30/11/2021, 17:00 | Oral
From 2022 onward, the upgraded LHCb experiment will use a triggerless readout system collecting data at an event rate of 30 MHz. A software-only High Level Trigger will enable unprecedented flexibility for trigger selections. During the first stage (HLT1), a subset of the full offline track reconstruction for charged particles is run to select particles of interest based on single or...
Andrea Bocci (CERN), CMS Collaboration | 30/11/2021, 17:20 | Oral
At the start of the upcoming LHC Run-3, CMS will deploy a heterogeneous High Level Trigger farm composed of x86 CPUs and NVIDIA GPUs. In order to guarantee that the HLT can run on machines without any GPU accelerators - for example as part of the large scale Monte Carlo production running on the grid, or when individual developers need to optimise specific triggers - the HLT reconstruction has...
Nuno Dos Santos Fernandes (LIP Laboratorio de Instrumentacao e Fisica Experimental de Particulas (PT)) | 30/11/2021, 17:40 | Oral
After the Phase II Upgrade of the LHC, expected for the period between 2025-26, the average number of collisions per bunch crossing at the LHC will increase from the Run-2 average value of 36 to a maximum of 200 pile-up proton-proton interactions per bunch crossing. The ATLAS detector will also undergo a major upgrade programme to be able to operate in such harsh conditions with the...
Kai Lukas Unger (Karlsruhe Institute of Technology (KIT)) | 30/11/2021, 18:00 | Oral
The z-vertex track trigger estimates the collision origin in the Belle II experiment using neural networks to reduce the background. The main part is a pre-trained multilayer perceptron. The task of this perceptron is to estimate the z-vertex of the collision to suppress background from outside the interaction point. For this, a low latency real-time FPGA implementation is needed. We present...
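The inference step of such a pre-trained perceptron can be sketched as a plain forward pass; the weights, layer sizes, and tanh activation below are placeholders, not the actual Belle II network:

```python
# Sketch of MLP inference only: a fixed (pre-trained) network mapping
# track parameters to a z-vertex estimate. All numbers are invented.
import math

def mlp_forward(x, W1, b1, W2, b2):
    # hidden layer with tanh activation (a common low-latency choice)
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    # linear output: a single z-vertex value
    return sum(w * hi for w, hi in zip(W2, h)) + b2

W1 = [[0.5, -0.2], [0.1, 0.3]]   # hidden-layer weights (placeholder)
b1 = [0.0, 0.1]                  # hidden-layer biases
W2 = [1.0, -1.0]                 # output weights
b2 = 0.05                        # output bias
z = mlp_forward([0.4, 0.8], W1, b1, W2, b2)
print(round(z, 4))
```

On the FPGA the same arithmetic is laid out as fixed-point pipelines rather than Python loops, which is where the low-latency engineering effort of the talk lies.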
Andrei Gheata (CERN) | 30/11/2021, 18:20 | Oral
Several online and offline applications in high-energy physics have benefitted from running on graphics processing units (GPUs), taking advantage of their processing model. To date, however, general HEP particle transport simulation is not one of them, due to difficulties in mapping the complexity of its components and workflow to the GPU’s massive parallelism features. Deep code stacks, with...
Ioana Ifrim (Princeton University (US)) | 30/11/2021, 18:40 | Oral
Automatic Differentiation (AD) is instrumental for science and industry. It is a tool to evaluate the derivative of a function specified through a computer program. AD's application domains span from Machine Learning to Robotics to High Energy Physics. Computing gradients with the help of AD is guaranteed to be more precise than the numerical alternative and to have at most a constant...
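A toy forward-mode AD implementation with dual numbers illustrates why AD gradients are exact (up to floating point) where finite differences are only approximate; this sketch is generic and is not the compiler-based tooling discussed in the talk:

```python
# Forward-mode AD via dual numbers: each value carries its derivative.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __mul__(self, other):   # product rule
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)
    def __add__(self, other):   # sum rule
        return Dual(self.val + other.val, self.der + other.der)

def f(x):  # f(x) = x*x + x, so f'(x) = 2x + 1
    return x * x + x

x = Dual(3.0, 1.0)          # seed d(x)/dx = 1
print(f(x).der)             # exact: 7.0
h = 1e-5                    # finite difference: carries truncation error
print((f(Dual(3.0 + h)).val - f(Dual(3.0)).val) / h)
```

The exactness comes from applying the chain rule symbolically per operation, rather than subtracting nearby function values.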
Marco Barbone (Imperial College London) | 30/11/2021, 19:00 | Oral
We present results from a stand-alone simulation of electron single Coulomb scattering, implemented completely on an FPGA architecture and compared with an identical simulation on a standard CPU. FPGA architectures offer unprecedented speed-up capability for Monte Carlo simulations, however with the caveats of lengthy development cycles and resource limitations, particularly in terms of...
Wenhao Huang (Shandong University) | 01/12/2021, 17:00 | Oral
The Super Tau Charm Facility (STCF) is a high-luminosity electron–positron collider proposed in China for the study of charm and tau physics. The Offline Software of the Super Tau Charm Facility (OSCAR) is designed and developed based on SNiPER, a lightweight common framework for HEP experiments. Several state-of-the-art software packages and tools from the HEP community are adopted, such as the Detector...
Yixiang Yang (Institute of High Energy Physics) | 01/12/2021, 17:20 | Oral
The JUNO experiment is being built mainly to determine the neutrino mass hierarchy by detecting neutrinos generated in the Yangjiang and Taishan nuclear plants in southern China. The detector will record 2 PB of raw data every year, but each day it can collect only about 60 neutrino events scattered among a huge number of background events. Selecting such extremely sparse neutrino events poses a big challenge...
Riccardo Maria Bianchi (University of Pittsburgh (US)) | 01/12/2021, 17:40 | Oral
The GeoModel toolkit is an open-source suite of standalone, lightweight tools to describe, visualize, test, and debug detector descriptions and geometries for HEP standalone studies and experiments. GeoModel has been designed with independence and responsiveness in mind and offers a development environment free of other large HEP tools and frameworks, and with...
Joana Niermann (Georg August Universitaet Goettingen (DE)) | 01/12/2021, 18:00 | Oral
A detailed geometry description is essential to any high quality track reconstruction application. In current C++ based track reconstruction software libraries this is often achieved by an object oriented, polymorphic geometry description that implements different shapes and objects by extending a common base class. Such a design, however, has been shown to be problematic when attempting to...
Florian Till Groetschla (KIT - Karlsruhe Institute of Technology (DE)) | 01/12/2021, 18:20 | Oral
The performance of I/O intensive applications is largely determined by the organization of data and the associated insertion/extraction techniques. In this paper we present the design and implementation of an application that is targeted at managing data received (up to ~150 Gb/s payload throughput) into host DRAM, buffering data for several seconds, matched with the DRAM size, before being...
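The buffering pattern described (hold several seconds of incoming data, bounded by DRAM size, overwriting the oldest data once full) can be sketched with a fixed-capacity ring buffer; the capacity and chunk format here are invented:

```python
# Sketch of a DRAM-sized ring buffer: fixed capacity, oldest chunk
# evicted on overflow. Real implementations manage raw memory regions;
# this only illustrates the insertion/eviction policy.
from collections import deque

class RingBuffer:
    def __init__(self, capacity_chunks):
        # deque with maxlen silently drops the oldest entry when full
        self.buf = deque(maxlen=capacity_chunks)
    def insert(self, chunk):
        self.buf.append(chunk)
    def contents(self):
        return list(self.buf)

buf = RingBuffer(capacity_chunks=3)
for ts in range(5):               # insert 5 chunks into capacity 3
    buf.insert({"ts": ts})
print([c["ts"] for c in buf.contents()])  # [2, 3, 4]
```

Sizing the capacity to the buffering window times the ~150 Gb/s input rate is what ties the data structure to the DRAM budget mentioned in the abstract.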
Alexei Klimentov (Brookhaven National Laboratory (US)) | 01/12/2021, 18:40 | Oral
The High Luminosity upgrade to the LHC, which aims for a ten-fold increase in the luminosity of proton-proton collisions at an energy of 14 TeV, is expected to start operation in 2028/29, and will deliver an unprecedented volume of scientific data at the multi-exabyte scale. This amount of data has to be stored and the corresponding storage system must ensure fast and reliable data delivery...
Piotr Konopka (CERN) | 01/12/2021, 19:00 | Oral
The ALICE experiment at the CERN LHC (Large Hadron Collider) is undertaking a major upgrade during the LHC Long Shutdown 2 in 2019-2021, which includes a new computing system called O2 (Online-Offline). The raw data input from the ALICE detectors will increase a hundredfold, up to 3.5 TB/s. By reconstructing the data online, it will be possible to compress the data stream down to 100 GB/s...
Nick Smith (Fermi National Accelerator Lab. (US)) | 02/12/2021, 11:00 | Oral
Query languages for High Energy Physics (HEP) are an ever present topic within the field. A query language that can efficiently represent the nested data structures that encode the statistical and physical meaning of HEP data will help analysts by ensuring their code is more clear and pertinent. As the result of a multi-year effort to develop an in-memory columnar representation of high energy...
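The core idea behind an in-memory columnar representation of nested HEP data can be illustrated in a few lines: a flat values array plus an offsets array replaces a list of per-event Python lists (the values below are invented, and real libraries use typed buffers rather than Python lists):

```python
# Columnar layout of jagged data: one flat array of all particle pts,
# plus event-boundary offsets into it.
pts = [33.1, 21.7, 10.2, 55.0, 12.5, 8.9]   # all pts, flattened
offsets = [0, 2, 3, 6]                      # event boundaries into `pts`

def event(i):
    """Reconstruct the i-th event's particle list on demand."""
    return pts[offsets[i]:offsets[i + 1]]

print(event(0))  # [33.1, 21.7]
print(event(2))  # [55.0, 12.5, 8.9]

# Whole-column operations touch the flat array without per-event objects:
leading = [max(event(i)) for i in range(len(offsets) - 1)]
print(leading)   # [33.1, 10.2, 55.0]
```

A query language over such a layout can express per-event reductions like `leading` directly, which is the clarity gain the abstract refers to.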
Jim Pivarski (Princeton University) | 02/12/2021, 11:20 | Oral
Awkward Array 0.x was written entirely in Python, and Awkward Array 1.x was a fresh rewrite with a C++ core and a Python interface. Ironically, the Awkward Array 2.x project is translating most of that core back into Python (leaving the interface untouched). This is because we discovered surprising and subtle issues in Python-C++ integration that can be avoided with a more minimal coupling: we...
Gene Van Buren (Brookhaven National Laboratory), Jerome Lauret (Brookhaven National Laboratory), Ivan Amos Cali (Massachusetts Inst. of Technology (US)), Dr Juan Gonzalez (Accelogic), Philippe Canal (Fermi National Accelerator Lab. (US)), Mr Rafael Nunez, Yueyang Ying (Massachusetts Inst. of Technology (US)) | 02/12/2021, 11:40 | Oral
For the last 7 years, Accelogic has pioneered and perfected a radically new theory of numerical computing codenamed "Compressive Computing", which has a profound impact on real-world computer science [1]. At the core of this new theory is the discovery of one of its fundamental theorems, which states that, under very general conditions, the vast majority (typically between 70% and 80%) of...
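The talk's actual method is not described in this abstract; purely as a generic illustration of the underlying observation (most stored bits contribute little to a result), the sketch below zeroes the low mantissa bits of a float32 and checks that the relative error stays small:

```python
# Generic illustration only, NOT the Compressive Computing algorithm:
# drop the low 16 of a float32's 23 mantissa bits and measure the error.
import struct

def truncate_mantissa(x, drop_bits=16):
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    bits &= ~((1 << drop_bits) - 1)          # zero the low mantissa bits
    (y,) = struct.unpack("<f", struct.pack("<I", bits))
    return y

x = 3.14159265
y = truncate_mantissa(x)
print(abs(x - y) / x < 1e-2)  # True: value survives losing most bits
```

Storing or transmitting only the surviving bits is one simple way such redundancy can be exploited; the compression/decompression engines in the talk are far more general.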
Alina Lazar (Youngstown State University) | 02/12/2021, 12:00 | Oral
Recently, graph neural networks (GNNs) have been successfully used for a variety of reconstruction problems in HEP. In this work, we develop and evaluate an end-to-end C++ implementation for inferencing a charged particle tracking pipeline based on GNNs. The pipeline steps include data encoding, graph building, edge filtering, GNN inference, and track labeling, and it runs on both GPUs and CPUs. The ONNX...
Sophie Berkman (Fermi National Accelerator Laboratory) | 02/12/2021, 12:20 | Oral
Neutrinos are particles that interact rarely, so identifying them requires large detectors which produce lots of data. Processing this data with the computing power available is becoming more difficult as the detectors increase in size to reach their physics goals. Liquid argon time projection chamber (LArTPC) neutrino experiments are expected to grow in the next decade to have 100 times more...