-
Lucia Silvestris (Universita e INFN, Bari (IT))24/10/2022, 09:00
-
Vito Manzari (INFN - Bari)24/10/2022, 09:02
-
Dr Antonio Marrone (Univ. of Bari), Roberto Bellotti (Dipartimento di Fisica, Università degli Studi di Bari)24/10/2022, 09:07
-
Lucia Silvestris (Universita e INFN, Bari (IT))24/10/2022, 09:12
-
Estela Suarez24/10/2022, 09:30Plenary
The European Processor Initiative (EPI) is an EU-funded project that aims to develop and implement a new family of European processors for high performance computing, artificial intelligence, and a range of emerging application domains. A variety of processor technologies are being implemented as part of EPI. They are divided into two main development lines: the General Purpose Processor (GPP)...
Go to contribution page -
Prof. Rui Li (Deggendorf Institute of Technology)24/10/2022, 10:00Plenary
Transport phenomena remain among the most challenging unsolved problems in computational physics due to the inherent nature of the Navier-Stokes equations. As a revolutionary technology, quantum computing opens a grand new perspective for numerical simulations, for instance computational fluid dynamics (CFD). In this plenary talk, starting with an overview of quantum computing including...
Go to contribution page -
Helena Liebelt (Deggendorf Institute of Technology)24/10/2022, 10:30Plenary
The talk provides a short overview of QT history leading up to current times. Let's take a hard look at where we are in terms of QT and what major pitfalls to expect. The presentation will focus particularly on the issue of the growing talent gap.
Go to contribution page -
Michael Boehler (Albert Ludwigs Universitaet Freiburg (DE))24/10/2022, 11:00
The goal of this study is to understand the observed differences in ATLAS software performance, when comparing results measured under ideal laboratory conditions with those from ATLAS computing resources on the Worldwide LHC Computing Grid (WLCG). The laboratory results are based on the full simulation of a single ttbar event and use dedicated, local hardware. In order to have a common and...
Go to contribution page -
Dr Guang Zhao (Institute of High Energy Physics)24/10/2022, 11:00
Ionization of matter by charged particles is the main mechanism for particle identification in gaseous detectors. Traditionally, the ionization is measured by the total energy loss (dE/dx). The concept of cluster counting, which measures the number of clusters per unit track length (dN/dx), was proposed in the 1970s. The dN/dx measurement can avoid many sources of fluctuations from the dE/dx...
Go to contribution page -
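The fluctuation argument behind cluster counting can be sketched with a toy Monte Carlo (all distributions and numbers below are invented for illustration and are not taken from the contribution): a Poisson cluster count fluctuates less, relative to its mean, than a total energy loss that also inherits heavy-tailed per-cluster energy fluctuations.

```python
import numpy as np

rng = np.random.default_rng(42)
n_tracks = 10_000
mean_clusters = 30  # toy mean number of ionization clusters per track

# dN/dx toy: cluster counts fluctuate only as a Poisson process
n_clusters = rng.poisson(mean_clusters, size=n_tracks)

# dE/dx toy: the total energy additionally inherits heavy-tailed per-cluster
# energy fluctuations (an exponential stands in for the Landau tail of real
# energy-loss distributions)
cluster_energy = rng.exponential(1.0, size=(n_tracks, mean_clusters * 3))
de_dx = np.array([cluster_energy[i, :n].sum() for i, n in enumerate(n_clusters)])

def rel_spread(x):
    """Relative fluctuation: standard deviation over mean."""
    return x.std() / x.mean()

print(f"relative spread dN/dx: {rel_spread(n_clusters):.3f}")
print(f"relative spread dE/dx: {rel_spread(de_dx):.3f}")
```

In this toy the dE/dx observable is visibly broader than the pure counting observable, which is the qualitative point the abstract makes.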
Diego Ciangottini (INFN, Perugia (IT))24/10/2022, 11:00
The challenges expected for the HL-LHC era, both in terms of storage and computing resources, provide LHC experiments with a strong motivation for evaluating ways of re-thinking their computing models at many levels. In fact, a large part of the R&D efforts of the CMS experiment has been focused on optimizing the computing and storage resource utilization for the data analysis, and Run 3 could...
Go to contribution page -
Fabrizio Alfonsi (Universita e INFN, Bologna (IT))24/10/2022, 11:00
The High Energy Physics world will face challenging trigger requests in the next decade. In particular, the luminosity increase to 5–7.5 × 10³⁴ cm⁻² s⁻¹ at the LHC will push major experiments such as ATLAS to exploit online tracking for their inner detectors to reach 10 kHz of events from 1 MHz of Calorimeter and Muon Spectrometer triggers. The project described here is a proposal for a tuned...
Go to contribution page -
Thomas Britton24/10/2022, 11:00
Hydra is an AI system employing off-the-shelf computer vision technologies aimed at autonomously monitoring data quality. Data quality monitoring is an essential step in modern experimentation and Nuclear Physics is no exception. Certain failures can be identified through alarms (e.g. electrical heartbeats) while others are more subtle and often require expert knowledge to identify and...
Go to contribution page -
Biying Hu (Sun Yat-sen University)24/10/2022, 11:00
High energy physics experiments are pushing forward precision measurements and searching for new physics beyond the Standard Model. Simulating and generating massive amounts of data to meet the requirements of physics is an urgent need, and harnessing the existing power of supercomputers is one of the most active areas in high energy physics computing. Taking the BESIII experiment as an illustration, we...
Go to contribution page -
Rui Zhang (University of Wisconsin Madison (US))24/10/2022, 11:00
AtlFast3 is the next generation of high precision fast simulation in ATLAS that is being deployed by the collaboration and was successfully used for the simulation of 7 billion events in Run 2 data taking conditions. AtlFast3 combines a parametrization-based approach known as FastCaloSimV2 and a machine-learning based tool that exploits Generative Adversarial Networks (FastCaloGAN) for the...
Go to contribution page -
Antonio Vagnerini (Università di Torino)24/10/2022, 11:00
The inner tracking system of the CMS experiment, consisting of the silicon pixel and strip detectors, is designed to provide a precise measurement of the momentum of charged particles and to perform the primary and secondary vertex reconstruction. The movements of the individual substructures of the tracker detectors are driven by the change in the operating conditions during data taking....
Go to contribution page -
Brunella D'Anzi (Universita e INFN, Bari (IT)), CMS Collaboration24/10/2022, 11:00
Accurate reconstruction of charged particle trajectories and measurement of their parameters (tracking) is one of the major challenges of the CMS experiment. A precise and efficient tracking is one of the critical components of the CMS physics program as it impacts the ability to reconstruct the physics objects needed to understand proton-proton collisions at the LHC. In this work, we present...
Go to contribution page -
CMS collaboration, Marc Huwiler (University of Zurich (CH))24/10/2022, 11:00
Building on top of the multithreading functionality that was introduced in Run-2, the CMS software framework (CMSSW) has been extended in Run-3 to offload part of the physics reconstruction to NVIDIA GPUs. The first application of this new feature is the High Level Trigger (HLT): the new computing farm installed at the beginning of Run-3 is composed of 200 nodes, and for the first time each...
Go to contribution page -
Lukas Alexander Heinrich (CERN)24/10/2022, 11:00
High Energy Physics (HEP) has been using column-wise data stored in synchronized containers, such as most prominently ROOT’s TTree, for decades. These containers have proven to be very powerful as they combine row-wise association capabilities needed by most HEP event processing frameworks (e.g. Athena) with column-wise storage, which typically results in better compression and more efficient...
Go to contribution page -
Stefano Lacaprara (INFN sezione di Padova)24/10/2022, 11:00
The Belle II experiment has been collecting data since 2019 at the second generation e+/e- B-factory SuperKEKB in Tsukuba, Japan. The goal of the experiment is to explore new physics via high precision measurement in flavor physics. This is achieved by collecting a large amount of data that needs to be calibrated promptly for fast reconstruction and recalibrated thoroughly for the final...
Go to contribution page -
Xiaoyu Liu (Central China Normal University CCNU (CN)), Xiaoyu Liu (Institute of High Energy Physics, CAS)24/10/2022, 11:00
Computing in high energy physics is a typical data-intensive application, especially data analysis, which requires access to large amounts of data. The traditional computing system adopts the "computing-storage" separation model, which leads to large data movements during the computing process and also increases transmission delay and network load. Therefore, it can...
Go to contribution page -
Claudio Caputo (Universite Catholique de Louvain (UCL) (BE))24/10/2022, 11:00
The outstanding performances obtained by the CMS experiment during Run1 and Run2 represent a great achievement of seamless hardware and software integration. Among the different software parts, the CMS offline reconstruction software is essential for translating the data acquired by the detectors into concrete objects that can be easily handled by the analyzers. The CMS offline reconstruction...
Go to contribution page -
Antonio Perez-Calero Yzquierdo (Centro de Investigaciones Energéticas Medioambientales y Tecnológicas)24/10/2022, 11:00
The landscape of computing power available for the CMS experiment is rapidly evolving, from a scenario dominated by x86 processors deployed at WLCG sites, towards a more diverse mixture of Grid, HPC, and Cloud facilities incorporating a higher fraction of non-CPU components, such as GPUs. Using these facilities’ heterogeneous resources efficiently to process the vast amounts of data to be...
Go to contribution page -
Fast track seed selection for track following in the Inner Detector Trigger track reconstruction
Andrius Vaitkus (University of London (GB))24/10/2022, 11:00
During ATLAS Run 2, in the online track reconstruction algorithm of the Inner Detector (ID), a large proportion of the CPU time was dedicated to the fast track finding. With the proposed HL-LHC upgrade, where the event pile-up is predicted to reach <μ>=200, track finding will see a further large increase in CPU usage. Moreover, only a small subset of Pixel-only seeds is accepted after the...
Go to contribution page -
William Axel Leight (University of Massachusetts Amherst)24/10/2022, 11:00
The production of simulated datasets for use by physics analyses consumes a large fraction of ATLAS computing resources, a problem that will only get worse as increases in the instantaneous luminosity provided by the LHC lead to more collisions per bunch crossing (pile-up). One of the more resource-intensive steps in the Monte Carlo production is reconstructing the tracks in the ATLAS Inner...
Go to contribution page -
Xiaoyu Liu (IHEP)24/10/2022, 11:00
As the scale and complexity of High Energy Physics (HEP) experiments increase, researchers are facing the challenge of large-scale data processing. In terms of storage, HDFS, a distributed file system that supports the "data-centric" processing model, has been widely used in academia and industry. This file system can support Spark and other distributed data-localized computations,...
Go to contribution page -
Boyang Yu24/10/2022, 11:00
When measuring rare processes at Belle II, a huge luminosity is required, which means a large number of simulations is necessary to determine signal efficiencies and background contributions. However, this process demands high computation costs, while most of the simulated data, in particular in the case of background, are discarded by the event selection. Thus, filters using graph neural networks...
Go to contribution page -
Meinrad Moritz Schefer (Universitaet Bern (CH))24/10/2022, 11:00
The ATLAS detector at CERN measures proton-proton collisions at the Large Hadron Collider (LHC), which allows us to test the limits of the Standard Model (SM) of particle physics. Forward-moving electrons produced in these collisions are promising candidates for finding physics beyond the SM. However, the ATLAS detector is not designed to measure forward leptons with pseudorapidity $\eta$ of...
Go to contribution page -
Ceyhun Uzunoglu (CERN)24/10/2022, 11:00
As CMS starts the Run 3 data taking, the experiment’s data management software tools along with the monitoring infrastructure have undergone significant upgrades to cope up with the conditions expected in the coming years. The challenges of an efficient, real-time monitoring for the performance of the computing infrastructure or for data distribution are being met using state-of-the-art...
Go to contribution page -
Lia Lavezzi (Universita e INFN Torino (IT))24/10/2022, 11:00
PARSIFAL (PARametrized SImulation) is a software tool originally implemented to reproduce the complete response of a triple-GEM detector to the passage of a charged particle, taking into account the involved physical processes through their simple parametrization, and thus in a very fast way.
Robust and reliable software, such as GARFIELD++, is widely used to simulate the transport of electrons...
Go to contribution page -
Farouk Mokhtar (Univ. of California San Diego (US))24/10/2022, 11:00
The particle-flow (PF) algorithm is of central importance to event reconstruction at the CMS detector, and has been a focus of developments in light of planned Phase-2 running conditions with an increased pileup and detector granularity. Current rule-based implementations rely on extrapolating tracks to the calorimeters, correlating them with calorimeter clusters, subtracting charged energy...
Go to contribution page -
Muhammad Imran (National Centre for Physics (PK))24/10/2022, 11:00
Secrets Management is a process where we manage secrets, like certificates, database credentials, tokens, and API keys in a secure and centralized way. In the present CMSWEB (the portfolio of CMS internal IT services) infrastructure, only the operators maintain all services and cluster secrets in a secure place. However, if all relevant persons with secrets are away, then we are left with no...
Go to contribution page -
Antonio Perez-Calero Yzquierdo (Centro de Investigaciones Energéticas Medioambientales y Tecnológicas)24/10/2022, 11:00
The CMS Submission Infrastructure is the main computing resource provisioning system for CMS workflows, including data processing, simulation and analysis. It currently aggregates nearly 400k CPU cores distributed worldwide from Grid, HPC and cloud providers. CMS Tier-0 tasks, such as data repacking and prompt reconstruction, critical for data-taking operations, are executed on a collection of...
Go to contribution page -
Elliott Kauffman (Duke University (US))24/10/2022, 11:00
Over the past several years, a deep learning model based on convolutional neural networks has been developed to find proton-proton collision points (also known as primary vertices, or PVs) in Run 3 LHCb data. By converting the three-dimensional space of particle hits and tracks into a one-dimensional kernel density estimator (KDE) along the direction of the beamline and using the KDE as an...
Go to contribution page -
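The one-dimensional KDE construction described in this abstract can be illustrated with a toy sketch (the vertex positions, track resolution, grid, bandwidth and threshold below are all invented for illustration, not taken from the LHCb model):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy event: tracks originating from two primary vertices along the beamline (z, in mm)
true_pvs = [-12.0, 7.5]
track_z = np.concatenate([rng.normal(z, 0.5, size=40) for z in true_pvs])

# build a one-dimensional Gaussian kernel density estimate along z
z_grid = np.linspace(-30, 30, 1200)
bandwidth = 0.5
kde = np.exp(-0.5 * ((z_grid[:, None] - track_z[None, :]) / bandwidth) ** 2).sum(axis=1)

# naive peak finding: local maxima above a threshold stand in for PV candidates
# (the real approach feeds the KDE to a neural network instead)
is_peak = (kde[1:-1] > kde[:-2]) & (kde[1:-1] > kde[2:]) & (kde[1:-1] > 0.3 * kde.max())
candidates = z_grid[1:-1][is_peak]
print("PV candidates (mm):", np.round(candidates, 1))
```

The point of the reduction is visible even in this sketch: the 3-D pattern-recognition problem collapses to peak hunting on a 1-D density.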
Ralf Florian Von Cube (KIT - Karlsruhe Institute of Technology (DE))24/10/2022, 11:00
With the LHC restarting after more than 3 years of shutdown, unprecedented amounts of data are expected to be recorded. Even with the WLCG providing a tremendous amount of compute resources to process this data, local resources will have to be used for additional compute power. This, however, makes the landscape in which computing takes place more heterogeneous.
In this contribution, we...
Go to contribution page -
Stefano Dal Pra (Universita e INFN, Bologna (IT))24/10/2022, 11:00
The INFN-CNAF Tier-1 has been engaged for years in a continuous effort to integrate its computing centre with more types of computing resources. In particular, the challenge of providing opportunistic access to non-standard CPU architectures, such as PowerPC, or to hardware accelerators (GPUs) has been actively pursued. In this work, we describe a solution to transparently integrate access to...
Go to contribution page -
Sascha Daniel Diefenbacher (Hamburg University (DE))24/10/2022, 11:30
Simulation in High Energy Physics (HEP) places a heavy burden on the available computing resources and is expected to become a major bottleneck for the upcoming high luminosity phase of the LHC and for future Higgs factories, motivating a concerted effort to develop computationally efficient solutions. Methods based on generative machine learning methods hold promise to alleviate the...
Go to contribution page -
Danilo Piparo (CERN)24/10/2022, 12:00Plenary
A bright future awaits particle physics. The LHC Run 3 just started, characterised by the most energetic beams ever created by humankind and the most sophisticated detectors. In the next few years we will accomplish the most precise measurements to challenge our present understanding of nature that will, potentially, lead us to prestigious discoveries. However, Run 3 is just the beginning. A...
Go to contribution page -
Lukas Breitwieser (CERN, ETH Zurich)24/10/2022, 12:30Plenary
Agent-based modeling is a versatile methodology to model complex systems and gain insights into fields as diverse as biology, sociology, economics, finance, and more. However, existing simulation platforms do not always take full advantage of modern hardware and therefore limit the size and complexity of the models that can be simulated.
This talk presents the BioDynaMo platform designed to...
Go to contribution page -
Prof. Davide Pagano (Universita di Brescia (IT))24/10/2022, 14:30
Since the last decade, the so-called Fourth Industrial Revolution is ongoing. It is a profound transformation in industry, where new technologies such as smart automation, large-scale machine-to-machine communication, and the internet of things are largely changing traditional manufacturing and industrial practices. The analysis of the huge amount of data, collected in all modern...
Go to contribution page -
Evangelos Kourlitis (Argonne National Laboratory (US))24/10/2022, 14:30Track 1: Computing Technology for Physics ResearchOral
The ATLAS experiment at the LHC relies critically on simulated event samples produced by the full Geant4 detector simulation software (FullSim). FullSim was the major CPU consumer during the last data-taking year in 2018 and it is expected to be still significant in the HL-LHC era [1, 2]. In September 2020 ATLAS formed a Geant4 Optimization Task Force to optimize the computational performance...
Go to contribution page -
Andrea Valassi (CERN)24/10/2022, 14:30Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
The matrix element (ME) calculation in any Monte Carlo physics event generator is an ideal fit for implementing data parallelism with lockstep processing on GPUs and on CPU vector registers. For complex physics processes where the ME calculation is the computational bottleneck of event generation workflows, this can lead to very large overall speedups by efficiently exploiting these hardware...
Go to contribution page -
Luca Anzalone (Universita e INFN, Bologna (IT))24/10/2022, 14:50
Signal-background classification is a central problem in High-Energy Physics (HEP), that plays a major role for the discovery of new fundamental particles. The recent Parametric Neural Network (pNN) is able to leverage multiple signal mass hypotheses as an additional input feature to effectively replace a whole set of individual neural classifiers, each providing (in principle) the best...
Go to contribution page -
Rui Wang (Argonne National Laboratory (US)), Taylor Childers (Argonne National Laboratory (US))24/10/2022, 14:50Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
For more than a decade Monte Carlo (MC) event generators with the current matrix element algorithms have been used for generating hard scattering events on CPU platforms, with excellent flexibility and good efficiency.
While the HL-LHC is approaching and precision requirements are becoming more demanding, many studies have been made to solve the bottleneck in the current MC event generator...
Go to contribution page -
Vito Conforti24/10/2022, 14:50Track 1: Computing Technology for Physics ResearchOral
The ASTRI Mini-Array is a gamma-ray experiment led by Istituto Nazionale di Astrofisica with the partnership of the Instituto de Astrofisica de Canarias, Fundacion Galileo Galilei, Universidade de Sao Paulo (Brazil) and North-West University (South Africa). The ASTRI Mini-Array will consist of nine innovative Imaging Atmospheric Cherenkov Telescopes that are being installed at the Teide...
Go to contribution page -
Humberto Reyes-González (University of Genoa)24/10/2022, 15:10
The publication of full likelihood functions (LFs) of LHC results is vital for a long-lasting and profitable legacy of the LHC. Although major steps have been put forward in this direction, the systematic publication of LFs remains a big challenge in High Energy Physics (HEP) as such distributions are usually quite complex and high-dimensional. Thus, we propose to describe LFs with Normalizing...
Go to contribution page -
Beojan Stanislaus (Lawrence Berkeley National Lab. (US))24/10/2022, 15:10Track 1: Computing Technology for Physics ResearchOral
Experiments at the CERN High-Luminosity Large Hadron Collider (HL-LHC) will produce hundreds of Petabytes of data per year. Efficient processing of this dataset represents a significant human resource and technical challenge. Today, ATLAS data processing applications run in multi-threaded mode, using Intel TBB for thread management, which allows efficient utilization of all available CPU cores...
Go to contribution page -
Max Knobbe24/10/2022, 15:10Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
For more than a decade the current generation of fully automated matrix element generators has provided hard scattering events with excellent flexibility and good efficiency.
However, as recent studies have shown, they are a major bottleneck in the established Monte Carlo event generator toolchains. With the advent of the HL-LHC and ever rising precision requirements, future developments...
Go to contribution page -
Christian Gutschow (UCL (UK))24/10/2022, 15:30Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
High-precision calculations are an indispensable ingredient to the success of the LHC physics programme, yet their poor computing efficiency has been a growing cause for concern, threatening to become a paralysing bottleneck in the coming years. We present solutions to eliminate the apprehension by focussing on two major components of general purpose Monte Carlo event generators: The...
Go to contribution page -
Marco Barbone24/10/2022, 15:30Track 1: Computing Technology for Physics ResearchOral
GPU acceleration has been successfully utilised in particle physics for real time analysis and simulation, in this study, we investigate the potential benefits for medical physics applications by analysing performance, development effort, and availability. We selected a software developer with no high performance computing experience to parallelise and accelerate a stand-alone Monte Carlo...
Go to contribution page -
Abhijith Gandrakota (Fermi National Accelerator Lab. (US))24/10/2022, 15:30
We present a novel computational approach for extracting weak signals, whose exact location and width may be unknown, from complex background distributions with an arbitrary functional form. We focus on datasets that can be naturally presented as binned integer counts, demonstrating our approach on the datasets from the Large Hadron Collider. Our approach is based on Gaussian Process (GP)...
Go to contribution page -
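The Gaussian-Process idea sketched in this abstract, modelling a smooth background non-parametrically so that a narrow excess stands out as a residual, can be illustrated with a toy spectrum (the kernel, length scale, data and signal shape below are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# toy binned spectrum: a smooth falling background plus a narrow bump at x = 6
x = np.linspace(1, 10, 40)
background = 200 * np.exp(-x / 3)
signal = 60 * np.exp(-0.5 * ((x - 6) / 0.3) ** 2)
counts = rng.poisson(background + signal).astype(float)

# GP regression with an RBF kernel; the length scale is chosen much larger
# than the bump width, so the GP can only follow the smooth background
def rbf(a, b, amp=200.0, length=2.0):
    return amp**2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

noise = np.diag(counts + 1.0)  # approximate per-bin Poisson variances
K = rbf(x, x) + noise
centered = counts - counts.mean()
bkg_fit = rbf(x, x) @ np.linalg.solve(K, centered) + counts.mean()

# the localized excess survives as a residual above the smooth fit
residual = counts - bkg_fit
print("largest excess near x =", round(float(x[np.argmax(residual)]), 2))
```

Because the kernel length scale encodes "smoothness", no explicit functional form for the background is ever assumed, which is the key property the abstract highlights.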
Philipp Zehetner (Ludwig Maximilians Universitat (DE))24/10/2022, 15:50
We present an end-to-end reconstruction algorithm to build particle candidates from detector hits in next-generation granular calorimeters similar to that foreseen for the high-luminosity upgrade of the CMS detector. The algorithm exploits a distance-weighted graph neural network, trained with object condensation, a graph segmentation technique. Through a single-shot approach, the...
Go to contribution page -
Gloria Corti (CERN)24/10/2022, 15:50Track 1: Computing Technology for Physics ResearchOral
The LHCb experiment underwent a major upgrade for data taking with higher luminosity in Run 3 of the LHC. New software that exploits modern technologies in the underlying LHCb core software framework, is part of this upgrade. The LHCb simulation framework, Gauss, is adapted accordingly to cope with the increase in the amount of simulated data required for Run 3 analyses. An additional...
Go to contribution page -
Michael Boehler (Albert Ludwigs Universitaet Freiburg (DE))24/10/2022, 16:10
The goal of this study is to understand the observed differences in ATLAS software performance, when comparing results measured under ideal laboratory conditions with those from ATLAS computing resources on the Worldwide LHC Computing Grid (WLCG). The laboratory results are based on the full simulation of a single ttbar event and use dedicated, local hardware. In order to have a common and...
Go to contribution page -
Dr Guang Zhao (Institute of High Energy Physics)24/10/2022, 16:10
Ionization of matter by charged particles is the main mechanism for particle identification in gaseous detectors. Traditionally, the ionization is measured by the total energy loss (dE/dx). The concept of cluster counting, which measures the number of clusters per unit track length (dN/dx), was proposed in the 1970s. The dN/dx measurement can avoid many sources of fluctuations from the dE/dx...
Go to contribution page -
Diego Ciangottini (INFN, Perugia (IT))24/10/2022, 16:10
The challenges expected for the HL-LHC era, both in terms of storage and computing resources, provide LHC experiments with a strong motivation for evaluating ways of re-thinking their computing models at many levels. In fact, a large part of the R&D efforts of the CMS experiment has been focused on optimizing the computing and storage resource utilization for the data analysis, and Run 3 could...
Go to contribution page -
Fabrizio Alfonsi (Universita e INFN, Bologna (IT))24/10/2022, 16:10
The High Energy Physics world will face challenging trigger requests in the next decade. In particular, the luminosity increase to 5–7.5 × 10³⁴ cm⁻² s⁻¹ at the LHC will push major experiments such as ATLAS to exploit online tracking for their inner detectors to reach 10 kHz of events from 1 MHz of Calorimeter and Muon Spectrometer triggers. The project described here is a proposal for a tuned...
Go to contribution page -
Thomas Britton24/10/2022, 16:10
Hydra is an AI system employing off-the-shelf computer vision technologies aimed at autonomously monitoring data quality. Data quality monitoring is an essential step in modern experimentation and Nuclear Physics is no exception. Certain failures can be identified through alarms (e.g. electrical heartbeats) while others are more subtle and often require expert knowledge to identify and...
Go to contribution page -
Biying Hu (Sun Yat-sen University)24/10/2022, 16:10
High energy physics experiments are pushing forward precision measurements and searching for new physics beyond the Standard Model. Simulating and generating massive amounts of data to meet the requirements of physics is an urgent need, and harnessing the existing power of supercomputers is one of the most active areas in high energy physics computing. Taking the BESIII experiment as an illustration, we...
Go to contribution page -
Rui Zhang (University of Wisconsin Madison (US))24/10/2022, 16:10
AtlFast3 is the next generation of high precision fast simulation in ATLAS that is being deployed by the collaboration and was successfully used for the simulation of 7 billion events in Run 2 data taking conditions. AtlFast3 combines a parametrization-based approach known as FastCaloSimV2 and a machine-learning based tool that exploits Generative Adversarial Networks (FastCaloGAN) for the...
Go to contribution page -
Antonio Vagnerini (Università di Torino)24/10/2022, 16:10
The inner tracking system of the CMS experiment, consisting of the silicon pixel and strip detectors, is designed to provide a precise measurement of the momentum of charged particles and to perform the primary and secondary vertex reconstruction. The movements of the individual substructures of the tracker detectors are driven by the change in the operating conditions during data taking....
Go to contribution page -
Brunella D'Anzi (Universita e INFN, Bari (IT)), CMS Collaboration24/10/2022, 16:10
Accurate reconstruction of charged particle trajectories and measurement of their parameters (tracking) is one of the major challenges of the CMS experiment. A precise and efficient tracking is one of the critical components of the CMS physics program as it impacts the ability to reconstruct the physics objects needed to understand proton-proton collisions at the LHC. In this work, we present...
Go to contribution page -
CMS collaboration, Marc Huwiler (University of Zurich (CH))24/10/2022, 16:10
Building on top of the multithreading functionality that was introduced in Run-2, the CMS software framework (CMSSW) has been extended in Run-3 to offload part of the physics reconstruction to NVIDIA GPUs. The first application of this new feature is the High Level Trigger (HLT): the new computing farm installed at the beginning of Run-3 is composed of 200 nodes, and for the first time each...
Go to contribution page -
Lukas Alexander Heinrich (CERN)24/10/2022, 16:10
High Energy Physics (HEP) has been using column-wise data stored in synchronized containers, such as most prominently ROOT’s TTree, for decades. These containers have proven to be very powerful as they combine row-wise association capabilities needed by most HEP event processing frameworks (e.g. Athena) with column-wise storage, which typically results in better compression and more efficient...
Go to contribution page -
Stefano Lacaprara (INFN sezione di Padova)24/10/2022, 16:10
The Belle II experiment has been collecting data since 2019 at the second generation e+/e- B-factory SuperKEKB in Tsukuba, Japan. The goal of the experiment is to explore new physics via high precision measurement in flavor physics. This is achieved by collecting a large amount of data that needs to be calibrated promptly for fast reconstruction and recalibrated thoroughly for the final...
Go to contribution page -
Xiaoyu Liu (Central China Normal University CCNU (CN)), Xiaoyu Liu (Institute of High Energy Physics, CAS)24/10/2022, 16:10
Computing in high energy physics is a typical data-intensive application, especially data analysis, which requires access to large amounts of data. The traditional computing system adopts the "computing-storage" separation model, which leads to large data movements during the computing process and also increases transmission delay and network load. Therefore, it can...
Go to contribution page -
Claudio Caputo (Universite Catholique de Louvain (UCL) (BE))24/10/2022, 16:10
The outstanding performances obtained by the CMS experiment during Run1 and Run2 represent a great achievement of seamless hardware and software integration. Among the different software parts, the CMS offline reconstruction software is essential for translating the data acquired by the detectors into concrete objects that can be easily handled by the analyzers. The CMS offline reconstruction...
Go to contribution page -
Antonio Perez-Calero Yzquierdo (Centro de Investigaciones Energéticas Medioambientales y Tecnológicas)24/10/2022, 16:10
The landscape of computing power available for the CMS experiment is rapidly evolving, from a scenario dominated by x86 processors deployed at WLCG sites, towards a more diverse mixture of Grid, HPC, and Cloud facilities incorporating a higher fraction of non-CPU components, such as GPUs. Using these facilities’ heterogeneous resources efficiently to process the vast amounts of data to be...
Go to contribution page -
Andrius Vaitkus (University of London (GB))24/10/2022, 16:10
During ATLAS Run 2, in the online track reconstruction algorithm of the Inner Detector (ID), a large proportion of the CPU time was dedicated to the fast track finding. With the proposed HL-LHC upgrade, where the event pile-up is predicted to reach <μ>=200, track finding will see a further large increase in CPU usage. Moreover, only a small subset of Pixel-only seeds is accepted after the...
Go to contribution page -
William Axel Leight (University of Massachusetts Amherst)24/10/2022, 16:10
The production of simulated datasets for use by physics analyses consumes a large fraction of ATLAS computing resources, a problem that will only get worse as increases in the instantaneous luminosity provided by the LHC lead to more collisions per bunch crossing (pile-up). One of the more resource-intensive steps in the Monte Carlo production is reconstructing the tracks in the ATLAS Inner...
Go to contribution page -
Xiaoyu Liu (IHEP)24/10/2022, 16:10
As the scale and complexity of High Energy Physics (HEP) experiments increase, researchers are facing the challenge of large-scale data processing. In terms of storage, HDFS, a distributed file system that supports the "data-centric" processing model, has been widely used in academia and industry. This file system can support Spark and other distributed, data-locality-aware computations,...
Go to contribution page -
Boyang Yu24/10/2022, 16:10
When measuring rare processes at Belle II, a huge luminosity is required, which means a large number of simulations are necessary to determine signal efficiencies and background contributions. However, this process demands high computation costs, while most of the simulated data, in particular in the case of background, are discarded by the event selection. Thus filters using graph neural networks...
Go to contribution page -
Steffen Stärz (McGill University, (CA))24/10/2022, 16:10
The Phase-II upgrade of the LHC will increase its instantaneous luminosity by a factor of 7, leading to the High Luminosity LHC (HL-LHC). At the HL-LHC, the number of proton-proton collisions in one bunch crossing (called pileup) increases significantly, putting more stringent requirements on the LHC detectors' electronics and real-time data processing capabilities.
The ATLAS Liquid Argon...
Go to contribution page -
Meinrad Moritz Schefer (Universitaet Bern (CH))24/10/2022, 16:10
The ATLAS detector at CERN measures proton-proton collisions at the Large Hadron Collider (LHC), which allows us to test the limits of the Standard Model (SM) of particle physics. Forward-moving electrons produced in these collisions are promising candidates for finding physics beyond the SM. However, the ATLAS detector is not constructed to measure forward leptons with pseudorapidity $\eta$ of...
Go to contribution page -
Ceyhun Uzunoglu (CERN)24/10/2022, 16:10
As CMS starts the Run 3 data taking, the experiment’s data management software tools, along with the monitoring infrastructure, have undergone significant upgrades to cope with the conditions expected in the coming years. The challenges of efficient, real-time monitoring of the performance of the computing infrastructure or of data distribution are being met using state-of-the-art...
Go to contribution page -
Lia Lavezzi (Universita e INFN Torino (IT))24/10/2022, 16:10
PARSIFAL (PARametrized SImulation) is a software tool originally implemented to reproduce the complete response of a triple-GEM detector to the passage of a charged particle, taking into account the involved physical processes by their simple parametrization and thus in a very fast way.
Robust and reliable software, such as GARFIELD++, is widely used to simulate the transport of electrons...
Go to contribution page -
Farouk Mokhtar (Univ. of California San Diego (US))24/10/2022, 16:10
The particle-flow (PF) algorithm is of central importance to event reconstruction at the CMS detector, and has been a focus of developments in light of planned Phase-2 running conditions with an increased pileup and detector granularity. Current rule-based implementations rely on extrapolating tracks to the calorimeters, correlating them with calorimeter clusters, subtracting charged energy...
Go to contribution page -
Muhammad Imran (National Centre for Physics (PK))24/10/2022, 16:10
Secrets management is the process of handling secrets, such as certificates, database credentials, tokens, and API keys, in a secure and centralized way. In the present CMSWEB (the portfolio of CMS internal IT services) infrastructure, only the operators maintain all service and cluster secrets in a secure place. However, if all relevant persons with secrets are away, then we are left with no...
Go to contribution page -
Antonio Perez-Calero Yzquierdo (Centro de Investigaciones Energéticas Medioambientales y Tecnológicas)24/10/2022, 16:10
The CMS Submission Infrastructure is the main computing resource provisioning system for CMS workflows, including data processing, simulation and analysis. It currently aggregates nearly 400k CPU cores distributed worldwide from Grid, HPC and cloud providers. CMS Tier-0 tasks, such as data repacking and prompt reconstruction, critical for data-taking operations, are executed on a collection of...
Go to contribution page -
Elliott Kauffman (Duke University (US))24/10/2022, 16:10
Over the past several years, a deep learning model based on convolutional neural networks has been developed to find proton-proton collision points (also known as primary vertices, or PVs) in Run 3 LHCb data. By converting the three-dimensional space of particle hits and tracks into a one-dimensional kernel density estimator (KDE) along the direction of the beamline and using the KDE as an...
Go to contribution page -
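The PV-finder entry above reduces three-dimensional hit and track information to a one-dimensional kernel density estimate along the beamline. As a toy illustration only (the function name, bandwidth, and numbers are invented here, not taken from the LHCb code), a Gaussian KDE over track z-positions can be sketched as:

```python
import math

def beamline_kde(track_zs, grid, bandwidth=0.5):
    """Gaussian kernel density estimate of track z-positions, evaluated
    on a 1-D grid of points along the beamline (units arbitrary)."""
    norm = 1.0 / (bandwidth * math.sqrt(2.0 * math.pi))
    return [
        norm * sum(math.exp(-0.5 * ((z - t) / bandwidth) ** 2) for t in track_zs)
        for z in grid
    ]

# Toy data: two clusters of tracks -> two density peaks (vertex candidates).
tracks = [-10.2, -9.9, -10.1, 4.8, 5.0, 5.1, 5.2]
grid = [0.1 * i for i in range(-150, 151)]   # z from -15 to +15 in 0.1 steps
kde = beamline_kde(tracks, grid)
peak_z = grid[kde.index(max(kde))]           # highest-density z position
```

In the contribution the KDE is not peak-searched directly like this; it serves as the input to the deep learning model.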
Ralf Florian Von Cube (KIT - Karlsruhe Institute of Technology (DE))24/10/2022, 16:10
With the LHC restarting after more than 3 years of shutdown, unprecedented amounts of data are expected to be recorded. Even with the WLCG providing a tremendous amount of compute resources to process this data, local resources will have to be used for additional compute power. This, however, makes the landscape in which computing takes place more heterogeneous.
In this contribution, we...
Go to contribution page -
Stefano Dal Pra (Universita e INFN, Bologna (IT))24/10/2022, 16:10
The INFN-CNAF Tier-1 has been engaged for years in a continuous effort to integrate its computing centre with more types of computing resources. In particular, the challenge of providing opportunistic access to non-standard CPU architectures, such as PowerPC, or hardware accelerators (GPUs) has been actively pursued. In this work, we describe a solution to transparently integrate access to...
Go to contribution page -
Tony Di Pilato (CASUS - Center for Advanced Systems Understanding (DE))24/10/2022, 16:40
CLUE (CLUsters of Energy) is a fast, fully-parallelizable clustering algorithm developed to optimize this crucial step in the event reconstruction chain of future high-granularity calorimeters. The main drawback of having an unprecedentedly high segmentation in this kind of detector is a huge computational load that, in the case of CMS, must be reduced to fit the harsh requirements of the...
Go to contribution page -
Gabor Biro (Wigner Research Centre for Physics (Wigner RCP) (HU))24/10/2022, 16:40Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
Hadronization is a non-perturbative process whose theoretical description cannot be deduced from first principles. Modeling hadron formation requires several assumptions and various phenomenological approaches. Utilizing state-of-the-art Computer Vision and Deep Learning algorithms, it is eventually possible to train neural networks to learn non-linear and non-perturbative features of the...
Go to contribution page -
Domenico Giordano (CERN)24/10/2022, 16:40Track 1: Computing Technology for Physics ResearchOral
HEPscore is a CPU benchmark, based on HEP applications, that the HEPiX Working Group is proposing as a replacement for the currently used HEPSpec06 benchmark, adopted in WLCG for procurement, computing resource pledges and performance studies.
In 2019, we presented at ACAT the motivations for building a benchmark for the HEP community based on HEP applications. The process from the conception...
Go to contribution page -
Marta Bertran Ferrer (CERN)24/10/2022, 17:00Track 1: Computing Technology for Physics ResearchOral
During the LHC LS2, the ALICE experiment has undergone a major upgrade of the data acquisition model, evolving from a trigger-based model to a continuous readout. The upgrade allows for an increase in the number of recorded events by a factor of 100 and in the volume of generated data by a factor of 10. The entire experiment software stack has been completely redesigned and rewritten to adapt...
Go to contribution page -
Theo Heimel (Heidelberg University)24/10/2022, 17:00Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
For many years, the matrix element method has been considered the perfect approach to LHC inference. We show how conditional invertible neural networks can be used to unfold detector effects and initial-state QCD radiation, to provide the hard-scattering information for this method. We illustrate our approach for the CP-violating phase of the top Yukawa coupling in associated Higgs and...
Go to contribution page -
Ouail Kitouni (Massachusetts Inst. of Technology (US))24/10/2022, 17:00
We propose a novel neural architecture that enforces an upper bound on the Lipschitz constant of the neural network (by constraining the norm of its gradient with respect to the inputs). This architecture was useful in developing new algorithms for the LHCb trigger which have robustness guarantees as well as powerful inductive biases leveraging the neural network’s ability to be monotonic in...
Go to contribution page -
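One simple way to enforce an upper bound on a network's Lipschitz constant, as in the architecture above, is to rescale each weight matrix so that its induced operator norm stays below a chosen bound. This is an illustrative sketch only (`clip_operator_norm` is a hypothetical helper; the actual LHCb work constrains the gradient norm and combines this with monotonicity-enforcing architectures):

```python
def clip_operator_norm(weights, bound=1.0):
    """Rescale a weight matrix (list of rows) so that its induced
    infinity-norm, i.e. the maximum absolute row sum, is at most `bound`.
    That norm upper-bounds the Lipschitz constant of the linear layer
    with respect to the max norm on its inputs."""
    norm = max(sum(abs(w) for w in row) for row in weights)
    if norm <= bound:
        return [list(row) for row in weights]
    scale = bound / norm
    return [[w * scale for w in row] for row in weights]

W = [[2.0, -1.0], [0.5, 0.5]]        # infinity-norm = |2| + |-1| = 3
W_clipped = clip_operator_norm(W)    # rescaled by 1/3, norm becomes 1
```

Applying such a clip after every gradient step keeps each layer, and hence the whole composition, 1-Lipschitz by construction.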
Elton Shumka (University of Sofia - St. Kliment Ohridski (BG))24/10/2022, 17:20Track 1: Computing Technology for Physics ResearchOral
The CMS experiment has 1056 Resistive Plate Chambers (RPCs) in its muon system. Monitoring their currents is the first essential step towards maintaining the stability of the CMS RPC detector performance. An automated monitoring tool to carry out this task has been developed. It utilises the ability of Machine Learning (ML) methods in the modelling of the behavior of the current of these...
Go to contribution page -
Sitian Qian (Peking University (CN))24/10/2022, 17:20
Jet tagging is a critical yet challenging classification task in particle physics. While deep learning has transformed jet tagging and significantly improved performance, the lack of a large-scale public dataset impedes further enhancement. In this work, we present JetClass, a new comprehensive dataset for jet tagging. The JetClass dataset consists of 100 M jets, about two orders of magnitude...
Go to contribution page -
Domenico Pomarico (INFN Sezione di Bari)24/10/2022, 17:20Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
Learning tasks are implemented via mappings of the sampled data set, including both the classical and the quantum framework. The quantum-inspired approach mimics the support vector machine mapping in a high-dimensional feature space, yielded by the qubit encoding. In our application such scheme is framed in the formulation of a least-squares problem for the minimization of the mean squared...
Go to contribution page -
Mr Ilyes Batatia (Engineering Laboratory, University of Cambridge), Mr Jose M Munoz (EIA University)24/10/2022, 17:40
While modern architectures designed via geometric deep learning achieve high accuracies through Lorentz group invariance, this comes at a high computational cost. Moreover, the framework is restricted to a particular classification scheme and lacks interpretability.
To tackle this issue, we present BIP, an efficient and computationally cheap framework to build rotational,...
Go to contribution page -
Michael Goodrich24/10/2022, 17:40Track 1: Computing Technology for Physics ResearchOral
To increase the science rate for high data rates/volumes, JLab is partnering with ESnet to develop an AI/ML-directed dynamic Compute Work Load Balancer (CWLB) for UDP-streamed data. The CWLB is an FPGA featuring dynamically configurable, low fixed latency, destination switching and high throughput. The CWLB effectively provides seamless integration of edge / core computing to support...
Go to contribution page -
Andrea Luiselli (Intel Corporation Italia SPA (IT))24/10/2022, 18:30Plenary
-
Axel Naumann (CERN), Lucia Silvestris (Universita e INFN, Bari (IT))25/10/2022, 09:15
-
Gregor Kasieczka (Hamburg University (DE))25/10/2022, 09:30Plenary
As the search for new fundamental phenomena at modern particle colliders is a complex and multifaceted task dealing with high-dimensional data, it is not surprising that machine learning based techniques are quickly becoming a widely used tool for many aspects of searches. On the one hand, classical strategies are being supercharged by ever more sophisticated tagging algorithms; on the other...
Go to contribution page -
Jungsang Kim25/10/2022, 10:00Plenary
Trapped ions are the leading candidate for realizing practically useful quantum computers, as the system features the highest-performance quantum computational operations. The introduction of advanced integration technologies has provided an opportunity to convert a complex atomic physics experiment into a stand-alone programmable quantum computer. In this talk, I will discuss recent technological...
Go to contribution page -
Raja Appuswamy (Eurecom)25/10/2022, 10:30Plenary
Today, we live in a data-driven society. For decades, we wanted fast storage devices that can quickly deliver data, and storage technologies evolved to meet this requirement. As data-driven decision making becomes an integral part of enterprises, we are increasingly faced with a new need, one for cheap, long-term storage devices that can safely store the data we generate for tens or hundreds...
Go to contribution page -
Jacopo Cerasoli (CNRS - IPHC)25/10/2022, 11:00
Over the past few years, intriguing deviations from the Standard Model predictions have been reported in measurements of angular observables and branching fractions of $B$ meson decays, suggesting the existence of a new interaction that acts differently on the three lepton families. The Belle II experiment has unique features that allow the study of $B$ meson decays with invisible particles in the...
Go to contribution page -
Zhijun Li (Sun Yat-Sen University (CN))25/10/2022, 11:00
Detector modeling and visualization are essential in the life cycle of a High Energy Physics (HEP) experiment. Unity is a professional multi-media creation software that has the advantages of rich visualization effects and easy deployment on various platforms. In this work, we applied the method of detector transformation to convert the BESIII detector description from the offline software...
Go to contribution page -
Ianna Osborne (Princeton University)25/10/2022, 11:00
Awkward Arrays and RDataFrame provide two very different ways of performing calculations at scale. By adding the ability to zero-copy convert between them, users get the best of both. This gives users greater flexibility in mixing different packages and languages in their analysis.
In Awkward Array version 2, the ak.to_rdataframe function presents a view of an Awkward Array as an RDataFrame...
Go to contribution page -
Andrii Verbytskyi (Max Planck Society (DE))25/10/2022, 11:00
We present a revived version of the CERNLIB, the basis for software
ecosystems of most of the pre-LHC HEP experiments. The efforts to
consolidate the CERNLIB are part of the activities of the Data Preservation
for High Energy Physics collaboration to preserve data and software of
the past HEP experiments. The presented version is based on the CERNLIB version 2006 with numerous...
Go to contribution page -
Simon Akar (University of Cincinnati (US))25/10/2022, 11:00
Identifying and locating proton-proton collisions in LHC experiments (known as primary vertices or PVs) has been the topic of numerous conference talks in the past few years (2019-2021). Efforts exploring a variety of potential architectures have yielded promising candidates for PV-finder. The UNet model, for example, has achieved an efficiency of 98% with a low false-positive rate. These...
Go to contribution page -
Dennis Klein (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE)), Dr Christian Tacke (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))25/10/2022, 11:00
The FairRoot software stack is a toolset for the simulation, reconstruction, and analysis of high energy particle physics experiments (currently used, e.g., at FAIR/GSI and CERN). In this work we give insight into recent improvements of Continuous Integration (CI) for this software stack. CI is a modern software engineering method for efficiently assuring software quality. We discuss relevant...
Go to contribution page -
Rahul Chauhan (CERN)25/10/2022, 11:00
After the successful adoption of Rucio as the new data management system following its inception in 2018, a subsequent step is to advertise this to the users and other stakeholders. In this perspective, one of the objectives is to keep improving the tooling around Rucio. As Rucio introduces a new data management paradigm w.r.t. the previous model, we begin by tackling the challenges arising...
Go to contribution page -
Kaixuan Huang25/10/2022, 11:00
In High Energy Physics (HEP) experiments, the Data Quality Monitoring (DQM) system is crucial to ensure the correct and smooth operation of the experimental apparatus during data taking. DQM at the Jiangmen Underground Neutrino Observatory (JUNO) will reconstruct raw data directly from the JUNO Data Acquisition (DAQ) system and use event visualization tools to show the detector performance for high...
Go to contribution page -
Alexey Rybalchenko (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))25/10/2022, 11:00
The common ALICE-FAIR software framework ALFA offers a platform for simulation, reconstruction and analysis of particle physics experiments. FairMQ is a module of ALFA that provides building blocks for distributed data processing pipelines, composed out of components communicating via message passing. FairMQ integrates and efficiently utilizes standard industry data transport technologies,...
Go to contribution page -
Irene Andreou, Noam Mouelle (Imperial College London)25/10/2022, 11:00
We evaluate two Generative Adversarial Network (GAN) models developed by the COherent Muon to Electron Transition (COMET) collaboration to generate sequences of particle hits in a Cylindrical Drift Chamber (CDC). The models are first evaluated by measuring the similarity between distributions of particle-level, physical features. We then measure the Effectively Unbiased Fréchet Inception...
Go to contribution page -
Namitha Chithirasreemadam (University of Pisa)25/10/2022, 11:00
The [Mu2e][1] experiment will search for the CLFV neutrinoless coherent conversion of muon to electron, in the field of an Aluminium nucleus. A custom offline event display has been developed for Mu2e using [TEve][2], a ROOT based 3-D event visualisation framework. Event displays are crucial for monitoring and debugging during live data taking as well as for public outreach. A custom GUI...
Go to contribution page -
Aurora Perego (Universita & INFN, Milano-Bicocca (IT))25/10/2022, 11:00
The CMS software framework (CMSSW) has recently been extended to perform part of the physics reconstruction with NVIDIA GPUs. To avoid writing a different implementation of the code for each back-end, the decision was made to use a performance portability library, and Alpaka has been chosen as the solution for Run 3.
In the meantime, different studies have been performed to test the track...
Go to contribution page -
Stefan Rua (Aalto University)25/10/2022, 11:00
The CMS collaboration has a growing interest in the use of heterogeneous computing and accelerators to reduce the costs and improve the efficiency of the online and offline data processing: online, the High Level Trigger is fully equipped with NVIDIA GPUs; offline, a growing fraction of the computing power is coming from GPU-equipped HPC centres. One of the topics where accelerators could be...
Go to contribution page -
Dalila Salamani (CERN)25/10/2022, 11:00
The description of the development of particle cascades in a calorimeter of a high energy physics experiment relies on the precise simulation of particle interactions with matter. This simulation is inherently slow and constitutes a challenge for HEP experiments. Furthermore, with the upcoming high-luminosity upgrade of the Large Hadron Collider and a much increased data production rate, the amount of required...
Go to contribution page -
Eric Cano (CERN)25/10/2022, 11:00
GPU applications require a structure of array (SoA) layout for the data to achieve good memory access performance. During the development of the CMS Pixel reconstruction for GPUs, the Patatrack developers crafted various techniques to optimise the data placement in memory and its access inside GPU kernels. The work presented here gathers, automates and extends those patterns, and offers a...
Go to contribution page -
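The AoS-versus-SoA distinction the entry above refers to can be illustrated in a few lines. This sketch is generic Python (using the standard-library `array` module), not the Patatrack C++ machinery; the field names are invented:

```python
from array import array

# Array-of-structs (AoS): one record per hit. Convenient to work with,
# but values of the same field are scattered across memory.
hits_aos = [
    {"x": 1.0, "y": 2.0, "charge": 3},
    {"x": 4.0, "y": 5.0, "charge": 6},
]

# Struct-of-arrays (SoA): one contiguous buffer per field. This is the
# layout GPU kernels favour, since neighbouring threads then read
# neighbouring memory locations (coalesced access).
hits_soa = {
    "x": array("f", (h["x"] for h in hits_aos)),
    "y": array("f", (h["y"] for h in hits_aos)),
    "charge": array("i", (h["charge"] for h in hits_aos)),
}
```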
Annika Stein (Rheinisch Westfaelische Tech. Hoch. (DE)), Spandan Mondal (RWTH Aachen (DE))25/10/2022, 11:00
In the field of high-energy physics, deep learning algorithms continue to gain in relevance and provide performance improvements over traditional methods, for example when identifying rare signals or finding complex patterns. From an analyst’s perspective, obtaining highest possible performance is desirable, but recently, some focus has been laid on studying robustness of models to investigate...
Go to contribution page -
Benno Kach (Deutsches Elektronen-Synchrotron (DE))25/10/2022, 11:00
In this study, jets with up to 30 particles are modelled using Normalizing Flows with Rational Quadratic Spline coupling layers. The invariant mass of the jet is a powerful global feature to control whether the flow-generated data contains the same high-level correlations as the training data. The use of normalizing flows without conditioning shows that they lack the expressive power to do...
Go to contribution page -
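The jet invariant mass used above as a global control feature is computed from the summed four-momenta of the constituents. A minimal sketch (illustrative, with invented toy values, not the authors' code):

```python
import math

def jet_invariant_mass(constituents):
    """Invariant mass of a jet from its constituents' four-momenta,
    each given as a tuple (E, px, py, pz):  m = sqrt(E^2 - |p|^2)."""
    E = sum(p[0] for p in constituents)
    px = sum(p[1] for p in constituents)
    py = sum(p[2] for p in constituents)
    pz = sum(p[3] for p in constituents)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Two massless particles with a 90-degree opening angle:
# total E = 20, |p_tot|^2 = 10^2 + 10^2 = 200, so m = sqrt(400 - 200).
jet = [(10.0, 10.0, 0.0, 0.0), (10.0, 0.0, 10.0, 0.0)]
mass = jet_invariant_mass(jet)
```

Because the mass correlates many constituents at once, it probes whether generated jets reproduce the high-level correlations of the training data.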
Rosamaria Venditti (Universita e INFN, Bari (IT))25/10/2022, 11:00
The CMS experiment employs an extensive data quality monitoring (DQM) and data certification (DC) procedure. Currently, this approach consists mainly of the visual inspection of reference histograms which summarize the status and performance of the detector. Recent developments in several of the CMS subsystems have shown the potential of computer-assisted DQM and DC using autoencoders,...
Go to contribution page -
Wuming Luo (Institute of High Energy Physics, Chinese Academy of Science)25/10/2022, 11:00
Jiangmen Underground Neutrino Observatory (JUNO), located in the southern part of China, will be the world’s largest liquid scintillator (LS) detector. Equipped with 20 kton of LS, 17623 20-inch PMTs and 25600 3-inch PMTs in the central detector, JUNO will provide a unique apparatus to probe the mysteries of neutrinos, particularly the neutrino mass ordering puzzle. One of the challenges for JUNO...
Go to contribution page -
Felice Pantaleo (CERN)25/10/2022, 11:00
The Particle Flow (PF) algorithm, used for the majority of CMS data analyses for event reconstruction, provides a comprehensive list of final-state particle candidates and enables efficient identification and mitigation methods for simultaneous proton-proton collisions (pileup). The higher instantaneous luminosity expected during the upcoming LHC Run 3 will impose challenges for CMS event...
Go to contribution page -
Oscar Roberto Chaparro Amaro (Instituto Politécnico Nacional. Centro de Investigación en Computación)25/10/2022, 11:00
Density Functional Theory (DFT) is an extended ab initio method used for calculating the electronic properties of molecules. Compared with Hartree-Fock methods, DFT offers appropriate approximations regarding computation time. Recently, the DFT method has been used for discovering and analyzing protein interactions by means of calculating the free energies of these macro-molecules from...
Go to contribution page -
Daniele Spiga (Universita e INFN, Perugia (IT))25/10/2022, 11:00
Computing resources in the Worldwide LHC Computing Grid (WLCG) have been based entirely on the x86 architecture for more than two decades. In the near future, however, heterogeneous non-x86 resources, such as ARM, POWER and Risc-V, will become a substantial fraction of the resources that will be provided to the LHC experiments, due to their presence in existing and planned world-class HPC...
Go to contribution page -
Danilo Piparo (CERN)25/10/2022, 11:00
The Phase-2 upgrade of CMS, coupled with the projected performance of the HL-LHC, shows great promise in terms of discovery potential. However, the increased granularity of the CMS detector and the higher complexity of the collision events generated by the accelerator pose challenges in the areas of data acquisition, processing, simulation, and analysis. These challenges cannot be solved...
Go to contribution page -
Elias Leutgeb (Technische Universitaet Wien (AT))25/10/2022, 11:00
The CMS Level-1 Trigger, for its operation during Phase-2 of LHC, will undergo a significant upgrade and redesign. The new trigger system, based on multiple families of custom boards, equipped with Xilinx Ultrascale Plus FPGAs and interconnected with high speed optical links at 25 Gb/s, will exploit more detailed information from the detector subsystems (calorimeter, muon systems, tracker). In...
Go to contribution page -
John Lawrence (University of Notre Dame (US))25/10/2022, 11:00
With the start of run 3 in 2022, the LHC has entered a new period, now delivering higher energy and luminosity proton beams to the Compact Muon Solenoid (CMS) experiment. These increases make it critical to maintain and upgrade the tools and methods used to monitor the rate at which data is collected (the trigger rate). Software tools have been developed to allow for automated rate monitoring,...
Go to contribution page -
Bernhard Manfred Gruber (Technische Universitaet Dresden (DE))25/10/2022, 11:00
Choosing the best memory layout for each hardware architecture is increasingly important as more and more programs become memory bound. For portable codes that run across heterogeneous hardware architectures, the choice of the memory layout for data structures is ideally decoupled from the rest of a program.
The low-level abstraction of memory access (LLAMA) is a C++ library that provides a...
Go to contribution page -
Giulia Lavizzari25/10/2022, 11:00
We present a machine-learning based method to detect deviations from a reference model, in an almost independent way with respect to the theory assumed to describe the new physics responsible for the discrepancies.
The analysis is based on an Effective Field Theory (EFT) approach: under this hypothesis the Lagrangian of the system can be written as an infinite expansion of terms, where the...
Go to contribution page -
Moritz David Bauer25/10/2022, 11:00
The Belle II experiment at the second generation e+/e- B-factory SuperKEKB has been collecting data since 2019 and aims to accumulate 50 times more data than the first generation experiment, Belle.
To efficiently process these steadily growing datasets of recorded and simulated data that end up on the order of 100 PB and to support Grid-based analysis workflows using the DIRAC Workload...
Go to contribution page -
Enrico Bothmann (University of Göttingen)25/10/2022, 11:30Plenary
Simulated event samples from Monte-Carlo event generators (MCEGs) are a backbone of the LHC physics programme.
However, for Run III, and in particular for the HL-LHC era, computing budgets are becoming increasingly constrained, while at the same time the push to higher accuracies is making event generation significantly more expensive. Modern ML techniques can help with the effort of...
Go to contribution page -
Ana Lucia Varbanescu25/10/2022, 12:00Plenary
“Computation” has become a massive part of our daily lives; even more so in science, where many experiments and analyses rely on massive computation. Under the assumption that computation is cheap and time-to-result is the only relevant metric for all of us, we currently use computational resources at record-low efficiency.
In this talk, I argue this approach is an unacceptable waste of...
Go to contribution page -
Ying CHEN25/10/2022, 12:30Plenary
Lattice QCD is an ab initio approach to QCD and plays an indispensable role in understanding the low-energy properties of the strong interaction. The last four decades have witnessed the rapid development of lattice QCD numerical calculations along with the progress of high performance computing (HPC) techniques. Lattice QCD has become one of the most resource-consuming HPC fields. China has...
Go to contribution page -
Daniele Cesini (Universita e INFN, Bologna (IT)), Luca dell'Agnello (INFN), Dr Tommaso Boccali (INFN Sezione di Pisa)25/10/2022, 14:30Track 1: Computing Technology for Physics ResearchOral
The INFN Tier1 data center is currently located in the premises of the Physics Department of the University of Bologna, where CNAF is also located. Soon it will be moved to the “Tecnopolo”, the new facility for research, innovation, and technological development in the same city area; it will follow the installation of Leonardo, the pre-exascale supercomputing machine managed by CINECA,...
Go to contribution page -
Dr Elise de Doncker (Western Michigan University)25/10/2022, 14:30Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
The computation of loop integrals is required in high energy physics to account for higher-order corrections of the interaction cross section in perturbative quantum field theory. Depending on internal masses and external momenta, loop integrals may suffer from singularities where the integrand denominator vanishes at the boundaries, and/or in the interior of the integration domain (for...
Go to contribution page -
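A standard way to tame an integrable boundary singularity of the kind mentioned above is a variable transformation that flattens the integrand before quadrature. As a one-dimensional toy (far simpler than a loop integral, and purely illustrative; `midpoint_rule` is an invented helper):

```python
import math

def midpoint_rule(f, a, b, n=10_000):
    """Composite midpoint rule for the integral of f on [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# The integrand 1/sqrt(x) has an integrable singularity at x = 0, where
# naive quadrature converges slowly; the exact integral over [0, 1] is 2.
naive = midpoint_rule(lambda x: 1.0 / math.sqrt(x), 0.0, 1.0)

# Substituting x = t**2 (so dx = 2t dt) removes the singularity:
# the transformed integrand is the constant 2, integrated exactly.
flat = midpoint_rule(lambda t: 2.0, 0.0, 1.0)
```

The same idea, applied per singular region of the integration domain, underlies many adaptive and transformation-based approaches to loop integrals.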
Yao Zhang25/10/2022, 14:30
Track fitting and track hit classification are closely related, hence these two approaches could benefit each other. For example, if we know the underlying parameters of a track, then the track hits associated with it can be easily identified. On the other hand, if we know the hits of a track, then we can obtain the underlying parameters by fitting them. Most existing works take the second scheme...
Go to contribution page -
Nicola Mori (INFN Florence)25/10/2022, 14:50Track 1: Computing Technology for Physics ResearchOral
The HERD experiment will perform direct cosmic-ray detection at the highest energies ever reached, thanks to an innovative design that maximizes the acceptance, and its placement on the future Chinese Space Station, which will allow for an extended observation period.
Significant computing and storage resources are foreseen to be needed in order to cope with the necessities of a large...
Go to contribution page -
Ryan Liu (University of California, Berkeley)25/10/2022, 14:50
Graph Neural Networks (GNN) have recently attained competitive particle track reconstruction performance compared to traditional approaches such as combinatorial Kalman filters. In this work, we implement a version of Hierarchical Graph Neural Networks (HGNN) for track reconstruction, which creates the hierarchy dynamically. The HGNN creates “supernodes” by pooling nodes into clusters, and...
Go to contribution page -
Anja Butter25/10/2022, 14:50Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
Evaluating loop amplitudes is a time-consuming part of LHC event generation. For di-photon production with jets we show that simple Bayesian networks can learn such amplitudes and model their uncertainties reliably. A boosted training of the Bayesian network further improves the uncertainty estimate and the network precision in critical phase space regions. In general, boosted network...
Go to contribution page -
HENRY TRUONG25/10/2022, 15:10Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
Evaluation of one-loop matrix elements is computationally expensive and makes up a large proportion of time during event generation. We present a neural network emulator that builds in the factorisation properties of matrix elements and accurately reproduces the NLO k-factors for electron-positron annihilation into up to 5 jets.
We show that our emulator retains good performance for high...
Go to contribution page -
Mr Tim Schwägerl (Humboldt University of Berlin and DESY (DE))25/10/2022, 15:10
Particle track reconstruction poses a key computing challenge for future collider experiments. Quantum computing carries the potential for exponential speedups and the rapid progress in quantum hardware might make it possible to address the problem of particle tracking in the near future. The solution of the tracking problem can be encoded in the ground state of a Quadratic Unconstrained...
Go to contribution page -
Gioacchino Vino (INFN Bari (IT))25/10/2022, 15:10Track 1: Computing Technology for Physics ResearchOral
The ReCaS-Bari datacenter enriches its service portfolio with a new HPC/GPU cluster for Bari University and INFN users. This new service is the best solution for complex applications requiring a massively parallel processing architecture. The cluster is equipped with cutting-edge NVIDIA GPUs, like the V100 and A100, suitable for those applications able to use all the available parallel...
Go to contribution page -
Barry Dillon (University of Heidelberg)25/10/2022, 15:30Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
In this talk I will give an overview of our recent progress in developing anomaly detection methods for finding new physics at the LHC. I will discuss how we define anomalies in this context, and the deep learning tools that we can use to find them. I will also discuss how self-supervised representation learning techniques can be used to enhance anomaly detection methods.
Go to contribution page -
Emanuele Simili25/10/2022, 15:30Track 1: Computing Technology for Physics ResearchOral
The power consumption of computing is coming under intense scrutiny worldwide, driven both by concerns about the carbon footprint, and by rapidly rising energy costs.
ARM chips, widely used in mobile devices due to their power efficiency, are not currently in widespread use as capacity hardware on the Worldwide LHC Computing Grid.
However, the LHC experiments are increasingly able to...
Go to contribution page -
Arthur Hennequin (Massachusetts Inst. of Technology (US))25/10/2022, 15:30
As part of the Run 3 upgrade, the LHCb experiment has switched to a two-stage event trigger, fully implemented in software. The first stage of this trigger, running in real time at the collision rate of 30 MHz, is entirely implemented on commercial off-the-shelf GPUs and performs a partial reconstruction of the events.
We developed a novel strategy for this reconstruction, starting with two...
Go to contribution page -
Mr Zeno Capatti (ETH Zürich)25/10/2022, 15:50Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
Local Unitarity provides an order-by-order representation of perturbative cross-sections that realises at the local level the cancellation of final-state collinear and soft singularities predicted by the KLN theorem. The representation is obtained by manipulating the real and virtual interference diagrams contributing to transition probabilities using general local identities. As a...
Go to contribution page -
Andreas Salzburger (CERN), Beomki Yeo, Joana Niermann (Georg August Universitaet Goettingen (DE))25/10/2022, 15:50
The use of hardware acceleration, particularly of GPGPUs is one promising strategy for coping with the computing demands in the upcoming high luminosity era of the LHC and beyond. Track reconstruction, in particular, suffers from exploding combinatorics and thus could greatly profit from the massively parallel nature of GPGPUs and other accelerators. However, classical pattern recognition...
Go to contribution page -
Jacopo Cerasoli (CNRS - IPHC)25/10/2022, 16:10
Over the past few years, intriguing deviations from the Standard Model predictions have been reported in measurements of angular observables and branching fractions of $B$ meson decays, suggesting the existence of a new interaction that acts differently on the three lepton families. The Belle II experiment has unique features that allow the study of $B$ meson decays with invisible particles in the...
Go to contribution page -
Zhijun Li (Sun Yat-Sen University (CN))25/10/2022, 16:10
Detector modeling and visualization are essential in the life cycle of a High Energy Physics (HEP) experiment. Unity is a professional multi-media creation software that has the advantages of rich visualization effects and easy deployment on various platforms. In this work, we applied the method of detector transformation to convert the BESIII detector description from the offline software...
Go to contribution page -
Garima Singh (Princeton University (US))25/10/2022, 16:10
RooFit is a toolkit for statistical modeling and fitting used by most experiments in particle physics. Just as data sets from next-generation experiments grow, processing requirements for physics analysis become more computationally demanding, necessitating performance optimizations for RooFit. One possibility to speed up minimization and add stability is the use of automatic differentiation...
Go to contribution page -
Ianna Osborne (Princeton University)25/10/2022, 16:10
Awkward Arrays and RDataFrame provide two very different ways of performing calculations at scale. By adding the ability to zero-copy convert between them, users get the best of both. It gives users greater flexibility in mixing different packages and languages in their analysis.
In Awkward Array version 2, the ak.to_rdataframe function presents a view of an Awkward Array as an RDataFrame...
Go to contribution page -
Andrii Verbytskyi (Max Planck Society (DE))25/10/2022, 16:10
We present a revived version of the CERNLIB, the basis for the software ecosystems of most of the pre-LHC HEP experiments. The efforts to consolidate the CERNLIB are part of the activities of the Data Preservation for High Energy Physics collaboration to preserve the data and software of past HEP experiments. The presented version is based on the CERNLIB version 2006 with numerous...
Go to contribution page -
Simon Akar (University of Cincinnati (US))25/10/2022, 16:10
Identifying and locating proton-proton collisions in LHC experiments (known as primary vertices, or PVs) has been the topic of numerous conference talks in the past few years (2019-2021). Searches over a variety of potential architectures have yielded promising candidates for PV-finder. The UNet model, for example, has achieved an efficiency of 98% with a low false-positive rate. These...
Go to contribution page -
Dennis Klein (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE)), Dr Christian Tacke (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))25/10/2022, 16:10
The FairRoot software stack is a toolset for the simulation, reconstruction, and analysis of high energy particle physics experiments (currently used e.g. at FAIR/GSI and CERN). In this work we give insight into recent improvements of Continuous Integration (CI) for this software stack. CI is a modern software engineering method to efficiently assure software quality. We discuss relevant...
Go to contribution page -
Rahul Chauhan (CERN)25/10/2022, 16:10
After the successful adoption of Rucio as the new data management system following its inception in 2018, a subsequent step is to advertise it to users and other stakeholders. In this perspective, one of the objectives is to keep improving the tooling around Rucio. As Rucio introduces a new data management paradigm with respect to the previous model, we begin by tackling the challenges arising...
Go to contribution page -
Kaixuan Huang25/10/2022, 16:10
In High Energy Physics (HEP) experiments, the Data Quality Monitoring (DQM) system is crucial to ensure the correct and smooth operation of the experimental apparatus during data taking. DQM at the Jiangmen Underground Neutrino Observatory (JUNO) will reconstruct raw data directly from the JUNO Data Acquisition (DAQ) system and use event visualization tools to show the detector performance for high...
Go to contribution page -
Alexey Rybalchenko (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))25/10/2022, 16:10
The common ALICE-FAIR software framework ALFA offers a platform for simulation, reconstruction and analysis of particle physics experiments. FairMQ is a module of ALFA that provides building blocks for distributed data processing pipelines, composed out of components communicating via message passing. FairMQ integrates and efficiently utilizes standard industry data transport technologies,...
Go to contribution page -
Irene Andreou, Noam Mouelle (Imperial College London)25/10/2022, 16:10
We evaluate two Generative Adversarial Network (GAN) models developed by the COherent Muon to Electron Transition (COMET) collaboration to generate sequences of particle hits in a Cylindrical Drift Chamber (CDC). The models are first evaluated by measuring the similarity between distributions of particle-level, physical features. We then measure the Effectively Unbiased Fréchet Inception...
Go to contribution page -
Namitha Chithirasreemadam (University of Pisa)25/10/2022, 16:10
The [Mu2e][1] experiment will search for the CLFV neutrinoless coherent conversion of muon to electron, in the field of an Aluminium nucleus. A custom offline event display has been developed for Mu2e using [TEve][2], a ROOT based 3-D event visualisation framework. Event displays are crucial for monitoring and debugging during live data taking as well as for public outreach. A custom GUI...
Go to contribution page -
Aurora Perego (Universita & INFN, Milano-Bicocca (IT))25/10/2022, 16:10
The CMS software framework (CMSSW) has recently been extended to perform part of the physics reconstruction with NVIDIA GPUs. To avoid writing a different implementation of the code for each back-end, the decision was to use a performance portability library, and Alpaka has been chosen as the solution for Run 3.
In the meantime, different studies have been performed to test the track...
Go to contribution page -
Stefan Rua (Aalto University)25/10/2022, 16:10
The CMS collaboration has a growing interest in the use of heterogeneous computing and accelerators to reduce the costs and improve the efficiency of the online and offline data processing: online, the High Level Trigger is fully equipped with NVIDIA GPUs; offline, a growing fraction of the computing power is coming from GPU-equipped HPC centres. One of the topics where accelerators could be...
Go to contribution page -
Dalila Salamani (CERN)25/10/2022, 16:10
The description of the development of particle cascades in the calorimeter of a high energy physics experiment relies on precise simulation of particle interactions with matter. Such simulation is inherently slow and constitutes a challenge for HEP experiments. Furthermore, with the upcoming high-luminosity upgrade of the Large Hadron Collider and a much increased data production rate, the amount of required...
Go to contribution page -
Eric Cano (CERN)25/10/2022, 16:10
GPU applications require a structure of array (SoA) layout for the data to achieve good memory access performance. During the development of the CMS Pixel reconstruction for GPUs, the Patatrack developers crafted various techniques to optimise the data placement in memory and its access inside GPU kernels. The work presented here gathers, automates and extends those patterns, and offers a...
Go to contribution page -
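The structure-of-arrays idea behind this work can be illustrated in a few lines of Python: instead of one record per hit, each field gets its own contiguous array, which is what enables coalesced memory access in GPU kernels. This is a conceptual sketch with made-up field names, not the Patatrack C++ machinery:

```python
from array import array

# AoS: one record per hit, so fields are interleaved in memory
hits_aos = [{"x": 1.0, "y": 2.0, "charge": 3}, {"x": 4.0, "y": 5.0, "charge": 6}]

# SoA: one contiguous array per field, enabling coalesced access on GPUs
class HitsSoA:
    def __init__(self, hits):
        self.x = array("d", (h["x"] for h in hits))
        self.y = array("d", (h["y"] for h in hits))
        self.charge = array("q", (h["charge"] for h in hits))

    def __getitem__(self, i):
        # Reassemble a logical record on demand
        return {"x": self.x[i], "y": self.y[i], "charge": self.charge[i]}

soa = HitsSoA(hits_aos)
print(soa[1])
```

The point of the techniques gathered in the talk is to generate and pad such per-field layouts automatically rather than writing them by hand.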
Annika Stein (Rheinisch Westfaelische Tech. Hoch. (DE)), Spandan Mondal (RWTH Aachen (DE))25/10/2022, 16:10
In the field of high-energy physics, deep learning algorithms continue to gain in relevance and provide performance improvements over traditional methods, for example when identifying rare signals or finding complex patterns. From an analyst’s perspective, obtaining highest possible performance is desirable, but recently, some focus has been laid on studying robustness of models to investigate...
Go to contribution page -
Benno Kach (Deutsches Elektronen-Synchrotron (DE))25/10/2022, 16:10
In this study, jets with up to 30 particles are modelled using Normalizing Flows with Rational Quadratic Spline coupling layers. The invariant mass of the jet is a powerful global feature to control whether the flow-generated data contains the same high-level correlations as the training data. The use of normalizing flows without conditioning shows that they lack the expressive power to do...
Go to contribution page -
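The jet invariant mass used above as a global control feature follows from the summed four-momenta of the jet constituents, m^2 = (sum E)^2 - |sum p|^2. A minimal sketch in natural units (c = 1), with an illustrative two-particle jet:

```python
import math

def jet_invariant_mass(particles):
    """Invariant mass of a jet from its constituents' four-momenta (E, px, py, pz).

    m^2 = (sum E)^2 - |sum p|^2, in natural units (c = 1).
    """
    E  = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    # clamp tiny negative values caused by floating-point rounding
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Two back-to-back massless particles of energy 5 each: m = 10
print(jet_invariant_mass([(5.0, 0.0, 0.0, 5.0), (5.0, 0.0, 0.0, -5.0)]))  # 10.0
```

Because the mass correlates all constituents at once, it probes exactly the high-level correlations that an unconditioned flow struggles to reproduce.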
Rosamaria Venditti (Universita e INFN, Bari (IT))25/10/2022, 16:10
The CMS experiment employs an extensive data quality monitoring (DQM) and data certification (DC) procedure. Currently, this approach consists mainly of the visual inspection of reference histograms which summarize the status and performance of the detector. Recent developments in several of the CMS subsystems have shown the potential of computer-assisted DQM and DC using autoencoders,...
Go to contribution page -
Wuming Luo (Institute of High Energy Physics, Chinese Academy of Science)25/10/2022, 16:10
Jiangmen Underground Neutrino Observatory (JUNO), located in the southern part of China, will be the world's largest liquid scintillator (LS) detector. Equipped with 20 kton of LS, 17623 20-inch PMTs and 25600 3-inch PMTs in the central detector, JUNO will provide a unique apparatus to probe the mysteries of neutrinos, particularly the neutrino mass ordering puzzle. One of the challenges for JUNO...
Go to contribution page -
Felice Pantaleo (CERN)25/10/2022, 16:10
The Particle Flow (PF) algorithm, used for the majority of CMS data analyses for event reconstruction, provides a comprehensive list of final-state particle candidates and enables efficient identification and mitigation methods for simultaneous proton-proton collisions (pileup). The higher instantaneous luminosity expected during the upcoming LHC Run 3 will impose challenges for CMS event...
Go to contribution page -
Oscar Roberto Chaparro Amaro (Instituto Politécnico Nacional. Centro de Investigación en Computación)25/10/2022, 16:10
Density Functional Theory (DFT) is a widely used ab initio method for calculating the electronic properties of molecules. Compared with Hartree-Fock methods, DFT offers appropriate approximations in terms of computation time. Recently, the DFT method has been used for discovering and analyzing protein interactions by means of calculating the free energies of these macro-molecules from...
Go to contribution page -
Daniele Spiga (Universita e INFN, Perugia (IT))25/10/2022, 16:10
Computing resources in the Worldwide LHC Computing Grid (WLCG) have been based entirely on the x86 architecture for more than two decades. In the near future, however, heterogeneous non-x86 resources, such as ARM, POWER and RISC-V, will become a substantial fraction of the resources that will be provided to the LHC experiments, due to their presence in existing and planned world-class HPC...
Go to contribution page -
Elias Leutgeb (Technische Universitaet Wien (AT))25/10/2022, 16:10
The CMS Level-1 Trigger, for its operation during Phase-2 of LHC, will undergo a significant upgrade and redesign. The new trigger system, based on multiple families of custom boards, equipped with Xilinx Ultrascale Plus FPGAs and interconnected with high speed optical links at 25 Gb/s, will exploit more detailed information from the detector subsystems (calorimeter, muon systems, tracker). In...
Go to contribution page -
John Lawrence (University of Notre Dame (US))25/10/2022, 16:10
With the start of run 3 in 2022, the LHC has entered a new period, now delivering higher energy and luminosity proton beams to the Compact Muon Solenoid (CMS) experiment. These increases make it critical to maintain and upgrade the tools and methods used to monitor the rate at which data is collected (the trigger rate). Software tools have been developed to allow for automated rate monitoring,...
Go to contribution page -
Bernhard Manfred Gruber (Technische Universitaet Dresden (DE))25/10/2022, 16:10
Choosing the best memory layout for each hardware architecture is increasingly important as more and more programs become memory bound. For portable codes that run across heterogeneous hardware architectures, the choice of the memory layout for data structures is ideally decoupled from the rest of a program.
The low-level abstraction of memory access (LLAMA) is a C++ library that provides a...
Go to contribution page -
Giulia Lavizzari25/10/2022, 16:10
We present a machine-learning based method to detect deviations from a reference model, in an almost independent way with respect to the theory assumed to describe the new physics responsible for the discrepancies.
The analysis is based on an Effective Field Theory (EFT) approach: under this hypothesis the Lagrangian of the system can be written as an infinite expansion of terms, where the...
Go to contribution page -
Moritz David Bauer25/10/2022, 16:10
The Belle II experiment at the second-generation e+/e- B-factory SuperKEKB has been collecting data since 2019 and aims to accumulate 50 times more data than the first-generation experiment, Belle.
To efficiently process these steadily growing datasets of recorded and simulated data that end up on the order of 100 PB and to support Grid-based analysis workflows using the DIRAC Workload...
Go to contribution page -
Mate Zoltan Farkas (Rheinisch Westfaelische Tech. Hoch. (DE))25/10/2022, 16:40
The continuous growth in model complexity in high-energy physics (HEP) collider experiments demands increasingly time-consuming model fits. We show first results on the application of conditional invertible networks (cINNs) to this challenge. Specifically, we construct and train a cINN to learn the mapping from signal strength modifiers to observables and its inverse. The resulting network...
Go to contribution page -
Jack Y. Araz (IPPP - Durham University)25/10/2022, 16:40Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
Tensor Networks (TN) are approximations of high-dimensional tensors designed to represent locally entangled quantum many-body systems efficiently. In this talk, we will discuss how to use TN to connect quantum mechanical concepts to machine learning techniques, thereby facilitating the improved interpretability of neural networks. As an application, we will use top jet classification against...
Go to contribution page -
Mr Xuyang Zhou25/10/2022, 16:40Track 1: Computing Technology for Physics ResearchOral
With the continuous increase in the amount of large data generated and stored in various scientific fields, such as cosmic-ray detection, compression technology becomes more and more important in reducing the requirements on communication bandwidth and storage capacity. Zstandard, abbreviated as zstd, is a fast lossless compression algorithm. For zlib-level real-time compression scenarios, it...
Go to contribution page -
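The size/speed trade-off across compression levels that motivates zstd can be demonstrated with the standard library. The sketch below uses zlib, the baseline the abstract compares against (zstd itself needs the third-party `zstandard` package); the payload is synthetic and deliberately repetitive:

```python
import zlib

payload = b"waveform sample " * 4096  # repetitive, detector-like payload

for level in (1, 6, 9):
    compressed = zlib.compress(payload, level)
    assert zlib.decompress(compressed) == payload  # lossless round trip
    print(f"level {level}: {len(payload)} -> {len(compressed)} bytes")
```

Higher levels shrink the output further at the cost of CPU time; zstd's appeal is reaching zlib-like ratios at much higher throughput.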
Michele Grossi (CERN)25/10/2022, 17:00Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
The potential exponential speed-up of quantum computing compared to classical computing makes it a promising method for High Energy Physics (HEP) simulations at the LHC at CERN.
Generative modeling is a promising task for near-term quantum devices; the probabilistic nature of quantum mechanics allows us to exploit a new class of generative models: the quantum circuit Born machine (QCBM)...
Go to contribution page -
Moonzarin Reza (Texas A&M University)25/10/2022, 17:00
Constraining cosmological parameters, such as the amount of dark matter and dark energy, to high precision requires very large quantities of data. Modern survey experiments like DES, LSST, and JWST, are acquiring these data sets. However, the volumes and complexities of these data – variety, systematics, etc. – show that traditional analysis methods are insufficient to exhaust the information...
Go to contribution page -
Yueyang Ying (Massachusetts Inst. of Technology (US))25/10/2022, 17:00Track 1: Computing Technology for Physics ResearchOral
Lossy compression algorithms are incredibly useful due to powerful compression results. However, lossy compression has historically presented a trade-off between the retained precision and the resulting size of data compressed with a lossy algorithm. Previously, we introduced BLAST, a state-of-the-art compression algorithm developed by Accelogic. We presented results that demonstrated BLAST...
Go to contribution page -
Oksana Shadura (University of Nebraska Lincoln (US))25/10/2022, 17:20
The IRIS-HEP Analysis Grand Challenge (AGC) is designed to be a realistic environment for investigating how analysis methods scale to the demands of the HL-LHC. The analysis task is based on publicly available Open Data and allows for comparing usability and performance of different approaches and implementations. It includes all relevant workflow aspects from data delivery to statistical...
Go to contribution page -
Dr Meifeng Lin (Brookhaven National Laboratory (US))25/10/2022, 17:20Track 1: Computing Technology for Physics ResearchOral
The evolution of the computing landscape has resulted in the proliferation of diverse hardware architectures, with different flavors of GPUs and other compute accelerators becoming more widely available. To facilitate the efficient use of these architectures in a heterogeneous computing environment, several programming models are available to enable portability and performance across different...
Go to contribution page -
Oriel Orphee Moira Kiss (Universite de Geneve (CH))25/10/2022, 17:20Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
Accurate molecular force fields are of paramount importance for the efficient implementation of molecular dynamics techniques at large scales. In the last decade, machine learning methods have demonstrated impressive performances in predicting accurate values for energy and forces when trained on finite size ensembles generated with ab initio techniques. At the same time, quantum computers...
Go to contribution page -
Baidyanath Kundu (Princeton University (US))25/10/2022, 17:40Track 1: Computing Technology for Physics ResearchOral
The simplicity of Python and the power of C++ provide a hard choice for a scientific software stack. There have been multiple developments to mitigate the hard language boundaries by implementing language bindings. The static nature of C++ and the dynamic nature of Python are problematic for bindings provided by library authors and in particular features such as template instantiations with...
Go to contribution page -
Maximilian Mucha (University of Bonn (DE))25/10/2022, 17:40
The Federation is a new machine learning technique for handling large amounts of data in a typical high-energy physics analysis. It utilizes Uniform Manifold Approximation and Projection (UMAP) to create an initial low-dimensional representation of a given data set, which is clustered by using Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN). These clusters...
Go to contribution page -
Axel Naumann (CERN), Lucia Silvestris (Universita e INFN, Bari (IT))26/10/2022, 09:15
-
Anna Scaife (University of Manchester)26/10/2022, 09:30Plenary
The expected volume of data from the new generation of scientific facilities such as the Square Kilometre Array (SKA) radio telescope has motivated the expanded use of semi-automatic and automatic machine learning algorithms for scientific discovery in astronomy. In this field, the robust and systematic use of machine learning faces a number of specific challenges, including both a lack of...
Go to contribution page -
Andrea Wulzer (Universita e INFN, Padova (IT))26/10/2022, 10:00Plenary
Strategies to detect data departures from a given reference model, with no prior bias on the nature of the new physical model responsible for the discrepancy, might play a vital role in experimental programs where, as at the LHC, increasingly rich experimental data are accompanied by increasingly blurred theoretical guidance in their interpretation. I will describe one such strategy that...
Go to contribution page -
Johann Usovitsch26/10/2022, 10:30Plenary
The new concepts of future electron-positron colliders such as Future Circular Collider, International Linear Collider or Circular Electron-Positron Collider push the precision state-of-the-art in experimental measurements. The tremendous efforts of experimental physicists to test the immense predictive power of the Standard Model are limited by the intrinsic uncertainties in the currently...
Go to contribution page -
Yu Hu26/10/2022, 11:00
High-performance fourth-generation synchrotron radiation light sources, e.g. the High Energy Photon Source (HEPS), have been proposed and built successively. The advent of beamlines at fourth-generation synchrotron sources and of advanced detectors has pushed the demand for computing resources to the edge of current workstation capabilities. On the other hand, the...
Go to contribution page -
Yu Gao26/10/2022, 11:00
ROOT TTree has been widely used in the analysis and storage of various high-energy physics experiment data. The event data generated by an experiment are stored in the TTree's branches and further compressed and archived into a standard ROOT format file. At present, ROOT supports the compressed storage of TBasket, the buffer of TBranch, using compression algorithms such as zlib, lzma, lz4, zstd,...
Go to contribution page -
Max Fischer (Karlsruhe Institute of Technology)26/10/2022, 11:00
Modern high energy physics experiments and similar compute intensive fields are pushing the limits of dedicated grid and cloud infrastructure. In the past years research into augmenting this dedicated infrastructure by integrating opportunistic resources, i.e. compute resources temporarily acquired from third party resource providers, has yielded various strategies to approach this challenge....
Go to contribution page -
Andrew Schick26/10/2022, 11:00
A precise measurement of the polarizability of the charged pion provides an important experimental test of our understanding of low-energy QCD. The goal of the Charged Pion Polarizability (CPP) experiment in Hall D at JLab, currently underway, is to make a precision measurement of this quantity through a high statistics study of the γγ → π+π− reaction near 2π threshold. The production of...
Go to contribution page -
Corentin Allaire (Université Paris-Saclay (FR)), Rocky Bala Garg (Stanford University (US))26/10/2022, 11:00
The reconstruction of particle trajectories is a key challenge of particle physics experiments as it directly impacts particle reconstruction and physics performances. To reconstruct these trajectories, different reconstruction algorithms are used sequentially. Each of these algorithms use many configuration parameters that need to be fine-tuned to properly account for the...
Go to contribution page -
Yuyi Wang (Tsinghua University)26/10/2022, 11:00
One way to improve the position and energy resolution in neutrino experiments is to provide the reconstruction method with high-resolution parameters. These parameters, the photoelectron (PE) hit times and the expectation of the PE count, can be extracted from the waveforms. We developed a new waveform analysis method called Fast Stochastic Matching Pursuit (FSMP). It is based on Bayesian...
Go to contribution page -
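The matching-pursuit family of methods that FSMP builds on can be illustrated with a greedy, non-Bayesian toy: repeatedly find the template shift most correlated with the residual waveform, record it as a PE hit time, and subtract the template. The pulse shape, the fixed PE count, and all names are assumptions for illustration, not FSMP itself:

```python
def matching_pursuit(waveform, template, n_pe):
    """Greedy matching pursuit: explain a waveform as a sum of single-PE templates."""
    residual = list(waveform)
    hits = []
    for _ in range(n_pe):
        # shift of the template most correlated with the current residual
        best_t = max(
            range(len(residual) - len(template) + 1),
            key=lambda t: sum(residual[t + i] * template[i]
                              for i in range(len(template))),
        )
        hits.append(best_t)
        for i, v in enumerate(template):
            residual[best_t + i] -= v   # subtract the explained pulse
    return sorted(hits)

template = [1.0, 2.0, 1.0]              # single-PE pulse shape (assumed)
waveform = [0.0] * 20
for t0 in (4, 11):                      # two PE hits at times 4 and 11
    for i, v in enumerate(template):
        waveform[t0 + i] += v

print(matching_pursuit(waveform, template, n_pe=2))  # [4, 11]
```

The Bayesian, stochastic ingredients of FSMP replace this greedy choice with posterior sampling over hit times and PE counts.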
Ms Xiaoqian Jia (Shandong University)26/10/2022, 11:00
Track reconstruction (or tracking) plays an essential role in the offline data processing of collider experiments. For the BESIII detector working in the tau-charm energy region, plenty of efforts were made previously to improve the tracking performance with traditional methods, such as pattern recognition and Hough transform etc. However, for challenging tasks, such as the tracking of low...
Go to contribution page -
Simon Schnake (DESY / RWTH Aachen University)26/10/2022, 11:00
In particle physics, precise simulations are necessary to enable scientific progress. However, accurate simulations of the interaction processes in calorimeters are complex and computationally very expensive, demanding a large fraction of the available computing resources in particle physics at present. Various generative models have been proposed to reduce this computational cost. Usually,...
Go to contribution page -
Carlos Perez Dengra (PIC-CIEMAT)26/10/2022, 11:00
The XRootD protocol is used by the CMS experiment at the LHC to access, transfer, and store data within Worldwide LHC Computing Grid (WLCG) sites running different kinds of jobs on their compute nodes. Its redirector system allows some execution tasks to run by accessing input data stored on any WLCG site. In 2029 the Large Hadron Collider (LHC) will start the High-Luminosity LHC (HL-LHC)...
Go to contribution page -
Tao Lin (Chinese Academy of Sciences (CN))26/10/2022, 11:00
The Jiangmen Underground Neutrino Observatory (JUNO) has a very rich physics program which primarily aims at the determination of the neutrino mass ordering and at the precise measurement of oscillation parameters. It is under construction in South China at a depth of about 700 m underground. As data taking will start in 2023, a complete data processing chain is being developed before the data...
Go to contribution page -
Anish Biswas (Princeton University (US))26/10/2022, 11:00
Awkward Array is a library for nested, variable-sized data, including arbitrary-length lists, records, mixed types, and missing data, using NumPy-like idioms. Auto-differentiation (also known as “autograd” and “autodiff”) is a technique for computing the derivative of a function defined by an algorithm, which requires the derivative of all operations used in that algorithm to be known.
The...
Go to contribution page -
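The requirement stated above, that the derivative of every operation used in an algorithm must be known, is exactly what forward-mode autodiff implements. A minimal dual-number sketch (a generic illustration of the technique, unrelated to Awkward Array's actual implementation):

```python
class Dual:
    """Forward-mode autodiff via dual numbers: (value, derivative) pairs.

    Each arithmetic operation propagates its own derivative rule, which is
    the per-operation knowledge autodiff relies on.
    """
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)  # product rule

    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

x = Dual(4.0, 1.0)                 # seed derivative dx/dx = 1
print(f(x).val, f(x).der)          # 57.0 26.0
```

Extending this idea to nested, variable-sized arrays is the part specific to the Awkward Array work.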
Ameya Thete (Birla Institute of Technology and Science, Pilani - KK Birla Goa Campus (IN))26/10/2022, 11:00
A broad range of particle physics data can be naturally represented as graphs. As a result, Graph Neural Networks (GNNs) have gained prominence in HEP and have increasingly been adopted for a wide array of particle physics tasks, including particle track reconstruction. Most problems in physics involve data that have some underlying compatibility with symmetries. These problems may either...
Go to contribution page -
Charles Leggett (Lawrence Berkeley National Lab (US))26/10/2022, 11:00
High-energy physics (HEP) experiments have developed millions of lines of code over decades that are optimized to run on traditional x86 CPU systems. However, we are seeing a rapidly increasing fraction of floating point computing power in leadership-class computing facilities and traditional data centers coming from new accelerator architectures, such as GPUs. HEP experiments are now faced...
Go to contribution page -
Matteo Barbetti (Universita e INFN, Firenze (IT))26/10/2022, 11:00
The simplest and often most effective way of parallelizing the training of complex Machine Learning models is to execute several training instances on multiple machines, possibly scanning the hyperparameter space to optimize the underlying statistical model and the learning procedure.
Often, such a meta-learning procedure is limited by the ability to securely access a common database...
Go to contribution page -
Eric Wulff (CERN)26/10/2022, 11:00
In the European Center of Excellence in Exascale Computing "Research on AI- and Simulation-Based Engineering at Exascale" (CoE RAISE), researchers from science and industry develop novel, scalable Artificial Intelligence technologies towards Exascale. In this work, we leverage European High performance Computing (HPC) resources to perform large-scale hyperparameter optimization (HPO),...
Go to contribution page -
Tao Lin (Chinese Academy of Sciences (CN))26/10/2022, 11:00
The Jiangmen Underground Neutrino Observatory (JUNO) is under construction in South China and will start data taking in 2023. It has a central detector with a 20-kt liquid scintillator, equipped with 17,612 20-inch PMTs (photo-multiplier tubes) and 25,600 3-inch PMTs. The requirement of a 3% energy resolution at 1 MeV makes the offline data processing challenging, so several machine learning...
Go to contribution page -
Erica Brondolin (CERN)26/10/2022, 11:00
CLUE is a fast and innovative density-based clustering algorithm to group digitized energy deposits (hits) left by a particle traversing the active sensors of a high-granularity calorimeter in clusters with a well-defined seed hit. Outliers, i.e. hits which do not belong to any clusters, are also identified. Its outstanding performance has been proven in the context of the CMS Phase-2 upgrade...
Go to contribution page -
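The density-based idea behind CLUE (local density, nearest higher-density neighbour, seed promotion, outlier rejection) can be sketched in plain Python. This toy is only in the spirit of the published algorithm, with invented parameter names and no tiling or parallelism; it is not the actual CLUE implementation:

```python
import math

def clue_like(points, dc=1.0, rho_c=2, delta_c=2.0):
    """Toy density clustering in the spirit of CLUE (not the real code).

    rho[i]:   local density = number of points within distance dc
    nh[i]:    nearest point with higher density (ties broken by index)
    delta[i]: distance to that point
    Seeds are dense points far from any denser one; sparse points far
    from any denser one are outliers (-1).
    """
    n = len(points)
    d = [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]
    rho = [sum(1 for j in range(n) if d[i][j] < dc) for i in range(n)]
    nh, delta = [-1] * n, [float("inf")] * n
    for i in range(n):
        for j in range(n):
            higher = rho[j] > rho[i] or (rho[j] == rho[i] and j < i)
            if higher and d[i][j] < delta[i]:
                nh[i], delta[i] = j, d[i][j]
    labels, next_id = [-1] * n, 0
    for i in sorted(range(n), key=lambda k: (-rho[k], k)):  # densest first
        if rho[i] >= rho_c and delta[i] > delta_c:
            labels[i], next_id = next_id, next_id + 1       # promote to seed
        elif nh[i] != -1 and delta[i] <= delta_c:
            labels[i] = labels[nh[i]]                       # follow denser point
    return labels

# Two well-separated "energy deposit" blobs plus one isolated outlier hit.
hits = [(0, 0), (0.5, 0), (0, 0.5), (10, 10), (10.5, 10), (10, 10.5), (50, 50)]
labels = clue_like(hits)
```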
Matteo Barbetti (Universita e INFN, Firenze (IT))26/10/2022, 11:00
About 90% of the computing resources available to the LHCb experiment have been spent producing simulated data samples for Run 2 of the Large Hadron Collider. The upgraded LHCb detector will operate at a much increased luminosity, requiring many more simulated events for Run 3. Simulation is a key necessity for analyses to interpret data in terms of signal and background and to estimate relevant...
Go to contribution page -
Alessandra Carlotta Re (Universita' degli Studi & INFN of Milano (Italy))26/10/2022, 11:00
The Jiangmen Underground Neutrino Observatory (JUNO) is under construction in South China at a depth of about 700 m underground; data taking is expected to start in late 2023. JUNO has a very rich physics program which primarily aims at determining the neutrino mass ordering and precisely measuring the oscillation parameters. The JUNO average raw data volume is expected...
Go to contribution page -
Thomas Madlener (Deutsches Elektronen-Synchrotron (DESY))26/10/2022, 11:00
The podio event data model (EDM) toolkit provides an easy way to generate a performant implementation of an EDM from a high level description in yaml format. We present the most recent developments in podio, most importantly the inclusion of a schema evolution mechanism for generated EDMs as well as the "Frame", a thread safe, generalized event data container. For the former we discuss some of...
Go to contribution page -
Tim Voigtlaender (KIT - Karlsruhe Institute of Technology (DE))26/10/2022, 11:00
Machine Learning (ML) applications, which have become quite common tools for many High Energy Physics (HEP) analyses, benefit significantly from GPU resources. GPU clusters are important to fulfill the rapidly increasing demand for GPU resources in HEP. Therefore, the Karlsruhe Institute of Technology (KIT) provides a GPU cluster for HEP accessible from the physics institute via its batch...
Go to contribution page -
Davide Valsecchi (ETH Zurich (CH))26/10/2022, 11:00
The reconstruction of electrons and photons in CMS depends on topological clustering of the energy deposited by an incident particle in different crystals of the electromagnetic calorimeter (ECAL). These clusters are formed by aggregating neighbouring crystals according to the expected topology of an electromagnetic shower in the ECAL. The presence of upstream material (beampipe, tracker and...
Go to contribution page -
Adriano Di Florio (Politecnico e INFN, Bari), Giorgio Pizzati (Universita & INFN, Milano-Bicocca (IT))26/10/2022, 11:00
The future development projects for the Large Hadron Collider will steadily increase the nominal luminosity, with the ultimate goal of reaching a peak luminosity of $5 \times 10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$. This would result in up to 200 simultaneous proton collisions (pileup), posing significant challenges for the CMS detector reconstruction.
The CMS primary vertex (PV) reconstruction is a...
Go to contribution page -
Federico Scutti (Swinburne University of Technology)26/10/2022, 11:00
The pyrate framework provides a dynamic, versatile, and memory-efficient approach to data format transformations, object reconstruction and data analysis in particle physics. Developed within the context of the SABRE experiment for dark matter direct detection, pyrate relies on a blackboard design pattern where algorithms are dynamically evaluated throughout a run and scheduled by a central...
Go to contribution page -
Florian Reiss (University of Manchester (GB))26/10/2022, 11:00
The LHCb detector at the LHC is a general purpose detector in the forward region with a focus on studying decays of c- and b-hadrons. For Run 3 of the LHC (data taking from 2022), LHCb will take data at an instantaneous luminosity of $2 \times 10^{33}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$, five times higher than in Run 2 (2015-2018). To cope with the harsher data taking conditions, LHCb will deploy a purely software based...
Go to contribution page -
Melvin Strobl26/10/2022, 11:00
Quantum Computing and Machine Learning are both significant and appealing research fields. In particular, their combination has led to the emergence of the research field of quantum machine learning, which has recently gained enormous popularity. We investigate the potential advantages of this synergy for applications in high energy physics, more precisely in the reconstruction of...
Go to contribution page -
Manos Vourliotis (Univ. of California San Diego (US))26/10/2022, 11:00
One of the most challenging computational problems in Run 3 of the Large Hadron Collider (LHC), and even more so in the High-Luminosity LHC (HL-LHC), is expected to be finding and fitting charged-particle tracks during event reconstruction. The methods used so far at the LHC, and in particular at the CMS experiment, are based on the Kalman filter technique. Such methods have been shown to be robust and...
Go to contribution page -
Valentin Volkl (CERN)26/10/2022, 11:00
The Key4hep project aims to provide a turnkey software solution for the full experiment life-cycle, based on established community tools. Several future collider communities (CEPC, CLIC, EIC, FCC, and ILC) have joined to develop and adapt their workflows to use the common data model EDM4hep and common framework. Besides sharing of existing experiment workflows, one focus of the Key4hep project...
Go to contribution page -
Annabel Kropf (DESY Hamburg)26/10/2022, 11:00
LUXE (Laser Und XFEL Experiment) is a proposed experiment at DESY using the electron beam of the European XFEL and a high-intensity laser. LUXE will study Quantum Electrodynamics (QED) in the strong-field regime, where QED becomes non-perturbative. One of the key measurements is the positron rate from electron-positron pair creation, which is enabled by the use of a silicon tracking detector....
Go to contribution page -
Patrick Rieck (New York University (US))26/10/2022, 11:30
Searches for new physics set exclusion limits in parameter spaces of typically up to two dimensions. However, the relevant theory parameter space is usually of higher dimension, and only a subspace is covered due to the computing-time requirements of signal process simulations. An Active Learning approach is presented to address this limitation. Compared to the usual grid sampling, it reduces...
Go to contribution page -
Giuseppe De Laurentis (Freiburg University)26/10/2022, 11:30Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
High-multiplicity loop-level amplitude computations involve significant algebraic complexity, which is usually sidestepped by employing numerical routines. Yet, when available, final analytical expressions can display improved numerical stability and reduced evaluation times. It has been shown that significant insights into the analytic structure of the results can be obtained by tailored...
Go to contribution page -
Elena Gazzarrini (CERN)26/10/2022, 11:30Track 1: Computing Technology for Physics ResearchOral
One of the objectives of the EOSC (European Open Science Cloud) Future Project is to integrate diverse analysis workflows from Cosmology, Astrophysics and High Energy Physics in a common framework. The project’s development relies on the implementation of the Virtual Research Environment (VRE), a prototype platform supporting the goals of Dark Matter and Extreme Universe Science Projects in...
Go to contribution page -
Stefano Bagnasco (Istituto Nazionale di Fisica Nucleare, Torino)26/10/2022, 11:50Track 1: Computing Technology for Physics ResearchOral
The LIGO, VIRGO and KAGRA Gravitational-wave interferometers are getting ready for their fourth observational period, scheduled to begin in March 2023, with improved sensitivities and higher event rates.
Data from the interferometers are exchanged between the three collaborations and processed by running search pipelines for a range of expected signals, from coalescing compact binaries to...
Go to contribution page -
Egor Danilov (Fermilab and EPFL)26/10/2022, 11:50
The Hubble Tension presents a crisis for the canonical LCDM model of modern cosmology: it may originate in systematics in data processing pipelines or it may come from new physics related to dark matter and dark energy. The aforementioned crisis can be addressed by studies of time-delayed light curves of gravitationally lensed quasars, which have the capacity to constrain the Hubble constant...
Go to contribution page -
Ryan Moodie (Turin University)26/10/2022, 11:50Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
I will discuss the analytic calculation of two-loop five-point helicity amplitudes in massless QCD. In our workflow, we perform the bulk of the computation using finite field arithmetic, avoiding the precision-loss problems of floating-point representation. The integrals are provided by the pentagon functions. We use numerical reconstruction techniques to bypass intermediate complexity and...
Go to contribution page -
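The finite-field workflow mentioned above can be illustrated with a generic, textbook-style sketch: evaluate a rational function exactly modulo a large prime, then lift the residue back to a fraction by rational reconstruction. This is not the authors' amplitude pipeline, just the underlying arithmetic idea:

```python
def rational_reconstruct(a, p):
    """Lift a residue a (mod p) back to a fraction n/d with |n|, d <= sqrt(p/2).

    Standard Euclidean rational reconstruction (Wang's algorithm): run the
    extended Euclidean algorithm on (p, a) and stop once the remainder falls
    below the bound; the (remainder, Bezout coefficient) pair is (n, d).
    """
    bound = int((p / 2) ** 0.5)
    r0, r1 = p, a % p
    t0, t1 = 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 == 0 or abs(t1) > bound:
        raise ValueError("no reconstruction within bound")
    return (r1, t1) if t1 > 0 else (-r1, -t1)

# Evaluate f(x) = (3x^2 - 7) / (2x + 1) entirely in F_p: every operation is
# exact integer arithmetic, so no precision is lost at intermediate steps.
p = 2**61 - 1                         # a large Mersenne prime
x = 12345
num = (3 * x * x - 7) % p
den = (2 * x + 1) % p
f_mod_p = num * pow(den, -1, p) % p   # modular inverse instead of division
n, d = rational_reconstruct(f_mod_p, p)
```

The cross-check that `n/d` really equals `(3x^2 - 7)/(2x + 1)` is cross-multiplication, which avoids any floating-point comparison.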
Jakob Karl Eberhardt (University of Applied Sciences (DE))26/10/2022, 12:10Track 1: Computing Technology for Physics ResearchOral
Since its inception, the minimal Linux image CernVM provides a portable and reproducible runtime environment for developing and running scientific software. Its key ingredient is the tight coupling with the CernVM-FS client to provide access to the base platform (operating system and tools) as well as the experiment application software. Up to now, CernVM images are designed to use full...
Go to contribution page -
Martin Eriksen26/10/2022, 12:10
PAUS is an imaging survey with 40 narrow-band filters, using the PAUCam instrument installed at the William Herschel Telescope (WHT). Since the survey started in 2015, this instrument has acquired a unique dataset, performing a relatively deep and wide survey with simultaneously excellent redshift accuracy. The survey is a compromise in performance between a deep spectroscopic survey and a wide field...
Go to contribution page -
Garima Singh (Princeton University (US)), Vassil Vasilev (Princeton University (US))26/10/2022, 12:30
RooFit is a toolkit for statistical modeling and fitting used by most experiments in particle physics. Just as data sets from next-generation experiments grow, processing requirements for physics analysis become more computationally demanding, necessitating performance optimizations for RooFit. One possibility to speed-up minimization and add stability is the use of automatic differentiation...
Go to contribution page -
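The core idea, replacing numerical differentiation with exact derivatives propagated through the likelihood evaluation, can be illustrated with a toy forward-mode AD sketch. The abstract's work applies AD to RooFit's C++ likelihoods; the `Dual` class below is only a conceptual stand-in, not RooFit's mechanism:

```python
class Dual:
    """Forward-mode AD value: f is the value, df the derivative carried along."""
    def __init__(self, f, df=0.0):
        self.f, self.df = f, df
    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(float(o))
    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.f + o.f, self.df + o.df)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._wrap(o)
        return Dual(self.f - o.f, self.df - o.df)
    def __mul__(self, o):
        o = self._wrap(o)
        return Dual(self.f * o.f, self.df * o.f + self.f * o.df)  # product rule
    __rmul__ = __mul__

def nll(mu, data):
    """Toy Gaussian negative log-likelihood (unit width, constants dropped)."""
    total = Dual(0.0)
    for x in data:
        r = mu - x
        total = total + 0.5 * (r * r)
    return total

data = [1.0, 2.0, 3.0]
mu = 0.0
for _ in range(100):
    grad = nll(Dual(mu, 1.0), data).df  # seed df=1: exact d(NLL)/d(mu)
    mu -= 0.1 * grad                    # plain gradient-descent step
```

The gradient is exact at every step, with none of the step-size tuning or cancellation error of finite differences; that is the stability argument for AD-based minimization.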
Stephen Nicholas Swatman (University of Amsterdam (NL))26/10/2022, 14:15Track 1: Computing Technology for Physics ResearchOral
Vector fields are ubiquitous mathematical structures in many scientific domains including high-energy physics where — among other things — they are used to represent magnetic fields. Computational methods in these domains require methods for storing and accessing vector fields which are both highly performant and usable in heterogeneous environments. In this paper we present...
Go to contribution page -
Henri Hugo Sieber (ETH Zurich (CH))26/10/2022, 14:15Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
The search for New Physics through Dark Sectors is an exciting possibility to explain, among other things, the origin of Dark Matter (DM). Within this context, the sensitivity study of a given experiment is a key point in estimating its potential for discovery. In this contribution we present the fully GEANT4-compatible Monte Carlo simulation package for production and propagation of DM particles,...
Go to contribution page -
Vincenzo Eduardo Padulano (Valencia Polytechnic University (ES))26/10/2022, 14:15
The growing amount of data generated by the LHC requires a shift in how HEP analysis tasks are approached. Usually, the workflow involves opening a dataset, selecting events, and computing relevant physics quantities to aggregate into histograms and summary statistics. The required processing power is often so high that the work needs to be distributed over multiple cores and multiple nodes....
Go to contribution page -
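The workflow described, open a dataset, select events, define derived quantities, and aggregate into histograms, is often expressed as a lazily evaluated chain. A toy sketch loosely modelled on the RDataFrame style follows; it is illustrative only and not ROOT's real API:

```python
class MiniFrame:
    """Toy, lazily evaluated analysis chain in the flavour of RDataFrame.

    Filter/Define calls are merely recorded; the single event loop runs
    only when a result such as a histogram is requested.
    """
    def __init__(self, events, ops=None):
        self.events, self.ops = events, ops or []
    def Filter(self, pred):
        return MiniFrame(self.events, self.ops + [("filter", pred)])
    def Define(self, name, fn):
        return MiniFrame(self.events, self.ops + [("define", name, fn)])
    def Histo1D(self, column, nbins, lo, hi):
        counts = [0] * nbins
        for ev in self.events:              # the one event loop
            ev, keep = dict(ev), True
            for op in self.ops:             # replay the recorded chain
                if op[0] == "filter" and not op[1](ev):
                    keep = False
                    break
                if op[0] == "define":
                    ev[op[1]] = op[2](ev)
            if keep and lo <= ev[column] < hi:
                counts[int((ev[column] - lo) / (hi - lo) * nbins)] += 1
        return counts

events = [{"pt": float(i)} for i in range(10)]
h = (MiniFrame(events)
     .Filter(lambda e: e["pt"] > 3)
     .Define("pt2", lambda e: e["pt"] ** 2)
     .Histo1D("pt2", 4, 0.0, 100.0))
```

Because the chain is data-independent until `Histo1D` runs, the same recorded operations can be shipped to many cores or nodes and the partial histograms merged, which is the distribution strategy the abstract refers to.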
Teng LI (Shandong University, CN)26/10/2022, 14:35
The Jiangmen Underground Neutrino Observatory (JUNO) experiment is designed to measure the neutrino mass ordering (NMO) using a 20-kton liquid scintillator detector to solve one of the biggest remaining puzzles in neutrino physics. Regarding the sensitivity of JUNO’s NMO measurement, besides the precise measurement of reactor neutrinos, the independent measurement of the atmospheric neutrino...
Go to contribution page -
Alberto Martini (DESY)26/10/2022, 14:35Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
Belle II is an experiment that has been taking data since 2019 at the asymmetric e+e- SuperKEKB collider, a second-generation B-factory in Tsukuba, Japan. Its goal is to perform high-precision measurements of flavor physics observables. One of the many challenges of the experiment is to have a Monte Carlo simulation with very accurate modeling of the detector, including any variation occurring during...
Go to contribution page -
Danilo Piparo (CERN)26/10/2022, 14:35Track 1: Computing Technology for Physics ResearchOral
The CMS simulation, reconstruction, and HLT code have been used to deliver an enormous number of events for analysis during Runs 1 and 2 of the LHC at CERN, and have been of fundamental importance for the CMS experiment. Several ways to improve the efficiency of these procedures will be described, showing how...
Go to contribution page -
Maxim Potekhin (Brookhaven National Laboratory (US))26/10/2022, 14:55
The sPHENIX experiment at RHIC requires substantial computing power for its complex reconstruction algorithms. One class of these algorithms is tasked with processing signals collected from the sPHENIX calorimeter subsystems, in order to extract signal features such as the amplitude, timing of the peak and the pedestal. These values, calculated for each channel, form the basis of event...
Go to contribution page -
Timo Janssen26/10/2022, 14:55Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
The generation of unit-weight events for complex scattering processes presents a severe challenge to modern Monte Carlo event generators. Even when using sophisticated phase-space sampling techniques adapted to the underlying transition matrix elements, the efficiency for generating unit-weight events from weighted samples can become a limiting factor in practical applications. Here we present...
Go to contribution page -
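Hit-or-miss unweighting, the step whose efficiency this abstract discusses, can be sketched directly. The toy weight distribution below stands in for the weighted samples a real generator produces; it is not the contribution's implementation:

```python
import random

def unweight(weighted, w_max, rng):
    """Hit-or-miss unweighting: keep each event with probability w / w_max."""
    return [ev for ev, w in weighted if rng.random() < w / w_max]

rng = random.Random(42)                     # fixed seed for reproducibility
# Toy weighted sample with a steeply falling weight distribution, mimicking
# a peaked matrix element that the phase-space sampler maps imperfectly.
sample = [(i, 1.0 / (1 + i)) for i in range(10000)]
w_max = max(w for _, w in sample)
kept = unweight(sample, w_max, rng)
# Unweighting efficiency ~ <w>/w_max: tiny when weights span many orders.
efficiency = len(kept) / len(sample)
```

The efficiency here is well below 1%, which illustrates why a single rare high-weight event can cripple unit-weight event generation for complex processes.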
Aryan Roy26/10/2022, 14:55Track 1: Computing Technology for Physics ResearchOral
Uproot reads ROOT TTrees using pure Python. For numerical and (singly) jagged arrays, this is fast because a whole block of data can be interpreted as an array without modifying the data. For other cases, such as arrays of std::vector<std::vector<float>>, numerical data are interleaved with structure, and the only way to deserialize them is with a sequential algorithm. When written in... -
Go to contribution page
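The contrast drawn here, bulk reinterpretation for flat arrays versus sequential walking for nested ones, can be sketched with a toy byte layout. This is illustrative only and not ROOT's actual serialization format:

```python
import struct

# Flat numerical data: the whole buffer reinterprets as an array in one go.
flat_bytes = struct.pack(">4f", 1.0, 2.0, 3.0, 4.0)
flat = list(struct.unpack(">4f", flat_bytes))

# Jagged data (think std::vector<std::vector<float>>): element counts are
# interleaved with content, so reading is inherently sequential.
def serialize_jagged(lists):
    out = struct.pack(">i", len(lists))
    for sub in lists:
        out += struct.pack(">i", len(sub)) + struct.pack(f">{len(sub)}f", *sub)
    return out

def deserialize_jagged(buf):
    pos = 0
    (n,) = struct.unpack_from(">i", buf, pos); pos += 4
    out = []
    for _ in range(n):
        (m,) = struct.unpack_from(">i", buf, pos); pos += 4
        out.append(list(struct.unpack_from(f">{m}f", buf, pos))); pos += 4 * m
    return out

jagged = deserialize_jagged(serialize_jagged([[1.0], [], [2.0, 3.0]]))
```

Each element's position depends on the lengths read before it, so the loop cannot be vectorized the way the flat `unpack` can, which is why pure-Python deserialization of doubly jagged data is slow.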
Ankur Singha26/10/2022, 15:15Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
In Lattice Field Theory, one of the key drawbacks of the Markov Chain Monte Carlo (MCMC) simulation is the critical slowing down problem. Generative machine learning methods, such as normalizing flows, offer a promising solution to speed up MCMC simulations, especially in the critical region. However, training these models for different parameter values of the lattice theory is inefficient. We...
Go to contribution page -
Mr Yu Hu (Institute of High Energy Physics, CAS), Ms Xiaomeng Qiu (Zhengzhou University)26/10/2022, 15:15
Nowadays, medical images play a central role in medical diagnosis, and computed tomography, nuclear magnetic resonance, ultrasound and other imaging technologies have become powerful means of in vitro imaging. Extracting lesion information from these images can enable doctors to observe and diagnose lesions more effectively, so as to improve diagnostic accuracy. Therefore,...
Go to contribution page -
Marco Lorusso (Universita e INFN, Bologna (IT))26/10/2022, 15:15Track 1: Computing Technology for Physics ResearchOral
In the past few years, using Machine and Deep Learning techniques has become more and more viable, thanks to the availability of tools which allow people without specific knowledge in the realm of data science and complex networks to build AIs for a variety of research fields. This process has encouraged the adoption of such techniques: in the context of High Energy Physics, new algorithms...
Go to contribution page -
Gokhan Unel (University of California Irvine (US))26/10/2022, 15:35Track 1: Computing Technology for Physics ResearchOral
Use of declarative languages for HEP data analysis is an emerging, promising approach. One highly developed example is ADL (Analysis Description Language), an external domain specific language that expresses the analysis physics algorithm in a standard and unambiguous way, independent of frameworks. The most advanced infrastructure that executes an analysis written in the formal ADL syntax...
Go to contribution page -
Su Yeon Chang (EPFL - Ecole Polytechnique Federale Lausanne (CH))26/10/2022, 15:35
Earth Observation (EO) has experienced promising progress in the modern era via an impressive amount of research on establishing state-of-the-art Machine Learning (ML) techniques to learn from large datasets. Meanwhile, the scientific community has also extended the boundary of ML to quantum systems, opening a new research area, so-called Quantum Machine Learning (QML), to integrate...
Go to contribution page -
Axel Naumann (CERN), Lucia Silvestris (Universita e INFN, Bari (IT))27/10/2022, 09:15
-
Martin Han Minh27/10/2022, 09:30Plenary
The interpretation of detector data into observables that we can use to perform our physics analyses is an essential part of modern experimental physics. It is also among the fields benefiting most from recent advances in machine learning. In this contribution we want to highlight our event reconstruction efforts using Graph Neural Networks in the IceCube experiment. Using a...
Go to contribution page -
John Smith (IBM T. J. Watson Research Center)27/10/2022, 10:00Plenary
AI is making an enormous impact on scientific discovery. Growing volumes of data across scientific domains are enabling the use of machine learning at ever increasing scale to accelerate discovery. Examples include using knowledge extraction and reasoning over large repositories of scientific publications to quickly study scientific questions or even come up with new questions, applying AI...
Go to contribution page -
Enrico Guiraud (EP-SFT, CERN)27/10/2022, 10:30Plenary
The production, validation and revision of data analysis applications is an iterative process that occupies a large fraction of a researcher's time-to-publication. Providing interfaces that are simpler to use correctly and more performant out-of-the-box not only reduces the community's average time-to-insight but it also unlocks completely novel approaches that were previously impractically... -
Go to contribution page
Marcel Hohmann27/10/2022, 11:00
The Belle II experiment has been taking data at the SuperKEKB collider since 2018. Particle identification is a key component of the reconstruction, and several detector upgrades from Belle to Belle II were designed to maintain performance with the higher background rates. We present a method for a data-driven calibration that improves the overall particle identification performance and is... -
Go to contribution page
Maggie Voetberg, Sophia Zhou27/10/2022, 11:00
The size, complexity, and duration of telescope surveys are growing beyond the capacity of traditional methods for scheduling observations. Scheduling algorithms must have the capacity to balance multiple (often competing) observational and scientific goals, address both short-term and long-term considerations, and adapt to rapidly changing stochastic elements (e.g., weather). Reinforcement...
Go to contribution page -
Marco Barbone27/10/2022, 11:00
In this work we present the adaptation of the popular clustering algorithm DBSCAN to reconstruct the primary vertex (PV) at the hardware trigger level in collisions at the High-Luminosity LHC. Nominally, PV reconstruction is performed by a simple histogram-based algorithm. The main challenge in PV reconstruction is that the particle tracks need to be processed in a low-latency environment...
Go to contribution page -
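A 1-D DBSCAN-style clustering of track z positions, in the spirit of the approach described, can be sketched as follows. This is a greatly simplified software toy with invented parameters, not the low-latency hardware implementation the abstract presents:

```python
def dbscan_1d(zs, eps=0.1, min_pts=3):
    """DBSCAN-style vertexing on track z positions (toy illustration).

    In one dimension a cluster reduces to a maximal run of points whose
    consecutive gaps are all <= eps; runs shorter than min_pts are noise (-1).
    """
    order = sorted(range(len(zs)), key=lambda i: zs[i])
    labels, cluster, run, prev = [-1] * len(zs), -1, [], None
    for i in order:
        if prev is not None and zs[i] - zs[prev] <= eps:
            run.append(i)
        else:
            if len(run) >= min_pts:           # close the previous run
                cluster += 1
                for j in run:
                    labels[j] = cluster
            run = [i]
        prev = i
    if len(run) >= min_pts:                   # close the final run
        cluster += 1
        for j in run:
            labels[j] = cluster
    return labels

# Five tracks from the primary vertex, a pile-up pair (below min_pts), a stray.
zs = [0.01, 0.02, 0.03, 0.05, 0.04, 5.0, 5.02, 2.5]
labels = dbscan_1d(zs)
pv_tracks = [z for z, lab in zip(zs, labels) if lab == 0]
pv_z = sum(pv_tracks) / len(pv_tracks)   # PV estimate from the surviving cluster
```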
Jerry 🦑 Ling (Harvard University (US))27/10/2022, 11:00
Template Bayesian inference via Automatic Differentiation in JuliaLang
Binned template-fitting is one of the most important tools in the High-Energy physics (HEP) statistics toolbox. Statistical models based on combinations of histograms are often the last step in a HEP physics analysis. Both model and data can be represented in a standardized format - HistFactory (C++/XML) and more...
Go to contribution page -
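A binned template fit of the kind described can be sketched with a minimal Poisson likelihood and a signal-strength scan. The numbers are toys and the scan is deliberately crude; the actual work uses HistFactory models and automatic differentiation in Julia:

```python
import math

def nll(mu, data, signal, background):
    """Binned Poisson NLL for the template model nu_i = mu*s_i + b_i
    (a minimal HistFactory-style likelihood with no systematics)."""
    total = 0.0
    for n, s, b in zip(data, signal, background):
        nu = mu * s + b
        total += nu - n * math.log(nu)   # -log Poisson, constants dropped
    return total

signal = [5.0, 10.0, 5.0]        # signal template
background = [50.0, 40.0, 30.0]  # background template
data = [60, 60, 40]              # observed counts (consistent with mu = 2)

# Crude grid scan for the best-fit signal strength; a real tool would
# minimise with (automatically differentiated) gradients instead.
mus = [i / 100 for i in range(501)]
mu_hat = min(mus, key=lambda m: nll(m, data, signal, background))
```

Gradient-based minimization via AD replaces this scan with a handful of exact-derivative steps, which is what makes the differentiable HistFactory formulation attractive.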
Svenja Diekmann (Rheinisch Westfaelische Tech. Hoch. (DE))27/10/2022, 11:00
The usage of Deep Neural Networks (DNNs) as multi-classifiers is widespread in modern HEP analyses. In standard categorisation methods, the high-dimensional output of the DNN is often reduced to a one-dimensional distribution by exclusively passing the information about the highest class score to the statistical inference method. Correlations to other classes are hereby omitted. Moreover, in... -
Go to contribution page
Nick Smith (Fermi National Accelerator Lab. (US))27/10/2022, 11:00
To support the needs of novel collider analyses such as long-lived particle searches, considerable computing resources are spent forward-copying data products from low-level data tiers like CMS AOD and MiniAOD to reduced data formats for end-user analysis tasks. In the HL-LHC era, it will be increasingly difficult to ensure online access to low-level data formats. In this talk, we present a...
Go to contribution page -
Brunella D'Anzi (Universita e INFN, Bari (IT))27/10/2022, 11:00
The large statistical fluctuations in the ionization energy loss of charged particles in gaseous detectors imply that many measurements are needed along the particle track to get a precise mean, and this represents a limit to the particle separation capabilities that should be overcome in the design of future colliders. The cluster counting technique (dN/dx)...
Go to contribution page -
Saransh Chopra (Cluster Innovation Centre, University of Delhi)27/10/2022, 11:00
Due to the massive nature of HEP data, performance has always been a factor in its analysis and processing. Languages like C++ would be fast enough but are often challenging for beginners to grasp, and can be difficult to iterate quickly in an interactive environment. On the other hand, the ease of writing code and an extensive library ecosystem make Python an enticing choice for data analysis....
Go to contribution page -
Ali Marafi (Kuwait University (KW)), Andrea Bocci (CERN)27/10/2022, 11:00
In the past years the CMS software framework (CMSSW) has been extended to offload part of the physics reconstruction to NVIDIA GPUs. This can achieve a higher computational efficiency, but it adds extra complexity to the design of dedicated data centres and the use of opportunistic resources, like HPC centres. A possible solution to increase the flexibility of heterogeneous clusters is to...
Go to contribution page -
Andrea Di Luca (Universita degli Studi di Trento and INFN (IT))27/10/2022, 11:00
HEPD-02 is a new, upgraded version of the High Energy Particle Detector as part of a suite of instruments for the second mission of the China Seismo-Electromagnetic Satellite (CSES-02) to be launched in 2023. Designed and realized by the Italian Collaboration LIMADOU of the CSES program, it is optimized to identify fluxes of charged particles (mostly electrons and protons) and determine their...
Go to contribution page -
Michael Poat27/10/2022, 11:00
In real-time computing facilities, system, network, and security monitoring are core components for running efficiently and effectively. As many diverse things can go awry, such as load, network, process, and power issues, having a well-functioning monitoring system is imperative. In many facilities you will see the standard set of tools such as Ganglia, Grafana, Nagios, etc....
Go to contribution page -
Alexander Bogatskiy (Flatiron Institute, Simons Foundation)27/10/2022, 11:00
We hold these truths to be self-evident: that all physics problems are created unequal, that they are endowed with their unique data structures and symmetries, that among these are tensor transformation laws, Lorentz symmetry, and permutation equivariance. A lot of attention has been paid to the applications of common machine learning methods in physics experiments and theory. However, much...
Go to contribution page -
Gabor Biro (Wigner Research Centre for Physics (Wigner RCP) (HU))27/10/2022, 11:00
The ever-growing computing power necessary for the storage and data analysis of the high-energy physics experiments at CERN requires performance optimization of the existing and planned IT resources.
One of the main computing capacity consumers in the HEP software workflow is data analysis. To optimize resource usage, the concept of an Analysis Facility (AF) for Run 3 has...
Go to contribution page -
Mohamed Hemdan27/10/2022, 11:00
Particle physics experiments spend large amounts of computational effort on Monte Carlo simulations. Due to the computational expense of simulations, they are often executed and stored in large distributed computing clusters. To lessen the computational cost, physicists have introduced alternatives to speed up the simulation. Generative Adversarial Networks (GANs) are an excellent...
Go to contribution page -
Josh Bendavid (CERN), Kenneth Long (Massachusetts Inst. of Technology (US))27/10/2022, 11:00
The unprecedented volume of data and Monte Carlo simulations at the HL-LHC will pose increasing challenges for data analysis both in terms of computing resource requirements as well as "time to insight". Precision measurements with present LHC data already face many of these challenges today. We will discuss performance scaling and optimization of RDataFrame for complex physics analyses,...
Go to contribution page -
Sophie Berkman27/10/2022, 11:00
Neutrino experiments that use liquid argon time projection chamber (LArTPC) detectors are growing bigger and expect to see more neutrinos with next generation beams, and therefore will require more computing resources to reach their physics goals of measuring CP violation in the neutrino sector and exploring anomalies. These resources can be used to their full capacity by incorporating...
Go to contribution page -
Nicola De Fillipis27/10/2022, 11:00
Ultra-low-mass and high-granularity drift chambers fulfill the requirements for tracking systems of modern High Energy Physics experiments at the future high-luminosity facilities (FCC-ee or CEPC). We present how, in Helium-based gas mixtures, by measuring the arrival times of each individual ionization cluster and by using proper statistical tools, it is possible to perform a bias... -
Go to contribution page
Zef Wolffs (Nikhef National institute for subatomic physics (NL))27/10/2022, 11:00
RooFit is a toolkit for statistical modeling and fitting, and together with RooStats it is used for measurements and statistical tests by most experiments in particle physics, particularly the LHC experiments. As the LHC program progresses, physics analyses become more computationally demanding. Therefore, recent RooFit developments were focused on performance optimization, in particular to...
Go to contribution page -
Hosein Karimi Khozani (IHEP)27/10/2022, 11:00
There are established classical methods to reconstruct particle tracks from recorded hits on the particle detectors. Current algorithms do this either by cuts on some features, such as the recorded time of the hits, or by a fitting process. This is potentially error-prone and resource-consuming. For high-noise events, these issues are more critical and these methods might even fail. We have been...
Go to contribution page -
Mr Jan Stephan27/10/2022, 11:00
The alpaka library is a header-only C++17 abstraction library for development across hardware accelerators (CPUs, GPUs, FPGAs). Its aim is to provide performance portability across accelerators through the abstraction (not hiding!) of the underlying levels of parallelism. In this talk we will show the concepts behind alpaka, how it is mapped to the various underlying hardware models, and show...
Go to contribution page -
Umit Sozbilir (Universita e INFN, Bari (IT))27/10/2022, 11:00
In recent years, new technologies and new approaches have been developed in academia and industry to face the necessity to both handle and easily visualize huge amounts of data, the so-called “big data”. The increasing volume and complexity of HEP data challenge the HEP community to develop simpler and yet powerful interfaces based on parallel computing on heterogeneous platforms. Good...
Go to contribution page -
Dmitry Popov (University of Chinese Academy of Sciences (CN))27/10/2022, 11:00
Monte Carlo simulations are a vital tool for all physics programmes of particle physics experiments. Their accuracy and reliability in reproducing detector response are of the utmost importance. For the LHCb experiment, which is embarking on a new data-taking era with an upgraded detector, a full suite of verifications has been put in place for its simulation software to ensure the quality of the...
Go to contribution page -
Vasilis Belis (ETH Zurich (CH))27/10/2022, 11:00
We developed supervised and unsupervised quantum machine learning models for anomaly detection tasks at the Large Hadron Collider at CERN. Current Noisy Intermediate Scale Quantum (NISQ) devices have a limited number of qubits and qubit coherence. We designed dimensionality reduction models based on Autoencoders to accommodate the constraints dictated by the quantum hardware. Different designs...
Go to contribution page -
Giovanna Lazzari Miotto (Universidade Federál Do Rio Grande Do Sul (BR))27/10/2022, 11:00
Compared to LHC Run 1 and Run 2, future HEP experiments, e.g. at the HL-LHC, will increase the volume of generated data by an order of magnitude. In order to sustain the expected analysis throughput, ROOT's RNTuple I/O subsystem has been engineered to overcome the bottlenecks of the TTree I/O subsystem, focusing also on a compact data format, asynchronous and parallel requests, and a layered...
Go to contribution page -
Lorenzo Moneta (CERN)27/10/2022, 11:00
Through its TMVA package, ROOT provides and connects to machine learning tools for data analysis at HEP experiments and beyond. In addition, through its powerful I/O system and RDataFrame analysis tools, ROOT provides the capability to efficiently select and query input data from large data sets as typically used in HEP analysis. At the same time, several external Machine Learning tools exist...
Go to contribution page -
Irina Espejo Morales (New York University (US))27/10/2022, 11:00
MadMiner is a Python module that implements a powerful family of multivariate inference techniques that leverage both matrix element information and machine learning.
This multivariate approach requires neither the reduction of high-dimensional data to summary statistics nor any simplifications to the underlying physics or detector response.
In this paper, we address some of the...
Go to contribution page -
Nathalie Soybelman (Weizmann Institute of Science (IL)), Mr Nilotpal Kakati (Weizmann Institute of Science)27/10/2022, 11:00
The feature complexity of data recorded by particle detectors combined with the availability of large simulated datasets presents a unique environment for applying state-of-the-art machine learning (ML) architectures to physics problems. We present the Simplified Cylindrical Detector (SCD): a fully configurable GEANT4 calorimeter simulation which mimics the granularity and response...
Go to contribution page -
Felice Pantaleo (CERN), Wahid Redjeb (Rheinisch Westfaelische Tech. Hoch. (DE))27/10/2022, 11:00
To sustain the harsher conditions of the high-luminosity LHC, the CMS Collaboration is designing a novel endcap calorimeter system. The new calorimeter will predominantly use silicon sensors to achieve sufficient radiation tolerance and will maintain highly granular information in the readout to help mitigate the effects of the pile up. In regions characterized by lower radiation levels, small...
Go to contribution page -
Elham E Khoda (University of Washington (US))27/10/2022, 11:00
Recurrent neural networks have been shown to be effective architectures for many tasks in high energy physics, and thus have been widely adopted. Their use in low-latency environments has, however, been limited as a result of the difficulties of implementing recurrent architectures on field-programmable gate arrays (FPGAs). In this paper we present an implementation of two types of recurrent...
Go to contribution page -
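One reason recurrent layers are hard to port to FPGAs is that the recurrence over time steps must run in fixed-point arithmetic. A hedged toy illustration (a minimal Elman cell in NumPy, not the implementation presented in the paper) compares a float recurrence against the same recurrence with weights and state rounded to a fixed-point grid:

```python
import numpy as np

def to_fixed(x, frac_bits=8):
    """Round to a signed fixed-point grid (toy model of FPGA arithmetic)."""
    scale = 1 << frac_bits
    return np.round(x * scale) / scale

def rnn_step(h, x, Wh, Wx, frac_bits=None):
    """One Elman-RNN update; optionally quantize weights and state first."""
    if frac_bits is not None:
        h, x = to_fixed(h, frac_bits), to_fixed(x, frac_bits)
        Wh, Wx = to_fixed(Wh, frac_bits), to_fixed(Wx, frac_bits)
    return np.tanh(Wh @ h + Wx @ x)

rng = np.random.default_rng(0)
Wh = rng.normal(0.0, 0.2, (4, 4))   # recurrent weights (made-up toy values)
Wx = rng.normal(0.0, 0.3, (4, 3))   # input weights
h = np.zeros(4)                     # float reference state
hq = np.zeros(4)                    # fixed-point state
for _ in range(10):                 # unroll a sequence of 10 inputs
    x = rng.normal(size=3)
    h = rnn_step(h, x, Wh, Wx)
    hq = rnn_step(hq, x, Wh, Wx, frac_bits=8)

drift = float(np.max(np.abs(h - hq)))   # accumulated quantization error
```

With 8 fractional bits the drift stays small here; choosing the bit widths per layer is one of the design trade-offs an FPGA implementation has to make.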
Raquel Pezoa Rivera (Universidad de Valparaíso)27/10/2022, 11:00
The classification of HEP events, or separating signal events from the background, is one of the most important analysis tasks in High Energy Physics (HEP), and a foundational task in the search for new phenomena. Complex deep learning-based models have been fundamental for achieving accurate and outstanding performance in this classification task. However, the quantification of the...
Go to contribution page -
Simon David Badger (Universita e INFN Torino (IT))27/10/2022, 11:30Plenary
Precision simulations for collider phenomenology require intensive evaluations of complicated scattering amplitudes. Uncovering hidden simplicity in these basic building blocks of quantum field theory can lead us to new, efficient methods to obtain the necessary theoretical predictions. In this talk I will explore some new approaches to multi-scale loop amplitudes that can overcome...
Go to contribution page -
Sven Krippendorf27/10/2022, 12:00Plenary
In this talk I discuss how machine learning can be used for identifying underlying mathematical structures in physical systems. Geared towards relevant structures in Beyond the Standard Model physics, I will focus on how we can use ML to discover symmetries. I discuss how standard ML pipelines have to be adapted to enable such discoveries and comment on further applications of these methods in...
Go to contribution page -
Prof. Maria Spiropulu (California Institute of Technology)27/10/2022, 12:30Plenary
I will discuss fundamental particle physics intersections with quantum science and technology including embedding challenging problems on quantum computation architectures
Go to contribution page -
James David Yellen (University of Glasgow (GB))27/10/2022, 14:30Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
A decade of data-taking from the LHC has seen great progress in ruling out archetypal new-physics models up to high direct-production energy scales, but few persistent deviations from the SM have been seen. So as we head into the new data-taking era, it is of paramount importance to look beyond such archetypes and consider general BSM models that exhibit multiple phenomenological signatures....
Go to contribution page -
Maciej Dragula, Piyush Raikwar27/10/2022, 14:30
In high energy physics experiments, the calorimeter is a key detector measuring the energy of particles. These particles interact with the material of the calorimeter, creating cascades of secondary particles, the so-called showers. Describing the development of these cascades relies on precise simulation methods, which are inherently slow and constitute a challenge for HEP experiments....
Go to contribution page -
Nathan Brei (Jefferson Lab)27/10/2022, 14:30Track 1: Computing Technology for Physics ResearchOral
PHASM is a software toolkit, currently under development, for creating AI-based surrogate models of scientific code. AI-based surrogate models are widely used for creating fast and inverse simulations. The project anticipates an additional, future use case: adapting legacy code to modern hardware. Data centers are investing in heterogeneous hardware such as GPUs and FPGAs; meanwhile, many...
Go to contribution page -
Florian Rehm (CERN / RWTH Aachen University)27/10/2022, 14:50
The prospect of a possibly exponential speed-up of quantum computing compared to classical computing marks it as a promising method when searching for alternative future High Energy Physics (HEP) simulation approaches. HEP simulations, such as those at the LHC at CERN, are extraordinarily complex and therefore require immense amounts of computing hardware resources and computing time. For some HEP...
Go to contribution page -
Andrea Pasquale (University of Milan)27/10/2022, 14:50Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
We introduce a restricted version of the Riemann-Theta Boltzmann machine, a generalization of the Boltzmann machine with continuous visible and discrete integer valued hidden states. Though the normalizing higher dimensional Riemann-Theta function does not factorize, the restricted version can be trained efficiently with the method of score matching, which is based on the Fisher divergence. At...
Go to contribution page -
Thomas Owen James (CERN)27/10/2022, 14:50Track 1: Computing Technology for Physics ResearchOral
A novel data collection system, known as Level-1 (L1) Scouting, is being introduced as part of the L1 trigger of the CMS experiment at the CERN Large Hadron Collider. The L1 trigger of CMS, implemented in FPGA-based hardware, selects events at 100 kHz for full read-out, within a short 3 microsecond latency window. The L1 Scouting system collects and stores the reconstructed particle primitives...
Go to contribution page -
Wenxing Fang27/10/2022, 15:10
The Beijing Spectrometer III (BESIII) [1] is a particle physics experiment at the Beijing Electron–Positron Collider II (BEPC II) [2] which aims to study physics in the tau-charm region precisely. Currently, BESIII has collected an unprecedented amount of data and the statistical uncertainty has been reduced significantly. Therefore, the systematic uncertainty is key to obtaining more precise...
Go to contribution page -
Nicole Schulte (Technische Universitaet Dortmund (DE))27/10/2022, 15:10Track 1: Computing Technology for Physics ResearchOral
The data-taking conditions expected in Run 3 of the LHCb experiment will be unprecedented and challenging for the software and computing systems. Accordingly, the LHCb collaboration will pioneer the use of a software-only trigger system to cope with the increased event rate efficiently. The beauty physics programme of LHCb is heavily reliant on topological triggers. These are devoted to...
Go to contribution page -
Felix Hekhorn27/10/2022, 15:10Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
Continuously comparing theory predictions to experimental data is a common task in particle physics analyses, such as fitting parton distribution functions (PDFs). However, typically, both the computation of scattering amplitudes and the evolution of candidate PDFs from the fitting scale to the process scale are non-trivial, computing-intensive tasks. We develop a new stack of software tools...
Go to contribution page -
Laurits Tani (National Institute of Chemical Physics and Biophysics (EE))27/10/2022, 15:30
In contemporary high energy physics (HEP) experiments the analysis of vast amounts of data represents a major challenge. In order to overcome this challenge various machine learning (ML) methods are employed. However, in addition to the choice of the ML algorithm, a multitude of algorithm-specific parameters, referred to as hyperparameters, need to be specified in practical applications of ML...
Go to contribution page -
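As a minimal, hedged baseline for the hyperparameter optimization discussed above, plain random search over a hypothetical two-parameter space already illustrates the setup; `train_score` below is a stand-in for a real training run, and the parameter names are invented for the sketch:

```python
import random

def train_score(lr, depth):
    """Stand-in for a training run returning a validation score (hypothetical)."""
    # toy objective with a known optimum near lr = 0.1, depth = 6
    return -((lr - 0.1) ** 2) - 0.01 * (depth - 6) ** 2

random.seed(42)
space = {
    "lr": lambda: 10 ** random.uniform(-4, 0),  # log-uniform learning rate
    "depth": lambda: random.randint(2, 12),     # integer-valued depth
}

best, best_score = None, float("-inf")
for _ in range(50):                      # 50 random trials
    params = {k: draw() for k, draw in space.items()}
    s = train_score(**params)
    if s > best_score:
        best, best_score = params, s
```

More principled methods (Bayesian optimization, evolutionary strategies) replace the blind sampling loop with a model of the objective, but share this evaluate-and-compare skeleton.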
Dr Daniel Maitre (IPPP, Durham University)27/10/2022, 15:30Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
In this presentation I will show how one can perform parametric integrations using a neural network. This could be applied for example to perform the integration over the auxiliary parameters in the integrals that result from the sector decomposition of multi-loop integrals.
Go to contribution page -
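A hedged toy of the same idea (learn a parametric integral once, then evaluate it cheaply) fits a polynomial surrogate, standing in for the talk's neural network, to Monte Carlo estimates of I(s) = ∫ exp(-s x) dx over x in [0, 1], whose exact value (1 - exp(-s)) / s is known for checking:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x, s):
    """Toy integrand with auxiliary parameter s."""
    return np.exp(-s * x)        # exact integral over [0, 1]: (1 - e^{-s}) / s

# Monte Carlo estimates of I(s) on a grid of parameter points
s_grid = np.linspace(0.5, 3.0, 12)
x = rng.uniform(0.0, 1.0, 20000)
I_mc = np.array([f(x, s).mean() for s in s_grid])

# Fit a cheap surrogate (a polynomial stands in for the neural network)
coeffs = np.polyfit(s_grid, I_mc, deg=4)
surrogate = np.poly1d(coeffs)

s_test = 1.7
exact = (1 - np.exp(-s_test)) / s_test
err = abs(surrogate(s_test) - exact)
```

For multi-loop sector-decomposed integrals the integrand is far more expensive, which is exactly why amortizing the integration into a learned surrogate pays off.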
Giulia Tuci (University of Chinese Academy of Sciences (CN))27/10/2022, 15:30Track 1: Computing Technology for Physics ResearchOral
In the past four years, the LHCb experiment has been extensively upgraded, and it is now ready to start Run 3 performing a full real-time reconstruction of all collision events, at the LHC average rate of 30 MHz. At the same time, an even more ambitious upgrade is already being planned (LHCb "Upgrade-II"), and intense R&D is ongoing to boost the real-time processing capability of the...
Go to contribution page -
Alessandro Lonardo (Sapienza Universita e INFN, Roma I (IT))27/10/2022, 15:50Track 1: Computing Technology for Physics ResearchOral
APEIRON is a framework encompassing the general architecture of a distributed heterogeneous processing platform and the corresponding software stack, from the low level device drivers up to the high level programming model.
The framework is designed to be efficiently used for studying, prototyping and deploying smart trigger and data acquisition (TDAQ) systems for high energy physics...
Go to contribution page -
Daniela Mascione (Universita degli Studi di Trento and INFN (IT))27/10/2022, 15:50
Deep Learning algorithms are widely used among the experimental high energy physics communities and have proved to be extremely useful in addressing a variety of tasks. One field of application for which Deep Neural Networks can give a significant improvement is event selection at trigger level in collider experiments. In particular, trigger systems benefit from the implementation of Deep...
Go to contribution page -
Marcel Hohmann27/10/2022, 16:10
The Belle II experiment has been taking data at the SuperKEKB collider since 2018. Particle identification is a key component of the reconstruction, and several detector upgrades from Belle to Belle II were designed to maintain performance with the higher background rates.
We present a method for a data-driven calibration that improves the overall particle identification performance and is...
Go to contribution page -
Maggie Voetberg, Sophia Zhou27/10/2022, 16:10
The size, complexity, and duration of telescope surveys are growing beyond the capacity of traditional methods for scheduling observations. Scheduling algorithms must have the capacity to balance multiple (often competing) observational and scientific goals, address both short-term and long-term considerations, and adapt to rapidly changing stochastic elements (e.g., weather). Reinforcement...
Go to contribution page -
Marco Barbone27/10/2022, 16:10
In this work we present the adaptation of the popular clustering algorithm DBSCAN to reconstruct the primary vertex (PV) at the hardware trigger level in collisions at the High-Luminosity LHC. Nominally, PV reconstruction is performed by a simple histogram-based algorithm. The main challenge in PV reconstruction is that the particle tracks need to be processed in a low-latency environment...
Go to contribution page -
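In one dimension, DBSCAN's density clustering reduces to splitting sorted points at gaps wider than eps, which makes the primary-vertex idea easy to sketch. This is a hedged toy in Python, not the low-latency hardware implementation of the talk:

```python
def dbscan_1d(zs, eps=0.3, min_pts=3):
    """Minimal 1-D DBSCAN: in one dimension, density clustering amounts to
    splitting the sorted z positions wherever the gap exceeds eps, then
    keeping groups with at least min_pts members."""
    zs = sorted(zs)
    clusters, current = [], [zs[0]]
    for z in zs[1:]:
        if z - current[-1] <= eps:
            current.append(z)
        else:
            if len(current) >= min_pts:
                clusters.append(current)
            current = [z]
    if len(current) >= min_pts:
        clusters.append(current)
    return clusters

# toy tracks: two vertices near z = -1.2 and z = 2.5, plus one noise track
zs = [-1.3, -1.25, -1.1, 2.4, 2.5, 2.55, 2.6, 7.0]
clusters = dbscan_1d(zs)
pvs = [sum(c) / len(c) for c in clusters]   # vertex z = cluster mean
```

The min_pts requirement is what rejects the isolated track at z = 7.0, which a plain histogram peak finder might promote to a fake vertex.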
Jerry 🦑 Ling (Harvard University (US))27/10/2022, 16:10
Template Bayesian inference via Automatic Differentiation in JuliaLang
Binned template-fitting is one of the most important tools in the High-Energy physics (HEP) statistics toolbox. Statistical models based on combinations of histograms are often the last step in a HEP physics analysis. Both model and data can be represented in a standardized format - HistFactory (C++/XML) and more...
Go to contribution page -
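The core of a binned template fit is a small linear algebra problem per histogram. A hedged least-squares sketch in Python follows; HistFactory models actually use a Poisson likelihood with constraint terms, which are omitted here, and the histogram contents are invented:

```python
import numpy as np

# two template histograms (signal, background) and "observed" data
sig = np.array([2.0, 10.0, 25.0, 10.0, 2.0])
bkg = np.array([20.0, 15.0, 10.0, 15.0, 20.0])
data = 1.5 * sig + 0.8 * bkg        # pretend these strengths are unknown

# model: data ≈ mu_s * sig + mu_b * bkg; solve for the strengths
T = np.column_stack([sig, bkg])
(mu_s, mu_b), *_ = np.linalg.lstsq(T, data, rcond=None)
```

Automatic differentiation enters when the templates themselves depend on nuisance parameters: gradients of the likelihood with respect to all parameters then come for free.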
Svenja Diekmann (Rheinisch Westfaelische Tech. Hoch. (DE))27/10/2022, 16:10
The usage of Deep Neural Networks (DNNs) as multi-classifiers is widespread in modern HEP analyses. In standard categorisation methods, the high-dimensional output of the DNN is often reduced to a one-dimensional distribution by exclusively passing the information about the highest class score to the statistical inference method. Correlations to other classes are hereby omitted.
Moreover, in...
Go to contribution page -
Nick Smith (Fermi National Accelerator Lab. (US))27/10/2022, 16:10
To support the needs of novel collider analyses such as long-lived particle searches, considerable computing resources are spent forward-copying data products from low-level data tiers like CMS AOD and MiniAOD to reduced data formats for end-user analysis tasks. In the HL-LHC era, it will be increasingly difficult to ensure online access to low-level data formats. In this talk, we present a...
Go to contribution page -
Brunella D'Anzi (Universita e INFN, Bari (IT))27/10/2022, 16:10
The large statistical fluctuations in the ionization energy loss by charged particles in gaseous detectors imply that many measurements are needed along the particle track to obtain a precise mean, and this represents a limit to the particle separation capabilities that should be overcome in the design of future colliders. The cluster counting technique (dN/dx)...
Go to contribution page -
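The counting step itself can be sketched in a few lines: merge electron arrival times closer than a resolution window and count the resulting groups. This is a hedged toy with an invented dead-time parameter; the actual analysis applies proper statistical tools to the full digitized waveform:

```python
def count_clusters(times, dead_time=5.0):
    """Count ionization clusters from hit arrival times (ns), merging hits
    closer than dead_time, which are taken to belong to one cluster."""
    times = sorted(times)
    n, last = 0, None
    for t in times:
        if last is None or t - last > dead_time:
            n += 1
        last = t
    return n

# toy waveform: three clusters, the middle one with two nearby electrons
times = [12.0, 40.0, 42.5, 88.0]
n_clusters = count_clusters(times)
```

The dN/dx observable is then this count divided by the track length, which fluctuates far less than the summed energy loss.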
Saransh Chopra (Cluster Innovation Centre, University of Delhi)27/10/2022, 16:10
Due to the massive nature of HEP data, performance has always been a factor in its analysis and processing. Languages like C++ are fast enough but are often challenging for beginners to grasp, and can be difficult to iterate with quickly in an interactive environment. On the other hand, the ease of writing code and the extensive library ecosystem make Python an enticing choice for data analysis....
Go to contribution page -
Felix Wagner (HEPHY Vienna)27/10/2022, 16:10
Cryogenic phonon detectors are used by direct detection dark matter experiments to achieve sensitivity to light dark matter particle interactions. Such detectors consist of a target crystal equipped with a superconducting thermometer. The temperature of the thermometer and the bias current in its readout circuit need careful optimization to achieve optimal sensitivity of the detector. This...
Go to contribution page -
Ali Marafi (Kuwait University (KW)), Andrea Bocci (CERN)27/10/2022, 16:10
In the past years the CMS software framework (CMSSW) has been extended to offload part of the physics reconstruction to NVIDIA GPUs. This can achieve a higher computational efficiency, but it adds extra complexity to the design of dedicated data centres and the use of opportunistic resources, like HPC centres. A possible solution to increase the flexibility of heterogeneous clusters is to...
Go to contribution page -
Andrea Di Luca (Universita degli Studi di Trento and INFN (IT))27/10/2022, 16:10
HEPD-02 is a new, upgraded version of the High Energy Particle Detector as part of a suite of instruments for the second mission of the China Seismo-Electromagnetic Satellite (CSES-02) to be launched in 2023. Designed and realized by the Italian Collaboration LIMADOU of the CSES program, it is optimized to identify fluxes of charged particles (mostly electrons and protons) and determine their...
Go to contribution page -
Michael Poat27/10/2022, 16:10
In real-time computing facilities, system, network, and security monitoring are core components needed to run efficiently and effectively. As there are many diverse functions that can go awry, such as load, network, process, and power issues, having a well-functioning monitoring system is imperative. In many facilities you will see the standard set of tools such as Ganglia, Grafana, Nagios, etc....
Go to contribution page -
Alexander Bogatskiy (Flatiron Institute, Simons Foundation)27/10/2022, 16:10
We hold these truths to be self-evident: that all physics problems are created unequal, that they are endowed with their unique data structures and symmetries, that among these are tensor transformation laws, Lorentz symmetry, and permutation equivariance. A lot of attention has been paid to the applications of common machine learning methods in physics experiments and theory. However, much...
Go to contribution page -
Gabor Biro (Wigner Research Centre for Physics (Wigner RCP) (HU))27/10/2022, 16:10
The ever-growing computing power necessary for the storage and data analysis of the high-energy physics experiments at CERN requires performance optimization of the existing and planned IT resources.
One of the main computing capacity consumers in the HEP software workflow is the data analysis. To optimize the resource usage, the concept of Analysis Facility (AF) for Run 3 has...
Go to contribution page -
Mohamed Hemdan27/10/2022, 16:10
Particle physics experiments spend large amounts of computational effort on Monte Carlo simulations. Due to the computational expense of simulations, they are often executed and stored in large distributed computing clusters. To lessen the computational cost, physicists have introduced alternatives to speed up the simulation. Generative Adversarial Networks (GANs) are an excellent...
Go to contribution page -
Josh Bendavid (CERN), Kenneth Long (Massachusetts Inst. of Technology (US))27/10/2022, 16:10
The unprecedented volume of data and Monte Carlo simulations at the HL-LHC will pose increasing challenges for data analysis both in terms of computing resource requirements as well as "time to insight". Precision measurements with present LHC data already face many of these challenges today. We will discuss performance scaling and optimization of RDataFrame for complex physics analyses,...
Go to contribution page -
Sophie Berkman27/10/2022, 16:10
Neutrino experiments that use liquid argon time projection chamber (LArTPC) detectors are growing bigger and expect to see more neutrinos with next generation beams, and therefore will require more computing resources to reach their physics goals of measuring CP violation in the neutrino sector and exploring anomalies. These resources can be used to their full capacity by incorporating...
Go to contribution page -
Gianluigi Chiarello27/10/2022, 16:10
Ultra-low mass and high granularity Drift Chambers fulfill the requirements for tracking systems of modern High Energy Physics experiments at the future high luminosity facilities (FCC-ee or CEPC).
We present how, in helium-based gas mixtures, by measuring the arrival times of each individual ionization cluster and by using proper statistical tools, it is possible to perform a bias...
Go to contribution page -
Marcel Rieger (Hamburg University (DE))27/10/2022, 16:10
In particle physics, workflow management systems are primarily used as tailored solutions in dedicated areas such as Monte Carlo production. However, physicists performing data analyses are usually required to steer their individual, complex workflows manually, frequently involving job submission in several stages and interaction with distributed storage systems by hand. This process is not...
Go to contribution page -
Zef Wolffs (Nikhef National institute for subatomic physics (NL))27/10/2022, 16:10
RooFit is a toolkit for statistical modeling and fitting, and together with RooStats it is used for measurements and statistical tests by most experiments in particle physics, particularly the LHC experiments. As the LHC program progresses, physics analyses become more computationally demanding. Therefore, recent RooFit developments were focused on performance optimization, in particular to...
Go to contribution page -
Hosein Karimi Khozani (IHEP)27/10/2022, 16:10
There are established classical methods to reconstruct particle tracks from recorded hits on the particle detectors. Current algorithms do this either by cuts on some features, such as the recorded time of the hits, or by a fitting process. This is potentially error-prone and resource-consuming. For high-noise events, these issues are more critical and the method might even fail. We have been...
Go to contribution page -
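The classical cut-based baseline the abstract refers to can be as simple as a time-window selection. The sketch below is a toy with hypothetical hit fields, shown only to make the baseline concrete:

```python
def select_hits(hits, t_min=0.0, t_max=50.0):
    """Classical baseline: keep hits whose recorded time falls in a window."""
    return [h for h in hits if t_min <= h["t"] <= t_max]

# hypothetical hits: an id and a recorded time in ns
hits = [{"id": 1, "t": 12.0}, {"id": 2, "t": 48.0}, {"id": 3, "t": 90.0}]
kept = select_hits(hits)
```

A hard window like this discards genuine late hits and keeps in-time noise, which is precisely the failure mode that motivates learned alternatives for high-noise events.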
Mr Jan Stephan27/10/2022, 16:10
The alpaka library is a header-only C++17 abstraction library for development across hardware accelerators (CPUs, GPUs, FPGAs). Its aim is to provide performance portability across accelerators through the abstraction (not hiding!) of the underlying levels of parallelism. In this talk we will show the concepts behind alpaka, how it is mapped to the various underlying hardware models, and show...
Go to contribution page -
Umit Sozbilir (Universita e INFN, Bari (IT))27/10/2022, 16:10
In recent years, new technologies and new approaches have been developed in academia and industry to face the necessity to both handle and easily visualize huge amounts of data, the so-called “big data”. The increasing volume and complexity of HEP data challenge the HEP community to develop simpler and yet powerful interfaces based on parallel computing on heterogeneous platforms. Good...
Go to contribution page -
Simone Pigazzini (ETH Zurich (CH))27/10/2022, 16:40
The CMS ECAL has achieved an impressive performance during LHC Run 1 and Run 2. In both runs, the ultimate performance has been reached after a lengthy calibration procedure required to correct ageing-induced changes in the response of the channels. The CMS ECAL will continue its operation far beyond the ongoing LHC Run 3: its barrel section will be upgraded for the LHC Phase-2 and it will be...
Go to contribution page -
Juan Carlos Criado (Durham University)27/10/2022, 16:40Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
Quantum annealing provides an optimization framework with the potential to outperform classical algorithms in finding the global minimum of non-convex functions. The availability of quantum annealers with thousands of qubits makes it possible today to tackle real-world problems using this technology. In this talk, I will review the quantum annealing paradigm and its use in the minimization of...
Go to contribution page -
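Quantum annealing requires dedicated hardware, but its classical cousin, simulated annealing, shows the same global-minimization setup and is easy to sketch. This is a hedged toy on a 1-D tilted double well, not the QUBO formulation used on real annealers:

```python
import math
import random

def f(x):
    """Tilted double well: global minimum near x = -1.43, local one near +1.37."""
    return x**4 - 4 * x**2 + 0.5 * x

random.seed(7)
x, fx = 3.0, f(3.0)                  # start far from either well
best_x, best_f = x, fx
T = 2.0                              # initial "temperature"
for _ in range(5000):
    x_new = x + random.gauss(0.0, 0.2)   # propose a local move
    f_new = f(x_new)
    # always accept downhill; accept uphill with Boltzmann probability
    if f_new < fx or random.random() < math.exp(-(f_new - fx) / T):
        x, fx = x_new, f_new
        if fx < best_f:
            best_x, best_f = x, fx
    T *= 0.999                       # geometric cooling schedule
```

The thermal uphill moves are what let the walker escape the shallow well; a quantum annealer replaces them with tunneling through the barrier.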
Manasvi Goyal (Delhi Technological University)27/10/2022, 16:40Track 1: Computing Technology for Physics ResearchOral
There are undeniable benefits to binding Python and C++ to take advantage of the best features of both languages. This is especially relevant to the HEP and other scientific communities that have invested heavily in C++ frameworks and are rapidly moving their data analyses to Python.
Version 2 of Awkward Array, a Scikit-HEP Python library, introduces a set of header-only C++...
Go to contribution page -
Bernhard Manfred Gruber (Technische Universitaet Dresden (DE)), Guilherme Amadio (CERN), Stephan Hageboeck (CERN)27/10/2022, 17:00Track 1: Computing Technology for Physics ResearchOral
Particle transport simulations are a cornerstone of high-energy physics (HEP), constituting almost half of the entire computing workload performed in HEP. To boost the simulation throughput and energy efficiency, GPUs as accelerators have been explored in recent years, further driven by the increasing use of GPUs on HPCs. The Accelerated demonstrator of electromagnetic Particle Transport...
Go to contribution page -
Diana McSpadden (Jefferson Lab)27/10/2022, 17:00
We have developed and implemented a machine learning based system to calibrate and control the GlueX Central Drift Chamber at Jefferson Lab, VA, in near real-time. The system monitors environmental and experimental conditions during data taking and uses those as inputs to a Gaussian process (GP) with learned prior. The GP predicts calibration constants in order to recommend a high voltage (HV)...
Go to contribution page -
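The regression backbone of such a system, the GP posterior mean, fits in a few lines of NumPy. This is a hedged sketch with made-up inputs and a zero-mean prior; the real system learns its prior from data and adds proper noise terms:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

# hypothetical monitoring inputs (e.g. pressure) and measured corrections
x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_train = np.sin(x_train)            # stand-in for calibration constants

K = rbf(x_train, x_train) + 1e-6 * np.eye(len(x_train))  # jitter for stability
alpha = np.linalg.solve(K, y_train)

x_test = np.array([1.5])
y_pred = rbf(x_test, x_train) @ alpha   # GP posterior mean at the new input
```

The same solve also yields a posterior variance, which is what lets a control system know when its recommendation is extrapolating.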
Andrea Pasquale (University of Milan)27/10/2022, 17:00Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
Over the last 20 years, thanks to the development of quantum technologies, it has been possible to deploy quantum algorithms and applications, that before were only accessible through simulation, on real quantum hardware. The current devices available are often referred to as noisy intermediate-scale quantum (NISQ) computers, and they require calibration routines in order to obtain...
Go to contribution page -
Andrea Bocci (CERN)27/10/2022, 17:20Track 1: Computing Technology for Physics ResearchOral
To achieve better computational efficiency and exploit a wider range of computing resources, the CMS software framework (CMSSW) has been extended to offload part of the physics reconstruction to NVIDIA GPUs, while the support for AMD and Intel GPUs is under development. To avoid the need to write, validate and maintain a separate implementation of the reconstruction algorithms for each...
Go to contribution page -
Abhirami Harilal (Carnegie-Mellon University (US))27/10/2022, 17:20
The online Data Quality Monitoring (DQM) system of the CMS electromagnetic calorimeter (ECAL) is a vital operations tool that allows ECAL experts to quickly identify, localize, and diagnose a broad range of detector issues that would otherwise hinder physics-quality data taking. Although the existing ECAL DQM system has been continuously updated to respond to new problems, it remains one step...
Go to contribution page -
Oriel Orphee Moira Kiss (Universite de Geneve (CH))27/10/2022, 17:20Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
The variational quantum eigensolver (VQE) is an algorithm to compute ground and excited state energy of quantum many-body systems. A key component of the algorithm and an active research area is the construction of a parametrized trial wavefunction – a so called variational ansatz. The wavefunction parametrization should be expressive enough, i.e. represent the true eigenstate of a quantum...
Go to contribution page -
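The structure of VQE, a parametrized trial state whose energy is minimized in a classical outer loop, can be sketched for a single qubit. The Hamiltonian H = Z + 0.5 X is an invented toy; real applications use many-qubit ansätze and quantum hardware for the expectation values:

```python
import numpy as np

# toy single-qubit Hamiltonian: H = Z + 0.5 X
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = Z + 0.5 * X
exact = np.linalg.eigvalsh(H)[0]         # exact ground-state energy

def ansatz(theta):
    """Variational trial state: Ry(theta) applied to |0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    psi = ansatz(theta)
    return psi @ H @ psi                 # <psi|H|psi>

thetas = np.linspace(0, 2 * np.pi, 1000)  # classical outer loop (brute force)
best = min(thetas, key=energy)
```

Here the one-parameter Ry ansatz is expressive enough to reach the ground state exactly; the research question in the talk is how to retain that property as the system grows.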
Ka Hei Martin Kwok (Fermi National Accelerator Lab. (US))27/10/2022, 17:40Track 1: Computing Technology for Physics ResearchOral
Utilizing the computational power of GPUs is one of the key ingredients to meet the computing challenges presented to the next generation of High-Energy Physics (HEP) experiments. Unlike CPUs, developing software for GPUs often involves using architecture-specific programming languages promoted by the GPU vendors and hence limits the platform that the code can run on. Various portability...
Go to contribution page -
Mr Tejas Shinde (Deggendorf Institute of Technology)27/10/2022, 17:40Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
The present work is based on research carried out within the framework of a cooperation between Intel Labs and the Deggendorf Institute of Technology, following the recent release of the Intel® Quantum SDK (Software Development Kit). Transport phenomena, e.g. heat transfer and mass transfer, are among the most challenging unsolved problems in computational physics due to the inherent nature of fluid...
Go to contribution page -
Axel Naumann (CERN), Lucia Silvestris (Universita e INFN, Bari (IT))28/10/2022, 09:15
-
Wahid Redjeb (Rheinisch Westfaelische Tech. Hoch. (DE))28/10/2022, 09:30Plenary
See https://indico.cern.ch/event/1106990/contributions/4998162/
Go to contribution page -
Svenja Diekmann (Rheinisch Westfaelische Tech. Hoch. (DE))28/10/2022, 09:40
See https://indico.cern.ch/event/1106990/contributions/5097014/
Go to contribution page -
Nick Smith (Fermi National Accelerator Lab. (US))28/10/2022, 09:50
See https://indico.cern.ch/event/1106990/contributions/4991353/
Go to contribution page -
Yasumichi Aoki (RIKEN)28/10/2022, 10:00Plenary
The Japanese flagship supercomputer Fugaku started its operation in early 2021.
After one and a half years of production runs it is producing some initial results in Lattice QCD applications, such as thermodynamics, heavy- and light-quark flavor physics, and hadron structures and interactions.
In this talk, we first touch on the basics of Fugaku and its software status.
Discussion is given on...
Go to contribution page -
Vassil Vasilev (Princeton University (US))28/10/2022, 10:30Plenary
Over the last decade the C++ programming language has evolved significantly into a safer, easier-to-learn, and better tool-supported general-purpose programming language, capable of extracting the last bit of performance from bare metal. The emergence of technologies such as LLVM and Clang has advanced tooling support for C++, and its ecosystem has grown qualitatively. C++ has an important role in...
Go to contribution page -
Raquel Pezoa Rivera (Federico Santa Maria Technical University (CL))28/10/2022, 11:30Plenary
-
Jennifer Ngadiuba (FNAL)28/10/2022, 11:50
-
Anke Biekoetter (IPPP Durham)28/10/2022, 12:10Plenary
-
Axel Naumann (CERN), David Britton (University of Glasgow (GB))28/10/2022, 12:30Plenary
-
Stefan Kesselheim (Forschungszentrum Jülich)Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
In solar power plants, temperatures sufficient for chemical processes or the generation of electrical power are created by reflecting sunlight with thousands of mirrors ("heliostats") onto a surface ("the receiver"). In operation, the temperature distribution on the receiver is critical for performance and must be optimized. The heliostats are never perfectly flat, as due to budget...
Go to contribution page -
Dr Antonio Augusto Alves Junior (KIT - IAP)
CORSIKA 8 is a Monte Carlo simulation framework to model ultra-high energy secondary particle cascades in astroparticle physics. This presentation is devoted to advances in the parallelization of CORSIKA 8, which is being developed in modern C++ and is designed to run on multi-threaded modern processors and accelerators.
Aspects such as out-of-the-order particle shower...
Go to contribution page -
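The out-of-order shower processing mentioned above rests on the fact that secondary particles evolve independently, so workers can pick them up in any order. A drastically simplified cascade sketch (splitting rule, threshold, and generation-by-generation scheduling are all invented; CORSIKA 8 itself is C++ with far richer physics):

```python
from concurrent.futures import ThreadPoolExecutor

def step(energy):
    """Advance one particle: split into two secondaries above a 1 GeV
    threshold, otherwise the particle is absorbed (toy physics)."""
    if energy >= 1.0:
        return [energy / 2.0, energy / 2.0]
    return []

def run_cascade(primary, n_workers=4):
    """Process the cascade generation by generation; within a generation
    particles are independent and handled by a worker pool in any order."""
    frontier, n_absorbed = [primary], 0
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        while frontier:
            results = list(pool.map(step, frontier))
            frontier = [e for secs in results for e in secs]
            n_absorbed += sum(1 for secs in results if not secs)
    return n_absorbed
```

An 8 GeV primary in this toy model halves four times and yields 16 absorbed sub-threshold particles, regardless of the order in which workers process them; that order-independence is what makes the parallelization strategy viable.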
Jason Poh (University of Chicago)
Modern cosmology surveys are producing data at rates that are soon to surpass our capacity for exhaustive analysis – in particular for the case of strong gravitational lenses. While the Dark Energy Survey may discover thousands of galaxy-scale strong lenses, the upcoming Legacy Survey of Space and Time (LSST) will find hundreds of thousands more. These large numbers of objects will make strong...
Go to contribution page -
Dr Ben Moews (Carnegie Mellon University & Pittsburgh Supercomputing Center)Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
Physics-informed neural networks (PINNs) have emerged as a coherent framework to build predictive models that combine statistical patterns with domain knowledge. The underlying notion is to enrich the optimization loss function with known equations to constrain the space of possible model solutions. Successful applications cover a variety of areas, including physics, astronomy and...
Go to contribution page -
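The loss-enrichment idea at the heart of PINNs, as summarized above, is a data-fit term plus a physics-residual term evaluated at collocation points. A one-parameter sketch for the ODE du/dt = -u with u(0) = 1 (exact solution u = exp(-t)); the model family, collocation grid, and grid-search optimizer are chosen purely for illustration, whereas real PINNs use neural networks with automatic differentiation:

```python
import math

ts = [0.1 * k for k in range(11)]     # collocation points on [0, 1]
data = [(0.0, 1.0)]                   # observed boundary condition u(0) = 1

def loss(a):
    """PINN-style loss for the model u(t) = exp(a*t)."""
    u = lambda t: math.exp(a * t)
    du = lambda t: a * math.exp(a * t)          # analytic derivative
    data_term = sum((u(t) - y) ** 2 for t, y in data)
    physics_term = sum((du(t) + u(t)) ** 2 for t in ts)  # residual of u' = -u
    return data_term + physics_term

# Crude grid search over the single model parameter.
best_a = min((-2.0 + 0.001 * k for k in range(4001)), key=loss)
```

With only a single boundary observation, the physics term alone pins the solution to a = -1; this is the constraint-of-the-solution-space effect the abstract describes.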
Raja Appuswamy (Eurecom)
-