-
Dr Jerome LAURET (Brookhaven National Laboratory)11/03/2024, 09:00Plenary
-
Prof. Carl Lejuez (Stony Brook University)11/03/2024, 09:15Plenary
-
Abhay Deshpande11/03/2024, 09:30Plenary
-
Filipp Akopyan (IBM)11/03/2024, 10:10Plenary
-
Vincent Conitzer (University of Oxford)11/03/2024, 11:20Plenary
-
Aleksandra Ciprijanovic (Fermi National Accelerator Laboratory), Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US)), Gordon Watts (University of Washington (US)), Vincent Conitzer (CMU/Oxford)11/03/2024, 12:20Plenary
-
Dr Tao Lin11/03/2024, 14:30
Based on the tracking experience at the LHC, the A Common Tracking Software (ACTS) project aims to provide open-source, experiment- and framework-independent software designed for modern computing architectures. It provides a set of high-level, performant track reconstruction tools that are agnostic to the details of the detection technologies and magnetic field configuration, and...
-
Xiaomei Zhang (Chinese Academy of Sciences (CN))11/03/2024, 14:30Track 1: Computing Technology for Physics ResearchOral
The Jiangmen Underground Neutrino Observatory (JUNO) is a multipurpose neutrino experiment. JUNO will start taking data in the fall of 2024, producing 2 PB of data each year. It is important that raw data are copied to permanent storage and distributed to the storage systems of multiple data centers in time for backup. To make them available for re-reconstruction among these data centers, raw data also need to be...
-
Andrea Pasquale (University of Milan)11/03/2024, 14:30Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
Quantum technologies are moving towards the development of novel hardware devices based on quantum bits (qubits). In parallel to the development of quantum devices, efficient simulation tools are needed in order to design and benchmark quantum algorithms and applications before deployment on quantum hardware. In this context, we present a first attempt to perform circuit-based quantum... -
Wen Guan (Brookhaven National Laboratory (US))11/03/2024, 14:50Track 1: Computing Technology for Physics ResearchOral
The growing complexity of high energy physics analysis often involves running a large number of different tools. This demands a multi-step data processing approach, with each step requiring different resources and carrying dependencies on preceding steps. It’s important and useful to have a tool to automate these diverse steps efficiently. With the Production and Distributed Analysis (PanDA)... -
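A dependency-ordered multi-step workflow like the one described can be sketched with Python's standard-library `graphlib`; the step names and the `run` stub below are hypothetical illustrations, not the PanDA API:

```python
from graphlib import TopologicalSorter

# Hypothetical analysis steps mapped to their prerequisites
# (illustration only, not the PanDA API).
steps = {
    "skim":      set(),
    "calibrate": {"skim"},
    "train":     {"skim"},
    "fit":       {"calibrate", "train"},
}

def run(name):
    # Stand-in for dispatching a step to its required resource.
    return f"done:{name}"

# Execute steps so every prerequisite finishes before its dependents;
# independent steps (calibrate, train) may come out in either order.
order = list(TopologicalSorter(steps).static_order())
results = {name: run(name) for name in order}
```

A real workload manager would additionally dispatch each step to a different resource type and retry failures; the topological ordering above is the part that encodes the inter-step dependencies.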
Manos Vourliotis (Univ. of California San Diego (US))11/03/2024, 14:50
Charged particle reconstruction is one of the most computationally heavy components of the full event reconstruction at Large Hadron Collider (LHC) experiments. Looking to the future, projections for the High-Luminosity LHC (HL-LHC) indicate superlinear growth in the computing resources required by single-threaded CPU algorithms, surpassing the computing resources that are expected to be...
-
Dr Xiaozhong Huang (IHEP)11/03/2024, 14:50Track 1: Computing Technology for Physics ResearchOral
High-energy physics relies on large and accurate samples of simulated events, but generating these samples with GEANT4 is CPU intensive. The ATLAS experiment has employed generative adversarial networks (GANs) for fast shower simulation, which is an important approach to solving the problem. Quantum GANs, leveraging the advantages of quantum computing, have the potential to outperform standard...
-
Natalia Diana Szczepanek (CERN)11/03/2024, 15:10Track 1: Computing Technology for Physics ResearchOral
As the scientific community continues to push the boundaries of computing capabilities, there is a growing responsibility to address the associated energy consumption and carbon footprint. This responsibility extends to the Worldwide LHC Computing Grid (WLCG), encompassing over 170 sites in 40 countries, supporting vital computing, disk, and tape storage for LHC experiments. Ensuring efficient...
-
Zihan Zhao (Univ. of California San Diego (US))11/03/2024, 15:10
In particle physics, machine learning algorithms traditionally face a limitation due to the lack of truth labels in real data, restricting training to only simulated samples. This study addresses this challenge by employing self-supervised learning, which enables the utilization of vast amounts of unlabeled real data, thereby facilitating more effective training.
Our project is particularly...
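As a toy illustration of the self-supervised idea described above, a contrastive (InfoNCE-style) loss rewards the embeddings of two augmented views of the same unlabeled event for being closer to each other than to other events. Everything below is synthetic and a sketch of the general technique, not this project's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Toy embeddings of two augmented "views" of the same 8 unlabeled events:
# the second view is a slight perturbation of the first (a positive pair).
z1 = normalize(rng.normal(size=(8, 16)))
z2 = normalize(z1 + 0.1 * rng.normal(size=(8, 16)))

def info_nce(z1, z2, tau=0.1):
    # Similarity of every view-1 embedding to every view-2 embedding.
    sim = z1 @ z2.T / tau
    # Diagonal entries are the positive pairs; the rest act as negatives.
    logits = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

loss = info_nce(z1, z2)
```

No truth labels appear anywhere: the "label" is simply the pairing between augmented views, which is why vast amounts of real data become usable for training.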
-
Dr Gianluca Bianco (Universita e INFN, Bologna (IT)), Simone Gasperini (Universita e INFN, Bologna (IT))11/03/2024, 15:10Track 1: Computing Technology for Physics ResearchOral
In High-Energy Physics (HEP) experiments, each measurement apparatus exhibits a unique signature in terms of detection efficiency, resolution, and geometric acceptance. The overall effect is that the distribution of each observable measured in a given physical process can be smeared and biased. Unfolding is the statistical technique employed to correct for this distortion and restore the...
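As a minimal illustration of the general unfolding idea, here is a sketch of iterative Bayesian (D'Agostini-style) unfolding with an invented 3-bin response matrix; this is a textbook baseline, not the method presented in this contribution:

```python
import numpy as np

# Toy response matrix R[i, j] = P(measured bin i | true bin j);
# each column sums to 1 (unit efficiency assumed for simplicity).
R = np.array([[0.80, 0.15, 0.00],
              [0.20, 0.70, 0.20],
              [0.00, 0.15, 0.80]])
true = np.array([100.0, 300.0, 200.0])
measured = R @ true  # idealized smeared spectrum, no statistical noise

def ibu(measured, R, n_iter=100):
    # Iterative Bayesian unfolding starting from a flat prior.
    t = np.full(R.shape[1], measured.sum() / R.shape[1])
    for _ in range(n_iter):
        folded = R @ t                       # fold current estimate
        t = t * (R.T @ (measured / folded))  # multiplicative update
    return t

unfolded = ibu(measured, R)
```

For noiseless input the iteration converges toward the true spectrum while conserving the total number of events; with real data one stops early to regularize against statistical fluctuations.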
-
Dr Vardan Gyurjyan11/03/2024, 15:30Track 1: Computing Technology for Physics ResearchOral
Implementing a physics data processing application is relatively straightforward with the use of current containerization technologies and container image runtime services, which are prevalent in most high-performance computing (HPC) environments. However, the process is complicated by the challenges associated with data provisioning and migration, impacting the ease of workflow migration and...
-
Kilian Lieret (Princeton University)11/03/2024, 15:30
The sub-optimal scaling of traditional tracking algorithms based on combinatorial Kalman filters causes performance concerns for future high-pileup experiments like the High Luminosity Large Hadron Collider. Graph Neural Network-based tracking approaches have been shown to significantly improve scaling at similar tracking performance levels. Rather than employing the popular edge...
-
Matteo Robbiati (Università degli Studi e INFN Milano (IT))11/03/2024, 15:30Track 1: Computing Technology for Physics ResearchOral
The development of quantum computers as tools for computation and data analysis is continually increasing, even in the field of machine learning, where numerous routines and algorithms have been defined, leveraging the high expressiveness of quantum systems to process information. In this context, one of the most stringent limitations is represented by noise. In fact, the devices currently...
-
Michael Goodrich11/03/2024, 15:50Track 1: Computing Technology for Physics ResearchOral
Thomas Jefferson National Accelerator Facility (JLab) has partnered with Energy Sciences Network (ESnet) to define and implement an edge to compute cluster data processing computational load balancing architecture. The ESnet-JLab FPGA Accelerated Transport (EJFAT) architecture focuses on FPGA acceleration to address compression, fragmentation, UDP packet destination redirection (Network...
-
Volodymyr Svintozelskyi (IFIC - Valencia)11/03/2024, 15:50
A new algorithm, called "Downstream", has been developed at LHCb which is able to reconstruct and select very displaced vertices in real time at the first level of the trigger (HLT1). It makes use of the Upstream Tracker (UT) and the Scintillator Fiber detector (SciFi) of LHCb and it is executed on GPUs inside the Allen framework. In addition to an optimized strategy, it utilizes a Neural...
-
Beojan Stanislaus (Lawrence Berkeley National Lab. (US))11/03/2024, 16:15
With the coming luminosity increase at the High Luminosity LHC, the ATLAS experiment will find itself facing a significant challenge in processing the hundreds of petabytes of data that will be produced by the detector.
The computing tasks faced by the LHC experiments such as ATLAS are primarily throughput limited, and our frameworks are optimized to run these on High Throughput Computing...
-
Joshua Falco Beirer (CERN)11/03/2024, 16:15
As we are approaching the high-luminosity era of the LHC, the computational requirements of the ATLAS experiment are expected to increase significantly in the coming years. In particular, the simulation of MC events is immensely computationally demanding, and their limited availability is one of the major sources of systematic uncertainties in many physics analyses. The main bottleneck in the...
-
Fang-Ying Tsai (Stony Brook University (US))11/03/2024, 16:15
In response to the rising CPU consumption and storage demands, as we enter a new phase in particle physics with the High-Luminosity Large Hadron Collider (HL-LHC), our efforts are centered around enhancing the CPU processing efficiency of reconstruction within the ATLAS inner detector. The track overlay approach involves pre-reconstructing pileup tracks and subsequently running reconstruction...
-
Joshua Falco Beirer (Georg August Universitaet Goettingen (DE) / CERN)11/03/2024, 16:15
Simulation of the detector response is a major computational challenge in modern High Energy Physics experiments, as for example it accounts for about two fifths of the total ATLAS computing resources. Among simulation tasks, calorimeter simulation is the most demanding, taking up about 80% of resource use for simulation and expected to increase in the future. Solutions have been developed to...
-
Manasvi Goyal (Princeton University (US))11/03/2024, 16:15
In some fields, scientific data formats differ across experiments due to specialized hardware and data acquisition systems. Researchers need to develop, document, and maintain specific analysis software to interact with these data formats. These software are often tightly coupled with a particular data format. This proliferation of custom data formats has been a prominent challenge for small...
-
Lei Wang (Institute of High Energy Physics)11/03/2024, 16:15
The Institute of High Energy Physics' computing platform includes isolated grid sites and local clusters. Grid sites manage grid jobs from international experiments, including ATLAS, CMS, LHCb, Belle II, and JUNO, while the local cluster concurrently processes data from experiments led by IHEP such as BES, JUNO, and LHAASO. These resources have distinct configurations, such as network segments, file...
-
Marco Lorusso (Universita Di Bologna (IT))11/03/2024, 16:15
Among human activities that contribute to the environmental footprint of our species, computational footprint, i.e. the environmental impact that results from the employment of computing resources, might be one of the most underappreciated ones. While many modern scientific discoveries have been obtained thanks to the availability of more and more performing computers and algorithms, the...
-
Rui Wang (Argonne National Laboratory (US))11/03/2024, 16:15
The computing challenges in collecting, storing, reconstructing, and analyzing the colossal volume of data produced by the ATLAS experiment and producing similar numbers of simulated Monte Carlo (MC) events put formidable requirements on the computing resources of the ATLAS collaboration. ATLAS currently expends around 40% of its CPU resources on detector simulation, in which half of the...
-
Kyongmin Yeo (IBM Research)11/03/2024, 16:15
Here, we present deep generative models for the fast simulation of calorimeter shower events. Using a three-dimensional, cylindrical scoring mesh, a shower event is parameterized by the total energy deposited on each cell of the scoring mesh. Due to the three-dimensional geometry, simulating a shower event requires learning a complex probability distribution of $O(10^3) \sim O(10^4)$...
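The cylindrical-mesh parameterization can be illustrated by binning toy energy deposits into (depth, radius, azimuth) cells; all shapes and numbers below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy shower: point-like energy deposits at (z, r, phi) inside a cylinder.
n = 5000
z   = rng.exponential(scale=2.0, size=n).clip(max=9.99)    # depth, 0..10
r   = np.abs(rng.normal(scale=1.0, size=n)).clip(max=2.99)  # radius, 0..3
phi = rng.uniform(0, 2 * np.pi, size=n)
e   = rng.exponential(scale=0.01, size=n)                   # deposit energies

# Bin deposits onto a (depth x radius x azimuth) scoring mesh; the per-cell
# total energies form the high-dimensional vector a generative model learns.
mesh, _ = np.histogramdd(
    np.stack([z, r, phi], axis=1),
    bins=(10, 3, 16),
    range=((0, 10), (0, 3), (0, 2 * np.pi)),
    weights=e,
)
```

With 10 x 3 x 16 = 480 cells this toy mesh is already a few-hundred-dimensional distribution; the $O(10^3) \sim O(10^4)$ cells quoted in the abstract only sharpen the challenge.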
-
Tianzi Song (Sun Yat-Sen University (CN))11/03/2024, 16:15
Detector visualization is one of the important problems in high energy physics (HEP) software. At present, the description of detectors in HEP is complicated. Professional industry visualization platforms such as Unity have the most advanced visualization capabilities and technologies, which can help us achieve the visualization of detectors. This work aims to find an automated...
-
Alexander Zoechbauer (CERN)11/03/2024, 16:15
The interTwin project, funded by the European Commission, is at the forefront of leveraging 'Digital Twins' across various scientific domains, with a particular emphasis on physics and earth observation. Two of the most advanced use-cases of interTwin are event generation for particle detector simulation at CERN as well as the climate-based Environmental Modelling and Prediction Platform...
-
Kalana Wijethunga (University of Moratuwa (LK))11/03/2024, 16:15
In the realm of Grid middleware, efficient job matching is paramount, ensuring that tasks are seamlessly assigned to the most compatible worker nodes. This process hinges on meticulously evaluating a worker node's suitability for the given task, necessitating a thorough assessment of its infrastructure characteristics. However, adjusting job matching parameters poses a significant challenge...
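The core of job matching can be sketched as filtering worker nodes by a job's resource requirements and ranking the survivors; the node list and attribute names below are hypothetical, not an actual middleware schema:

```python
# Hypothetical, simplified grid job matching (not real middleware code):
# pick the worker node whose advertised resources satisfy the job.
nodes = [
    {"name": "wn01", "cores": 8,  "mem_gb": 16, "gpu": False},
    {"name": "wn02", "cores": 32, "mem_gb": 64, "gpu": True},
    {"name": "wn03", "cores": 16, "mem_gb": 32, "gpu": False},
]

def match(job, nodes):
    # Keep only nodes whose infrastructure satisfies every requirement.
    ok = [n for n in nodes
          if n["cores"] >= job["cores"]
          and n["mem_gb"] >= job["mem_gb"]
          and (not job["gpu"] or n["gpu"])]
    # Prefer the least oversized node, keeping large slots free.
    return min(ok, key=lambda n: (n["cores"], n["mem_gb"]), default=None)

job = {"cores": 8, "mem_gb": 24, "gpu": False}
best = match(job, nodes)
```

Production matchmaking adds many more attributes and tunable ranking expressions, which is exactly where the parameter-adjustment challenge mentioned in the abstract arises.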
-
Tao Lin (Chinese Academy of Sciences (CN))11/03/2024, 16:15
The Jiangmen Underground Neutrino Observatory (JUNO) is currently under construction in southern China, with the primary goals of determining the neutrino mass ordering and precisely measuring oscillation parameters. The data processing in JUNO is challenging. When JUNO starts data taking in late 2024, the expected event rate is about 1 kHz, which is about 31.5 billion...
-
Aimilianos Koulouris (CERN)11/03/2024, 16:15
The ATLAS experiment at the Large Hadron Collider (LHC) operated very successfully in the years 2008 to 2023. The ATLAS Control and Configuration (CC) software is the core part of the ATLAS Trigger and DAQ system; it comprises all the software required to configure and control ATLAS data taking. It provides, essentially, the glue that holds the various ATLAS sub-systems together. During recent...
-
Lei Wang (Institute of High Energy Physics)11/03/2024, 16:15
The new-generation light sources, such as the High Energy Photon Source (HEPS) under construction, are one of the advanced experimental platforms that facilitate breakthroughs in fundamental scientific research. These large scientific installations are characterized by numerous experimental beam lines (more than 90 at HEPS), rich research areas, and complex experimental analysis methods,...
-
Jingshu Li (Sun Yat-Sen University (CN))11/03/2024, 16:15
In high-energy physics experiments, the software's visualization capabilities are crucial, aiding in detector design, assisting with offline data processing, offering potential for improving physics analysis, among other benefits. Detailed detector geometries and architectures, formatted in GDML or ROOT, are integrated into platforms like Unity for three-dimensional modeling. In this study,...
-
Lei Wang (Institute of High Energy Physics)11/03/2024, 16:15
The HXMT satellite is China's first space astronomy satellite. It is a space-based X-ray telescope capable of broadband and large-field X-ray sky surveys, as well as the study of high-energy celestial objects such as black holes and neutron stars, focusing on short-term temporal variations and broadband energy spectra. It also serves as a highly sensitive all-sky monitor for gamma-ray bursts....
-
Mr Digvijaysinh Ajarekar (Deggendorf Institute of Technology)11/03/2024, 16:50Track 1: Computing Technology for Physics ResearchOral
Artificial intelligence has been used to identify real and fake art, and different machine learning models have been trained and employed to classify artworks with acceptable accuracy. As a potentially revolutionary technology, quantum computing opens a brand-new perspective on this area. Using Quantum Machine Learning (QML), the current work explores the utilization of Normal...
-
Alaettin Serhan Mete (Argonne National Laboratory (US))11/03/2024, 16:50Track 1: Computing Technology for Physics ResearchOral
The ATLAS experiment at CERN’s Large Hadron Collider has been using ROOT TTree for over two decades to store all of its processed data. The ROOT team has developed a new I/O subsystem, called RNTuple, that will replace TTree in the near future. RNTuple is designed to adopt various technological advancements that happened in the last decade and be more performant from both the computational and...
-
Haiwang Yu11/03/2024, 16:50
Liquid Argon Time Projection Chamber, or LArTPC, is a scalable tracking calorimeter featuring rich event topology information. It provides the core detector technology for many current and next-generation large-scale neutrino experiments, such as DUNE and the SBN program. In neutrino experiments, LArTPC faces numerous challenges in both hardware and software to achieve optimum performance. On...
-
Vincent R. Pascuzzi (IBM Quantum)11/03/2024, 17:10Track 1: Computing Technology for Physics ResearchOral
The rise of parallel computing, in particular graphics processing units (GPU), and machine learning and artificial intelligence has led to unprecedented computational power and analysis techniques. Such technologies have been especially fruitful for theoretical and experimental physics research where the embarrassingly parallel nature of certain workloads — e.g., Monte Carlo event generation,...
-
Wing Yan Ma (SDU)11/03/2024, 17:10
The Jiangmen Underground Neutrino Observatory (JUNO), located in Southern China, is a multi-purpose neutrino experiment that consists of a 20-kton liquid scintillator detector. The primary goal of the experiment is to determine the neutrino mass ordering (NMO) and measure other neutrino oscillation parameters to sub-percent precision. Atmospheric neutrinos are sensitive to NMO via matter...
-
Florine de Geus (CERN)11/03/2024, 17:10Track 1: Computing Technology for Physics ResearchOral
Inspired by over 25 years of experience with the ROOT TTree I/O subsystem and motivated by modern hard- and software developments as well as an expected tenfold data volume increase with the HL-LHC, RNTuple is currently being developed as ROOT's new I/O subsystem. Its first production release is foreseen for late 2024, and various experiments have begun working on the integration of RNTuple...
-
Jay Chan (Lawrence Berkeley National Lab. (US))11/03/2024, 17:30
Track reconstruction is a crucial task in particle experiments and is traditionally very computationally expensive due to its combinatorial nature. Many recent developments have explored new tracking algorithms in order to improve scalability in preparation for the HL-LHC. In particular, graph neural networks (GNNs) have emerged as a promising approach due to the graph nature of particle...
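The first stage of GNN-based tracking, building a hit graph, can be sketched by connecting hits on adjacent detector layers that are close in azimuth; the toy geometry below is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy detector hits: (layer, phi) for a handful of gently curving tracks.
n_tracks, n_layers = 4, 5
phi0 = rng.uniform(0, 2 * np.pi, size=n_tracks)
hits = [(layer, (phi0[t] + 0.05 * layer) % (2 * np.pi))
        for t in range(n_tracks) for layer in range(n_layers)]

# Candidate edges join hits on adjacent layers with small phi difference;
# this graph is what an edge-classifying GNN would then score.
def build_edges(hits, dphi_max=0.1):
    edges = []
    for i, (li, pi) in enumerate(hits):
        for j, (lj, pj) in enumerate(hits):
            dphi = abs((pi - pj + np.pi) % (2 * np.pi) - np.pi)
            if lj == li + 1 and dphi < dphi_max:
                edges.append((i, j))
    return edges

edges = build_edges(hits)
```

The combinatorial cost lives in this edge-building step; real pipelines replace the double loop with geometric lookup structures, and the GNN's job is then to keep the true track segments and reject the accidental ones.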
-
Marta Czurylo (CERN)11/03/2024, 17:30Track 1: Computing Technology for Physics ResearchOral
As the High-Luminosity LHC era is approaching, the work on the next-generation ROOT I/O subsystem, embodied by the RNTuple, is advancing fast with demonstrated implementations of the LHC experiments' data models and clear performance improvements over the TTree. Part of the RNTuple development is to guarantee no change in the RDataFrame analysis flow despite the change in the underlying data...
-
Edoardo Pedicillo11/03/2024, 17:30Track 1: Computing Technology for Physics ResearchOral
Over the last 20 years, thanks to the development of quantum technologies, it has been possible to deploy on real quantum hardware quantum algorithms and applications that were previously only accessible through simulation. The devices currently available are often referred to as noisy intermediate-scale quantum (NISQ) computers, and they require calibration routines in order to obtain... -
Amit Bashyal11/03/2024, 17:50Track 1: Computing Technology for Physics ResearchOral
As the role of High Performance Computing (HPC) increases in High Energy Physics (HEP) experiments, the experiments will have to adopt HPC-friendly storage formats and data models to efficiently utilize these resources. In its first phase, the HEP Center for Computational Excellence (HEP-CCE) has demonstrated that complex HEP data products can be stored in HPC-native storage...
-
Hideki Okawa (Chinese Academy of Sciences (CN))11/03/2024, 17:50
Tracking is one of the most crucial components of reconstruction in collider experiments. It is known for its high consumption of computing resources, and various innovations have been introduced to address this. Future colliders such as the High-Luminosity Large Hadron Collider (HL-LHC) will face an enormously increased demand for computing resources. Usage of cutting-edge artificial...
-
12/03/2024, 09:00Plenary
-
Siavash Golkar (Flatiron Institute)12/03/2024, 09:15Plenary
-
Nhan Tran (Fermi National Accelerator Lab. (US))12/03/2024, 09:45Plenary
-
Carole-Jean Wu (MLPerf)12/03/2024, 10:15Plenary
-
Marco Lorusso (Universita Di Bologna (IT))12/03/2024, 11:30Track 1: Computing Technology for Physics ResearchOral
In recent years, the scope of applications for Machine Learning, particularly Artificial Neural Network algorithms, has experienced an exponential expansion. This surge in versatility has uncovered new and promising avenues for enhancing data analysis in experiments conducted at the Large Hadron Collider at CERN. The integration of these advanced techniques has demonstrated considerable...
-
Theo Heimel (Heidelberg University)12/03/2024, 11:30Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
Theory predictions for the LHC require precise numerical phase-space integration and generation of unweighted events. We combine machine-learned multi-channel weights with a normalizing flow for importance sampling to improve classical methods for numerical integration. By integrating buffered training for potentially expensive integrands, VEGAS initialization, symmetry-aware channels, and...
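The role of a well-matched channel density in importance sampling can be sketched with a one-dimensional toy: a peaked integrand sampled from a Cauchy-like proposal that stands in for a learned channel weight or normalizing-flow density (an illustration of the general technique, not the presented framework):

```python
import numpy as np

rng = np.random.default_rng(3)

# Peaked integrand on [0, 1], mimicking a propagator-like structure.
def f(x):
    return 1.0 / ((x - 0.5) ** 2 + 0.01) + 1.0

# Proposal matched to the peak: a Cauchy truncated to [0, 1], standing in
# for a machine-learned channel density.
gamma = 0.1
a, b = np.arctan(-0.5 / gamma), np.arctan(0.5 / gamma)

def sample_q(n):
    u = rng.uniform(size=n)
    return 0.5 + gamma * np.tan(a + u * (b - a))  # inverse-CDF sampling

def q_pdf(x):
    return 1.0 / ((b - a) * gamma * (1 + ((x - 0.5) / gamma) ** 2))

n = 200_000
x = sample_q(n)
w = f(x) / q_pdf(x)               # importance weights
est, err = w.mean(), w.std() / np.sqrt(n)
```

The closer the proposal tracks the integrand, the flatter the weights `w` and the smaller the variance; that is exactly what learned multi-channel weights and flows buy over uniform sampling, and flat weights also make unweighting efficient.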
-
Philip Coleman Harris (Massachusetts Inst. of Technology (US))12/03/2024, 11:30
The emergence of models pre-trained on simple tasks and then fine-tuned to solve many downstream tasks has become a mainstay for the application of deep learning within a large variety of domains. The models, often referred to as foundation models, aim, through self-supervision, to simplify complex tasks by extracting the most salient features of the data through a careful choice of...
-
Dr Vincenzo Padulano (CERN)12/03/2024, 11:50Track 1: Computing Technology for Physics ResearchOral
Recently, machine learning has established itself as a valuable tool for researchers to analyze their data and draw conclusions in various scientific fields, such as High Energy Physics (HEP). Commonly used machine learning libraries, such as Keras and PyTorch, might provide functionality for inference, but they only support their own models, are constrained by heavy dependencies and often...
-
Xiangyang Ju (Lawrence Berkeley National Lab. (US))12/03/2024, 11:50
Particle detectors play a pivotal role in the field of high-energy physics. Traditionally, detectors are characterized by their responses to various particle types, gauged through metrics such as energy or momentum resolutions. While these characteristics are instrumental in determining particle properties, they fall short of addressing the initial challenge of reconstructing particles.
We...
-
Javier Mariño Villadamigo12/03/2024, 11:50Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
Unfolding is a transformative method that is key to analyzing LHC data. More recently, modern machine learning tools have enabled its implementation in an unbinned and high-dimensional manner. The basic techniques to perform unfolding include event reweighting, direct mapping between distributions, and conditional phase space sampling, each of them providing a way to unfold LHC data accounting for all...
-
Matthias Vigl (Technische Universitat Munchen (DE))12/03/2024, 12:10
In this work we demonstrate that significant gains in performance and data efficiency can be achieved moving beyond the standard paradigm of sequential optimization in High Energy Physics (HEP). We conceptually connect HEP reconstruction and analysis to modern machine learning workflows such as pretraining, finetuning, domain adaptation and high-dimensional embedding spaces and quantify the...
-
Daniel Thomas Murnane (Lawrence Berkeley National Lab. (US))12/03/2024, 12:10Track 1: Computing Technology for Physics ResearchOral
Track reconstruction is an essential element of modern and future collider experiments, including within the ATLAS detector. The HL-LHC upgrade of the ATLAS detector brings an unprecedented tracking challenge, both in terms of number of silicon hit cluster readouts, and throughput required for both high level trigger and offline track reconstruction. Traditional track reconstruction techniques...
-
Nathan Huetsch (Heidelberg University, ITP Heidelberg)12/03/2024, 12:10Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
The matrix element method is the LHC inference method of choice for limited statistics. We present a dedicated machine learning framework, based on efficient phase-space integration, a learned acceptance and transfer function. It is based on a choice of INN and diffusion networks, and a transformer to solve jet combinatorics. Bayesian networks allow us to capture network uncertainties,...
-
Anna Hallin (University of Hamburg)12/03/2024, 12:30
Foundation models have revolutionized natural language processing, demonstrating exceptional capabilities in handling sequential data. Their ability to generalize across tasks and datasets offers promising applications in high energy physics (HEP). However, collider physics data, unlike language, involves both continuous and discrete data types, including four-vectors, particle IDs, charges,...
-
Víctor Bresó Pla (University of Heidelberg)12/03/2024, 12:30Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
One of the biggest obstacles for machine learning algorithms that predict amplitudes from phase space points is the scaling with the number of interacting particles. The more particles there are in a given process, the more challenging it is for the model to provide accurate predictions for the matrix elements. We present a deep learning framework that is built to reduce the impact of this...
-
Felipe Luan Souza De Almeida (Syracuse University (US))12/03/2024, 12:30Track 1: Computing Technology for Physics ResearchOral
The LHCb experiment at the Large Hadron Collider (LHC) is designed to perform high-precision measurements of heavy-hadron decays, which requires the collection of large data samples and a good understanding and suppression of multiple background sources. Both factors are challenged by a five-fold increase in the average number of proton-proton collisions per bunch crossing, corresponding to a...
-
Nilotpal Kakati (Weizmann Institute of Science (IL))12/03/2024, 12:50
Accurately reconstructing particles from detector data is a critical challenge in experimental particle physics. The detector's spatial resolution, specifically the calorimeter's granularity, plays a crucial role in determining the quality of the particle reconstruction. It also sets the upper limit for the algorithm's theoretical capabilities. Super-resolution techniques can be explored as a...
-
Wahid Bhimji12/03/2024, 12:50Track 1: Computing Technology for Physics ResearchOral
The Fair Universe project is building a large-compute-scale AI ecosystem for sharing datasets, training large models and hosting challenges and benchmarks. Furthermore, the project is exploiting this ecosystem for an AI challenge series focused on minimizing the effects of systematic uncertainties in High-Energy Physics (HEP), and on predicting accurate confidence intervals. This talk will...
-
Mohammad Atif (Brookhaven National Laboratory)12/03/2024, 12:50Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
In atmospheric physics, particle-resolved direct numerical simulation (PR-DNS) models constitute an important tool to study aerosol-cloud-turbulence interactions, which are central to the prediction of weather and climate. They resolve the smallest turbulent eddies as well as track the development and motion of individual particles [1,2]. PR-DNS is expected to complement experimental and...
-
13/03/2024, 09:00Plenary
-
Ilaria Luise (CERN)13/03/2024, 09:15Plenary
-
Tom Gibbs (Nvidia)13/03/2024, 09:45Plenary
-
Steve Helvie (Open Compute Project)13/03/2024, 10:15Plenary
-
Stefano Carrazza (CERN)13/03/2024, 11:30Plenary
-
J Michael Williams (Massachusetts Inst. of Technology (US))13/03/2024, 12:00Plenary
-
Marco Meyer-Conde (Osaka Metropolitan University (JP), Tokyo City University (JP))13/03/2024, 12:30Plenary
-
Jay Chan (Lawrence Berkeley National Lab. (US))13/03/2024, 14:30
Hadronization is a critical step in the simulation of high-energy particle and nuclear physics experiments. As there is no first principles understanding of this process, physically-inspired hadronization models have a large number of parameters that are fit to data. We propose an alternative approach that uses deep generative models, which are a natural replacement for classical techniques,...
-
Dr Charles Leggett (Lawrence Berkeley National Lab (US))13/03/2024, 14:30Track 1: Computing Technology for Physics ResearchOral
GPUs have become the dominant source of computing power for HPCs and are increasingly being used across the High Energy Physics computing landscape for a wide variety of tasks. Though NVIDIA is currently the main provider of GPUs, AMD and Intel are rapidly increasing their market share. As a result, programming using a vendor-specific language such as CUDA can significantly reduce deployment...
-
Stefano Carrazza (CERN)13/03/2024, 14:30Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
We demonstrate some advantages of a top-down approach in the development of hardware-accelerated code by presenting the PDFFlow-VegasFlow-MadFlow software suite. We start with an autogenerated hardware-agnostic Monte Carlo generator, which is parallelized along the event axis. This allows us to take advantage of the parallelizable nature of Monte Carlo integrals even if we do not have control of...
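Parallelization along the event axis can be illustrated with a plain NumPy sketch in which the integrand is evaluated for all events in one batched call; this is a stand-in for the hardware-accelerated generators discussed, not their actual code:

```python
import numpy as np

rng = np.random.default_rng(4)

def integrand(x):
    # x has shape (n_events, n_dim); a cheap stand-in for a matrix element.
    return np.exp(-np.sum(x ** 2, axis=1))

# All events are drawn and evaluated at once along the event axis, with no
# per-event Python loop, the same strategy GPU backends exploit.
n_events, n_dim = 1_000_000, 3
x = rng.uniform(size=(n_events, n_dim))   # unit hypercube
vals = integrand(x)                        # one batched call
est = vals.mean()                          # Monte Carlo estimate
err = vals.std() / np.sqrt(n_events)       # statistical uncertainty
```

Because each event is independent, the batched evaluation maps directly onto SIMD units or GPU threads; the generator code never needs to know which backend executes the array operations.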
-
Davide Maria Tagliabue13/03/2024, 14:50Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
In this talk, I describe how the nested soft-collinear subtraction scheme can be used to compute NNLO QCD corrections to the production of an arbitrary number of gluonic jets in hadron collisions. In particular, I show how to identify NLO-like recurring structures of infrared subtraction terms that in principle can be applied to any partonic process. As an example, I demonstrate the...
Go to contribution page -
Andrea Valenzuela Ramirez (CERN)13/03/2024, 14:50Track 1: Computing Technology for Physics ResearchOral
The CMSSW framework has been instrumental in data processing, simulation, and analysis for the CMS detector at CERN. It is expected to remain a key component of the CMS Offline Software for the foreseeable future. Consequently, CMSSW is under continuous development, with its integration system evolving to incorporate modern tools and keep pace with the latest software improvements in the High...
Go to contribution page -
Dmitrii Torbunov13/03/2024, 14:50
High Energy Physics (HEP) experiments rely on scientific simulation to develop reconstruction algorithms. Despite the remarkable fidelity of modern simulation frameworks, residual discrepancies between simulated and real data introduce a challenging domain shift problem. The existence of this issue raises significant concerns regarding the feasibility of implementing Deep Learning (DL) methods...
Go to contribution page -
Beojan Stanislaus (Lawrence Berkeley National Lab. (US))13/03/2024, 15:10Track 1: Computing Technology for Physics ResearchOral
High Performance Computing resources are increasingly prominent in the plans of funding agencies, and the tendency of these resources is now to rely primarily on accelerators such as GPUs for the majority of their FLOPS. As a result, High Energy Physics experiments must make maximum use of these accelerators in our pipelines to ensure efficient use of the resources available to us.
The...
Go to contribution page -
Dr Gaia Grosso (IAIFI, MIT)13/03/2024, 15:10
Statistical anomaly detection empowered by AI is a subject of growing interest at collider experiments, as it provides multidimensional and highly automatized solutions for signal-agnostic data quality monitoring, data validation and new physics searches.
AI-based anomaly detection techniques mainly rely on unsupervised or semi-supervised machine learning tasks. One of the most crucial and...
Go to contribution page -
Christian Gutschow (UCL (UK))13/03/2024, 15:10Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
The success of the LHC physics programme relies heavily on high-precision calculations. However, the increased computational complexity for high-multiplicity final states has been a growing cause for concern, with the potential to evolve into a debilitating bottleneck in the foreseeable future. We present a flexible and efficient approach for the simulation of collider events with multi-jet...
Go to contribution page -
Ianna Osborne (Princeton University)13/03/2024, 15:30Track 1: Computing Technology for Physics ResearchOral
In the realm of scientific computing, both Julia and Python have established themselves as powerful tools. Within the context of High Energy Physics (HEP) data analysis, Python has been traditionally favored, yet there exists a compelling case for migrating legacy software to Julia.
This talk focuses on language interoperability, specifically exploring how Awkward Array data structures can...
Go to contribution page -
Jiaxi Liu13/03/2024, 15:30
The Jiangmen Underground Neutrino Observatory (JUNO) is a next-generation large (20 kton) liquid-scintillator neutrino detector, which is designed to determine the neutrino mass ordering from its precise reactor neutrino spectrum measurement. Moreover, high-energy (GeV-level) atmospheric neutrino measurements could also improve its sensitivity to mass ordering via matter effects on...
Go to contribution page -
Dr Enrico Bothmann (U Goettingen)13/03/2024, 15:30Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
Parton-level event generators are one of the most computationally demanding parts of the simulation chain for the Large Hadron Collider. The rapid deployment of computing hardware different from the traditional CPU+RAM model in data centers around the world mandates a change in event generator design. These changes are required in order to provide economically and ecologically sustainable...
Go to contribution page -
Patrick Rieck (New York University (US))13/03/2024, 15:50
Supervised learning has been used successfully for jet classification and to predict a range of jet properties, such as mass and energy. Each model learns to encode jet features, resulting in a representation that is tailored to its specific task. But could the common elements underlying such tasks be combined in a single foundation model to extract features generically? To address this...
Go to contribution page -
Benedikt Hegner (CERN)13/03/2024, 15:50Track 1: Computing Technology for Physics ResearchOral
Detector studies for future experiments rely on advanced software tools to estimate performance and optimize their design and technology choices. Similarly, machine learning techniques require realistic data sets that allow estimating their performance beyond simplistic toy-models. The Key4hep software stack provides tools to perform detailed full simulation studies for a number of different...
Go to contribution page -
Herschel Chawdhry (Florida State University)13/03/2024, 15:50
Computer algebra plays an important role in particle physics calculations. In particular, the calculation and manipulation of large multi-variable polynomials and rational functions are key bottlenecks when calculating multi-loop scattering amplitudes. Recent years have seen the widespread adoption of interpolation techniques to target these bottlenecks. This talk will present new techniques...
Go to contribution page -
Maksymilian Graczyk (CERN)13/03/2024, 16:15
Considering the slowdown of Moore’s Law, an increasing need for energy efficiency, and larger and larger amounts of data to be processed in real-time and non-real-time in physics research, new computing paradigms are emerging in hardware and software. This requires bigger awareness from developers and researchers to squeeze as much performance as possible out of applications. However, this is...
Go to contribution page -
Bogdan Wiederspan (Hamburg University (DE))13/03/2024, 16:15
In a wide range of high-energy particle physics applications, machine learning methods have proven to be powerful tools to enhance various aspects of physics data analysis. In the past years, various ML models were also integrated in central workflows of the CMS experiment, leading to great improvements in reconstruction and object identification efficiencies. However, the continuation of...
Go to contribution page -
Jim Pivarski (Princeton University)13/03/2024, 16:15
In the 5+ years since their inception, Uproot and Awkward Array have become cornerstones for particle physics analysis in Python, both as direct user interfaces and as base layers for physicist-facing frameworks. Although this means that the software is achieving its mission, it also puts the need for stability in conflict with new, experimental developments. Boundaries must be drawn between...
Go to contribution page -
Francesco Vaselli (Scuola Normale Superiore & INFN Pisa (IT))13/03/2024, 16:15
The software toolbox used for "big data" analysis in the last few years is changing fast. The adoption of approaches able to exploit the new hardware architectures plays a pivotal role in boosting data processing speed, resources optimisation, analysis portability and analysis preservation.
The scientific collaborations in the field of High Energy Physics (e.g. the LHC experiments, the...
Go to contribution page -
Tommaso Tedeschi (Universita e INFN, Perugia (IT))13/03/2024, 16:15
In the era of digital twins a federated system capable of integrating High-Performance Computing (HPC), High-Throughput Computing (HTC), and Cloud computing can provide a robust and versatile platform for creating, managing, and optimizing Digital Twin applications. One of the most critical problems involves the logistics of wide-area, multi-stage workflows that move back and forth across...
Go to contribution page -
Ayan Roy13/03/2024, 16:15
Hydra is an extensible framework for training, managing, and deploying machine learning models for near real-time data quality monitoring. It is designed to take some of the burden off of shift crews by providing ‘round-the-clock’ monitoring of plots representing the data being collected. The Hydra system is backed by a database which is leveraged for near push button training and is...
Go to contribution page -
Michal Mazurek (CERN), Michał Mazurek13/03/2024, 16:15
In the LHCb experiment, during Run2, more than 90% of the computing resources available to the Collaboration were used for detector simulation. The detector and trigger upgrades introduced for Run3 allow the collection of larger datasets that, in turn, will require larger simulated samples. Despite the use of a variety of fast simulation options, the demands for simulations will far exceed the...
Go to contribution page -
Daniele Passaro (SNS & INFN Pisa (IT))13/03/2024, 16:15
The increasing computing power and bandwidth of programmable digital devices opens new possibilities in the field of real-time processing of HEP data. LHCb is exploiting these technology advancements in various ways to enhance its capability for complex data reconstruction in real time. Amongst them is the real-time reconstruction of hits in the VELO pixel detector, by means of cluster-finding...
Go to contribution page -
Jose Flix Molina (CIEMAT - Centro de Investigaciones Energéticas Medioambientales y Tec. (ES))13/03/2024, 16:15
The Large Hadron Collider at CERN in Geneva is poised for a transformative upgrade, preparing to enhance both its accelerator and particle detectors. This strategic initiative is driven by the tenfold increase in proton-proton collisions anticipated for the forthcoming high-luminosity phase scheduled to start by 2029. The vital role played by the underlying computational infrastructure, the...
Go to contribution page -
Jeng-Yuan Tsai13/03/2024, 16:15
The integration of geographically diverse computing facilities involves dynamically allocating unused resources, relocating workflows, and addressing challenges in heterogeneous, distributed, and opportunistic compute provisioning. Key hurdles include effective resource management, scheduling, data transfer optimization, latency reduction, and ensuring security and privacy. Our proposed...
Go to contribution page -
Robin Hofsaess (KIT - Karlsruhe Institute of Technology (DE))13/03/2024, 16:15
Today, the Worldwide LHC Computing Grid (WLCG) provides the majority of compute resources for the High Energy Physics (HEP) community. With its homogeneous Grid centers all around the world trimmed to a high throughput of data, it is tailored to support typical HEP workflows, offering an optimal environment for efficient job execution. ...
Go to contribution page -
Gabor David (Stony Brook University), Maxim Potekhin (Brookhaven National Laboratory (US))13/03/2024, 16:15
The PHENIX Collaboration has actively pursued a Data and Analysis Preservation program since 2019, the first such dedicated effort at RHIC. A particularly challenging aspect of this endeavor is preservation of complex physics analyses, selected for their scientific importance and the value of the specific techniques developed as a part of the research. For this, we have chosen one of the most...
Go to contribution page -
Tommaso Tedeschi (Universita e INFN, Perugia (IT))13/03/2024, 16:15
The need to ingest, process and analyze large datasets in as short a time as possible is typical of big data use cases. The data analysis in High Energy Physics at CERN in particular will require, ahead of the next phase of high luminosity at the LHC, access to large amounts of data (of order 100 PB/year). However, thanks to continuous developments on resource handling and software, it...
Go to contribution page -
Gordon Watts (University of Washington (US))13/03/2024, 16:15
Particle physics faces many challenges and opportunities in the coming decades, as reflected by the Snowmass Community Planning Process, which produced about 650 reports on various topics. These reports are a valuable source of information, but they are also difficult to access and query. In this work, we explore the use of Large Language Models (LLMs) and Retrieval Augmented Generation (RAG)...
Go to contribution page -
Vardan Gyurjyan (Jefferson Lab), Dr Vardan Gyurjyan13/03/2024, 16:15
The Thomas Jefferson National Accelerator Facility (JLab) has created and is currently working on various tools to facilitate streaming readout (SRO) for upcoming experiments. These include reconstruction frameworks with support for Artificial Intelligence/Machine Learning, distributed High Throughput Computing (HTC), and heterogeneous computing which all contribute significantly to swift data...
Go to contribution page -
Alina Lazar (Youngstown State University)13/03/2024, 16:15
Graph Neural Networks (GNNs) have demonstrated significant performance in addressing the particle track-finding problem in High-Energy Physics (HEP). Traditional algorithms exhibit high computational complexity in this domain as the number of particles increases. This poster addresses the challenges of training GNN models on large, rapidly evolving datasets, a common scenario given the...
Go to contribution page -
Kyungeon Choi (University of Texas at Austin (US))13/03/2024, 16:15
Effective data extraction has been one of the major challenges in physics analysis and will be even more important in the High-Luminosity LHC era. ServiceX provides novel data access and delivery by exploiting industry-driven software and recent high-energy physics software in the Python ecosystem. The experiment-agnostic nature of ServiceX will be described by introducing various types of transformer...
Go to contribution page -
Kalana Wijethunga (University of Moratuwa (LK))13/03/2024, 16:15
The ALICE experiment's Grid resources vary significantly in terms of memory capacity, CPU cores, and resource management. Memory allocation for scheduled jobs depends on the hardware constraints of the executing machines, system configurations, and batch queuing policies. The O2 software framework introduces multi-core tasks where deployed processes share resources. To accommodate these new...
Go to contribution page -
Alexandra Golub (University of Washington (US))13/03/2024, 16:15
The search for long-lived particles, a common extension to the Standard Model, requires a sophisticated neural network design, one that is able to accurately discriminate between signal and background. At the LHC's ATLAS experiment, beam-induced background (BIB) and QCD jets are the two significant sources of background. For this purpose, a recurrent neural network (RNN) with an adversary was...
Go to contribution page -
Christian Kiesling (Max Planck Institut für Physik (DE))13/03/2024, 16:50
We describe the principles and performance of the first-level ("L1") hardware track trigger of Belle II, based on neural networks. The networks use as input the results from the standard Belle II trigger, which provides "2D" track candidates in the plane transverse to the electron-positron beams. The networks then provide estimates for the origin of the 2D track candidates in direction of...
Go to contribution page -
Joshua Falco Beirer (CERN)13/03/2024, 16:50Track 1: Computing Technology for Physics ResearchOral
To increase the number of Monte Carlo simulated events that can be produced with the limited CPU resources available, the ATLAS experiment at CERN uses a variety of fast simulation tools in addition to the detailed simulation of the detector response with Geant4. The tools are deployed in a heterogeneous simulation infrastructure known as the Integrated Simulation Framework (ISF), which was...
Go to contribution page -
Piyush Raikwar (CERN)13/03/2024, 17:10Track 1: Computing Technology for Physics ResearchOral
Recently, transformers have proven to be a generalised architecture for various data modalities, i.e., ranging from text (BERT, GPT3), time series (PatchTST) to images (ViT) and even a combination of them (Dall-E 2, OpenAI Whisper). Additionally, when given enough data, transformers can learn better representations than other deep learning models thanks to the absence of inductive bias, better...
Go to contribution page -
Abhijith Gandrakota (Fermi National Accelerator Lab. (US))13/03/2024, 17:10
Navigating the demanding landscapes of real-time and offline data processing at the Large Hadron Collider (LHC) requires the deployment of fast and robust machine learning (ML) models for advancements in Beyond Standard Model (SM) discovery. This presentation explores recent breakthroughs in this realm, focusing on the use of knowledge distillation to imbue efficient model architectures with...
Go to contribution page -
Mustafa Andre Schmidt (Bergische Universitaet Wuppertal (DE))13/03/2024, 17:30Track 1: Computing Technology for Physics ResearchOral
The ATLAS experiment at the LHC heavily depends on simulated event samples produced by a full Geant4 detector simulation. This Monte Carlo (MC) simulation based on Geant4 was a major consumer of computing resources during the 2018 data-taking year and is anticipated to remain one of the dominant resource users in the HL-LHC era. ATLAS has continuously been working to improve the computational...
Go to contribution page -
Alexander Bogatskiy (Flatiron Institute, Simons Foundation), Timothy Hoffman13/03/2024, 17:30
Equivariant models have provided state-of-the-art performance in many ML applications, from image recognition to chemistry and beyond. In particle physics, the relevant symmetries are permutations and the Lorentz group, and the best-performing networks are either custom-built Lorentz-equivariant architectures or more generic large transformer models. A major unanswered question is whether the...
Go to contribution page -
Francesco Vaselli (Scuola Normale Superiore & INFN Pisa (IT))13/03/2024, 17:50Track 1: Computing Technology for Physics ResearchOral
The simulation of high-energy physics collision events is a key element for data analysis at present and future particle accelerators. The comparison of simulation predictions to data allows us to look for rare deviations that can be due to new phenomena not previously observed. We show that novel machine learning algorithms, specifically Normalizing Flows and Flow Matching, can be effectively...
Go to contribution page -
Monica Dessole (EP SFT)13/03/2024, 17:50
In recent years, we have seen a rapid increase in the variety of computational architectures, featuring GPUs from multiple vendors, a trend that will likely continue in the future with the rise of possibly new accelerators. The High Energy Physics (HEP) community employs a wide variety of algorithms for accelerators which are mostly vendor-specific, but there is a compelling demand to expand...
Go to contribution page -
Christian Pflaum (Cerabyte), Steffen Hellmold (Cerabyte, Inc.)14/03/2024, 09:00Plenary
-
Stefanie Lumnitz (ESA)14/03/2024, 09:30Plenary
-
Joosep Pata (National Institute of Chemical Physics and Biophysics (EE))14/03/2024, 10:00Plenary
-
Norman Christ (Columbia University)14/03/2024, 10:30Plenary
-
Aobo Li (UCSD)14/03/2024, 11:30Plenary
-
Lindsey Gray (Fermi National Accelerator Lab. (US))14/03/2024, 12:00Plenary
-
Lukas Alexander Heinrich (Technische Universitat Munchen (DE))14/03/2024, 12:30Plenary
-
Yuan-Tang Chou (University of Washington (US))14/03/2024, 14:30Track 1: Computing Technology for Physics ResearchOral
Recent advancements in track finding within the challenging environments expected in the High-Luminosity Large Hadron Collider (HL-LHC) have showcased the potential of Graph Neural Network (GNN)-based algorithms. These algorithms exhibit high track efficiency and reasonable resolutions, yet their computational burden on CPUs hinders real-time processing, necessitating the integration of...
Go to contribution page -
Peter Boyle14/03/2024, 14:30Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
I discuss software and algorithm development work in the lattice gauge theory community to develop performance portable software across a range of GPU architectures (Nvidia, AMD and Intel) and corresponding multi-scale-aware algorithm research to accelerate computation.
An example is given of a large effort to calculate the hadronic vacuum polarisation contribution to the anomalous magnetic...
Go to contribution page -
Linghui Wu (Chinese Academy of Sciences (CN))14/03/2024, 14:30
Stable operation of the detector is essential for high-quality data taking in high energy physics experiments. But it is not easy to keep the detector running stably throughout the data-taking period in an environment with high beam-induced background. In the BESIII experiment, serious beam-related background may cause instability of the high voltages in the drift chamber, which is the innermost sub...
Go to contribution page -
Wenxing Fang14/03/2024, 14:50
Measuring neutrino mass ordering (NMO) poses a fundamental challenge in neutrino physics. To address this, the Jiangmen Underground Neutrino Observatory (JUNO) experiment is scheduled to commence data collection in late 2024, with the ambitious goal of determining the NMO at a 3-sigma confidence level within a span of 6 years. A key factor in achieving this is ensuring a high-quality energy...
Go to contribution page -
Dmytro Kresan (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))14/03/2024, 14:50Track 1: Computing Technology for Physics ResearchOral
The scientific program of the future FAIR accelerator covers a broad spectrum of topics in modern nuclear and atomic physics. This diversity leads to a multitude of use cases and workflows for the analysis of experimental data and simulations. To meet the needs of such a diverse user group, a flexible and transparent High-Performance Computing (HPC) system is required to accommodate all FAIR...
Go to contribution page -
Yannick Ulrich (Universitaet Bern (CH))14/03/2024, 14:50Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
McMule, a Monte Carlo for MUons and other LEptons, implements many major QED processes at NNLO (eg. $ee\to ee$, $e\mu\to e\mu$, $ee\to\mu\mu$, $\ell p\to \ell p$, $\mu\to\nu\bar\nu e$) including effects from the lepton masses. This makes McMule suitable for predictions for low-energy experiments such as MUonE, CMD-III, PRad, or MUSE.
Recently, McMule gained...
Go to contribution page -
Tommaso Tedeschi (Universita e INFN, Perugia (IT))14/03/2024, 15:10
The CMS experiment has recently established a new Common Analysis Tools (CAT) group. The CAT group implements a forum for the discussion, dissemination, organization and development of analysis tools, broadly bridging the gap between the CMS data and simulation datasets and the publication-grade plots and results. In this talk we discuss some of the recent developments carried out in the...
Go to contribution page -
Tim Voigtlaender (KIT - Karlsruhe Institute of Technology (DE))14/03/2024, 15:10Track 1: Computing Technology for Physics ResearchOral
With the increasing usage of Machine Learning (ML) in High Energy Physics (HEP), the breadth of new analyses brings a large spread in compute resource requirements, especially when it comes to GPU resources. For institutes, like the Karlsruhe Institute of Technology (KIT), that provide GPU compute resources to HEP via their batch systems or the Grid, a high throughput, as well as energy...
Go to contribution page -
Sergey Volkov14/03/2024, 15:10Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
Results of the total 5-loop quantum electrodynamics calculation for the electron anomalous magnetic moment will be presented. These results provide the first check of the previously known value obtained by T. Aoyama, M. Hayakawa, T. Kinoshita, M. Nio. A comparison will be provided. The results for the Feynman diagrams without lepton loops were presented by the author in 2018-2019. The remaining part...
Go to contribution page -
Dr Elise de Doncker (Western Michigan University)14/03/2024, 15:30Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
We recently explored methods for 2-loop Feynman integrals in the Euclidean or physical kinematical region, using numerical extrapolation and adaptive iterated integration. Our current goal is to address 3-loop two-point integrals with up to 6 internal lines.
Using double extrapolation, the integral $\mathcal I$ is approximated numerically by the limit of a sequence of integrals $\mathcal...
Go to contribution page -
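The double-extrapolation idea in the abstract above — approximating an integral as the limit of a sequence of regulated integrals — can be sketched numerically. This is a hedged toy illustration of the general technique, not the speaker's code: the "regulated integral" is a stand-in function with a known limit, and the limit at vanishing regulator is read off from a polynomial fit in the regulator.

```python
import numpy as np

def I(eps):
    # Toy stand-in for a regulated integral I(eps); its exact limit as
    # eps -> 0 is 2.0. A real application would evaluate a Feynman
    # integral numerically at each eps_l.
    return 2.0 + 3.0 * eps - 5.0 * eps ** 2

# Geometric sequence of regulator values eps_l = 0.5 * (1/2)**l
eps = 0.5 * 0.5 ** np.arange(6)
vals = I(eps)

# Fit a low-degree polynomial in eps through the sampled values; the
# constant term is the extrapolated limit at eps = 0.
coeffs = np.polyfit(eps, vals, deg=2)
limit = coeffs[-1]
print(limit)  # close to the exact limit 2.0
```

Real extrapolation schemes (e.g. Richardson or epsilon-algorithm variants) are more robust than a plain polynomial fit, but the structure — evaluate along a regulator sequence, then take the limit — is the same.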
Christian Gutschow (UCL (UK))14/03/2024, 15:30
In the contemporary landscape of advanced statistical analysis toolkits, ranging from Bayesian inference to machine learning, the seemingly straightforward concept of a histogram often goes unnoticed. However, the power and compactness of partially aggregated, multi-dimensional summary statistics with a fundamental connection to differential and integral calculus make them formidable...
Go to contribution page -
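The abstract above describes histograms as partially aggregated summary statistics with a connection to differential and integral calculus. A minimal sketch of that viewpoint, with toy data and not the speaker's toolkit: once events are binned, the cumulative sum acts as an empirical integral (CDF), and coarser binnings are pure aggregation that needs no raw data.

```python
import numpy as np

rng = np.random.default_rng(0)
events = rng.normal(0.0, 1.0, size=100_000)

# Fill a 1D histogram: 80 equal-width bins on [-4, 4]
edges = np.linspace(-4, 4, 81)
counts, _ = np.histogram(events, bins=edges)

# "Integral calculus": the cumulative sum of bin counts is the empirical CDF
cdf = np.cumsum(counts) / counts.sum()

# Re-binning is aggregation of existing bins — merge every 4 bins into 1
coarse = counts.reshape(-1, 4).sum(axis=1)

print(counts.sum(), coarse.sum())  # both equal the number of in-range events
```

The key property is that `counts` alone — a compact summary — supports re-binning, normalisation, and CDF construction after the original events are discarded.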
Katarzyna Schmidt (University of Silesia (PL))14/03/2024, 15:30Track 1: Computing Technology for Physics ResearchOral
The NA61/SHINE experiment is a prominent venture in high-energy physics, located at the SPS accelerator within CERN. Recently, the experiment's physics program has been extended, which necessitated the upgrade of detector hardware and software for new physics purposes.
The upgrade included a fundamental modification of the readout electronics (front-end) in the detecting system core of the...
Go to contribution page -
Yeonju Go (Brookhaven National Laboratory (US))14/03/2024, 15:50Track 1: Computing Technology for Physics ResearchOral
AI generative models, such as generative adversarial networks (GANs), variational auto-encoders, and normalizing flows, have been widely used and studied as efficient alternatives to traditional scientific simulations, such as Geant4. However, they have several drawbacks, such as training instability and an inability to cover the entire data distribution, especially for regions where data are...
Go to contribution page -
Gordon Watts (University of Washington (US))14/03/2024, 15:50
Declarative Analysis Languages (DALs) are a paradigm for high-energy physics analysis that separates the desired results from the implementation details. DALs enable physicists to use the same software to work with different experiment's data formats, without worrying about the low-level details or the software infrastructure available. DALs have gained popularity since the HEP Analysis...
Go to contribution page -
Federica Devoto14/03/2024, 15:50Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
In this talk I will present recent developments on the calculation of five-point scattering amplitudes in massless QCD beyond the leading-colour approximation.
I will discuss the methodology that we pursued to compute these highly non-trivial amplitudes. In this respect, I will argue that it is possible to tackle and tame the seemingly intractable algebraic complexity at each step of the...
Go to contribution page -
Mohammad Atif (Brookhaven National Laboratory)14/03/2024, 16:10
We present a framework based on Catch2 to evaluate performance of OpenMP's target offload model via micro-benchmarks. The compilers supporting OpenMP's target offload model for heterogeneous architectures are currently undergoing rapid development. These developments influence performance of various physics applications in different ways. This framework can be employed to track the impact of...
Go to contribution page -
Sitian Qian (Peking University (CN))14/03/2024, 16:10
The diffusion model has demonstrated promising results in image generation, recently becoming mainstream and representing a notable advancement for many generative modeling tasks. Prior applications of the diffusion model for both fast event and detector simulation in high energy physics have shown exceptional performance, providing a viable solution to generate sufficient statistics within a...
Go to contribution page -
Ozgur Ozan Kilic (Brookhaven National Laboratory)14/03/2024, 16:10
With the increased integration of machine learning and the need for the scale of high-performance computing infrastructures, scientific workflows are undergoing a transformation toward greater heterogeneity. In this evolving landscape, adaptability has emerged as a pivotal factor in accelerating scientific discoveries through efficient execution of workflows. To increase resource utilization,...
Go to contribution page -
Julien Esseiva (Lawrence Berkeley National Lab. (US))14/03/2024, 16:10
Celeritas is a Monte Carlo (MC) detector simulation library that exploits current and future heterogeneous leadership computing facilities (LCFs). It is specifically designed for, but not limited to, High-Luminosity Large Hadron Collider (HL-LHC) simulations. Celeritas implements full electromagnetic (EM) physics, supports complex detector geometries, and runs on CPUs and Nvidia or AMD GPUs....
Go to contribution page -
Bogdan Wiederspan (Hamburg University (DE))14/03/2024, 16:10
To study and search for increasingly rare physics processes at the LHC, a staggering amount of data needs to be analyzed with progressively complex methods. Analyses involving tens of billions of recorded and simulated events, multiple machine learning algorithms for different purposes, and an amount of 100 or more systematic variations are no longer uncommon. These conditions impose a complex...
Go to contribution page -
lei wang (Institute of High Energy Physics)14/03/2024, 16:10
High Energy Photon Source (HEPS) is a crucial scientific research facility that necessitates efficient, reliable, and secure services to support a wide range of experiments and applications. However, traditional physical server-based deployment methods suffer from issues such as low resource utilization, limited scalability, and high maintenance costs. Therefore, the objective of this study is...
Go to contribution page -
Zoë Bilodeau (Princeton University (US))14/03/2024, 16:10
When working with columnar data file formats, it is easy for users to devote too much time to file manipulation. With Python, each file conversion requires multiple lines of code and the use of multiple I/O packages. Some conversions are a bit tricky if the user isn’t very familiar with certain formats, or if they need to work with data in smaller batches for memory management. To try and...
Go to contribution page -
Qingbao Hu (IHEP)14/03/2024, 16:10
Traditionally, data centers provide computing services to the outside world, with security policies that separate the intranet from the outside based on firewalls and other security defense boundaries. Users access the data center intranet through VPN, and individuals or endpoints connected through such remote methods receive a higher level of trust to use computing services than individuals...
Go to contribution page -
Francesco Minarini, Dr Gianluca Bianco (Universita e INFN, Bologna (IT)), Simone Gasperini (Universita e INFN, Bologna (IT))14/03/2024, 16:10
The environmental impact of computing activities is starting to be acknowledged as relevant and several scientific initiatives and research lines are gathering momentum in the scientific community to identify and curb it. Governments, industries, and commercial businesses are now holding high expectations for quantum technologies as they have the potential to create greener and faster methods...
Go to contribution page -
Fabio Andrijauskas (Univ. of California San Diego (US))14/03/2024, 16:10
Extensive data processing is becoming commonplace in many fields of science, especially in computational physics. Distributing data to processing sites and providing methods to share the data with others efficiently has become essential. The Open Science Data Federation (OSDF) builds upon the successful StashCache project to create a global data distribution network. The OSDF expands the...
Go to contribution page -
Michal Mazurek (CERN)14/03/2024, 16:10
Efficient fast simulation techniques in high energy physics experiments are crucial in order to produce the necessary amount of simulated samples. We present a new component in the Gaussino core simulation framework that facilitates the integration of fast simulation hooks in Geant4 with machine learning serving, based on Gaudi’s scheduling and data processing tools. The implementation...
Go to contribution page -
Yongbin Feng (Fermi National Accelerator Lab. (US))14/03/2024, 16:10
Computing demands for large scientific experiments, such as the CMS experiment at the CERN LHC, will increase dramatically in the next decades. To complement the future performance increases of software running on central processing units (CPUs), explorations of coprocessor usage in data processing hold great potential and interest. Coprocessors are a class of computer processors that...
Go to contribution page -
Tianle Wang (Brookhaven National Lab)14/03/2024, 16:10
There has been a significant expansion in the variety of hardware architectures in recent years, including different GPUs and other specialized computing accelerators. For better performance portability, various programming models have been developed across these computing systems, including Kokkos, SYCL, OpenMP, and others. Among these programming models, the C++ standard parallelism (std::par) has gained...
Go to contribution page -
Daniel Lersch (Jefferson Lab)14/03/2024, 16:10
As part of the Scientific Discovery through Advanced Computing (SciDAC) program, the Quantum Chromodynamics Nuclear Tomography (QuantOM) project aims to analyze data from Deep Inelastic Scattering (DIS) experiments conducted at Jefferson Lab and the upcoming Electron Ion Collider. The DIS data analysis is performed on an event-level by combining the input from theoretical and experimental...
Go to contribution page -
Marc Tost (University of Texas at Austin (US))14/03/2024, 16:10
As the LHC continues to collect larger amounts of data, and in light of the upcoming HL-LHC, using tools that allow efficient and effective analysis of HEP data becomes more and more important. We present a test of the applicability and user-friendliness of several columnar analysis tools, most notably ServiceX and Coffea, by completing a full Run-2 ATLAS analysis. Working collaboratively with...
Go to contribution page -
Dr Lei Wang (Institute of High Energy Physics)14/03/2024, 16:10
The Advanced Photon Source (APS) in the US achieved high-precision training and inference on NVIDIA GPU clusters using the PtychoNN algorithm combined with the ePIE conjugate-gradient method. Building on that idea, we developed a new model, W1-Net, which trains faster and delivers higher inference precision. We then ported the model to a DCU cluster. However, the performance...
Go to contribution page -
Matthias Vigl (Technische Universitat Munchen (DE))14/03/2024, 16:10
The ATLAS experiment at the LHC relies on crucial tools written in C++ to calibrate physics objects and estimate systematic uncertainties in the event-loop analysis environment. However, these tools face compatibility challenges with the columnar analysis paradigm that operates on many events at once in Python/Awkward or RDataFrame environments. Those challenges arise due to the intricate...
Go to contribution page -
Ali Garabaglu (University of Washington (US))14/03/2024, 16:50Track 1: Computing Technology for Physics ResearchOral
FASER, the ForwArd Search ExpeRiment, is an LHC experiment located 480 m downstream of the ATLAS interaction point along the beam collision axis. FASER has been taking collision data since the start of LHC Run3 in July 2022. The first physics results were presented in March 2023 [1,2], including the first direct observation of collider neutrinos. FASER includes four identical tracker stations...
Go to contribution page -
Dr Juehang Qin (Rice University)14/03/2024, 16:50
Traditionally, analyses of data from experiments such as LZ and XENONnT have relied on summary statistics of large sets of simulated data, generated using emissions models for particle interactions in liquid xenon, such as NEST. As these emissions models are probabilistic in nature, they are natural candidates for implementation in a probabilistic programming framework. This would also allow...
Go to contribution page -
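The abstract above contrasts summary statistics of simulated data with probabilistic programming. A toy stand-in (pure Python, NOT the NEST model; the light-yield parameterization and all numbers are invented for illustration) shows the traditional workflow it describes — sample a probabilistic emissions model many times, then reduce the samples to summary statistics:

```python
import math
import random
import statistics

def simulate_events(n_events, energy_kev, light_yield=10.0, seed=0):
    """Toy emissions model: draw a Poisson-distributed photon count
    per event, with mean proportional to the deposited energy."""
    rng = random.Random(seed)
    mean_photons = light_yield * energy_kev

    def poisson(lam):
        # Knuth's method: multiply uniforms until the product
        # drops below exp(-lam); fine for small means.
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    return [poisson(mean_photons) for _ in range(n_events)]

def summary_statistics(samples):
    """Reduce the simulated dataset to the summary statistics a
    traditional (non-probabilistic-programming) analysis would use."""
    return {"mean": statistics.mean(samples),
            "stdev": statistics.pstdev(samples)}
```

In a probabilistic programming framework, the sampling model itself would be a differentiable program conditioned on observed data, rather than a generator reduced to summaries after the fact.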
Qiulan Huang (Brookhaven National Laboratory (US))14/03/2024, 17:10
Scientific experiments and computations, particularly in Nuclear Physics (NP) and High Energy Physics (HEP) programs, are generating and accumulating data at an unprecedented rate. Big data presents opportunities for groundbreaking scientific discoveries. However, managing this vast amount of data cost-effectively while facilitating efficient data analysis within a large-scale, multi-tiered...
Go to contribution page -
Francesca Pastore (Royal Holloway, University of London)14/03/2024, 17:10Track 1: Computing Technology for Physics ResearchOral
The ATLAS experiment at CERN will be upgraded for the "High Luminosity LHC", with collisions due to start in 2029. In order to deliver an order of magnitude more data than previous LHC runs, 14 TeV protons will collide with an instantaneous luminosity of up to 7.5 × 10^34 cm^-2 s^-1, resulting in higher pileup and data rates. This increase brings new requirements and challenges for the trigger...
Go to contribution page -
Aimilianos Koulouris (CERN)14/03/2024, 17:30Track 1: Computing Technology for Physics ResearchOral
The ATLAS trigger system will be upgraded for the Phase 2 period of LHC operation. This system will include a Level-0 (L0) trigger based on custom electronics and firmware, and a high-level software trigger running on off-the-shelf hardware. The upgraded L0 trigger system uses information from the calorimeters and the muon trigger detectors. Once information from all muon trigger sectors has...
Go to contribution page -
Daniel Lersch (Jefferson Lab)14/03/2024, 17:30
As part of the Scientific Discovery through Advanced Computing (SciDAC) program, the Quantum Chromodynamics Nuclear Tomography (QuantOM) project aims to analyze data from Deep Inelastic Scattering (DIS) experiments conducted at Jefferson Lab and the upcoming Electron Ion Collider. The DIS data analysis is performed on an event-level by leveraging nuclear theory models and accounting for...
Go to contribution page -
Manfred Peter Fackeldey (RWTH Aachen University (DE))14/03/2024, 17:50
dilax is a software package for statistical inference using likelihood functions of binned data. It fulfils three key concepts: performance, differentiability, and object-oriented statistical model building. dilax is built on JAX, a powerful autodifferentiation Python framework. By making every component in dilax a "PyTree", each component can be jit-compiled (jax.jit), vectorized...
Go to contribution page -
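The abstract above is about likelihood functions of binned data. A minimal pure-Python sketch (standard library only — this is not the dilax API, and the function name is invented) of the quantity at the core of such packages, the Poisson negative log-likelihood of observed bin counts given expected yields:

```python
import math

def binned_poisson_nll(observed, expected):
    """Negative log-likelihood of observed bin counts under Poisson
    expectations: sum over bins of mu - n*log(mu) + log(n!)."""
    nll = 0.0
    for n, mu in zip(observed, expected):
        nll += mu - n * math.log(mu) + math.lgamma(n + 1)
    return nll
```

In a JAX-based package such as dilax, the expected yields would be produced by jit-compiled, differentiable model components, so gradients of this likelihood with respect to model parameters come for free; here plain floats stand in.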
Thomas Boettcher (University of Cincinnati (US))14/03/2024, 17:50Track 1: Computing Technology for Physics ResearchOral
Since 2022, the LHCb detector has been taking data with a full software trigger at the LHC proton-proton collision rate, implemented on GPUs in the first stage and CPUs in the second stage. This setup allows alignment and calibration to be performed online, and physics analyses to be run directly on the output of the online reconstruction, following the real-time analysis paradigm.
This talk will give...
Go to contribution page -
Ao Xu (Universita & INFN Pisa (IT))14/03/2024, 18:10Track 1: Computing Technology for Physics ResearchOral
Finding track segments downstream of the magnet is an important and computationally expensive task that LHCb has recently ported to the first stage of its new GPU-based trigger for the LHCb Upgrade I. These segments are essential to form all good physics tracks with a precision momentum measurement, when combined with those reconstructed in the vertex track detector, and to reconstruct...
Go to contribution page -
Deborah Bard (NERSC)15/03/2024, 08:45Plenary
-
Yi-Zhuang You (UCSD)15/03/2024, 09:15Plenary
-
Herschel Chawdhry (Florida State University)15/03/2024, 09:45Plenary
-
Jack Clayton (Modular)15/03/2024, 10:15Plenary
-
Tianzi Song (Sun Yat-Sen University (CN))15/03/2024, 10:45
-
Bogdan Wiederspan (Hamburg University (DE)), Marcel Rieger (Hamburg University (DE))15/03/2024, 10:50
-
Kalana Wijethunga (University of Moratuwa (LK))15/03/2024, 10:55
-
Benedikt Hegner (CERN)15/03/2024, 11:30Plenary
-
Lukas Alexander Heinrich (Technische Universitat Munchen (DE))15/03/2024, 11:50Plenary
-
Chiara Signorile15/03/2024, 12:10Plenary
-
David Britton (University of Glasgow (GB)), Jennifer Ngadiuba (FNAL), Dr Jerome LAURET (Brookhaven National Laboratory)15/03/2024, 12:30Plenary
-
Carter M. Gustin (Tufts University)Track 3: Computations in Theoretical Physics: Techniques and MethodsOral
Quantum simulation of quantum field theories offers a new way to investigate properties of the fundamental constituents of matter. We develop quantum simulation algorithms based on the light-front formulation of relativistic field theories. The process of quantizing the system in light-cone coordinates will be explained for a Hamiltonian formulation, which becomes block diagonal, each block...
Go to contribution page -