25–29 May 2026
Chulalongkorn University
Asia/Bangkok timezone

Contribution List

625 contributions
  1. Phat Srimanobhas (Chulalongkorn University (TH))
    25/05/2026, 09:00
  2. Simone Campana (CERN)
    25/05/2026, 09:40
    Track 7 - Computing infrastructure and sustainability
    Plenary Presentation

    The high-energy physics (HEP) community is preparing to address the computing challenges of the coming decade. The upgrade program of the Large Hadron Collider at CERN (HL-LHC) will generate an unprecedented volume and complexity of data, requiring advanced solutions for processing, analysis, archiving, and simulation. In parallel, other HEP experiments, such as DUNE, will enter their...

  3. Zoe Holmes (EPFL)
    25/05/2026, 10:05
    Track 7 - Computing infrastructure and sustainability
    Plenary Presentation

    Quantum hardware has made striking progress, and I will open with a brief theorist’s snapshot of where today’s devices stand: what current qubit platforms can do reliably and what the roadmaps of leading providers suggest for the next few years. The central theme of the talk, however, is the field’s biggest open challenge: finding compelling uses—problems where quantum devices can produce real...

  4. 25/05/2026, 10:30
  5. 25/05/2026, 11:10
  6. Dr Krich Nasingkun (Thailand Supercomputer Center (ThaiSC))
    25/05/2026, 11:30
    Track 7 - Computing infrastructure and sustainability
    Plenary Presentation

    High Performance Computing (HPC) has long been a cornerstone of large-scale scientific discovery. Today, its role is evolving beyond traditional simulation-driven workloads toward a broader paradigm that integrates data-intensive computing and artificial intelligence, particularly large language models (LLMs). This transformation is reshaping how HPC systems are designed and...

  7. Shinsuke Ota (RCNP, Osaka University)
    25/05/2026, 12:00
    Track 2 - Online and real-time computing
    Plenary Presentation

    The rapid evolution of detector technologies and increasing beam intensities in nuclear physics experiments are driving a paradigm shift in data acquisition (DAQ) systems, from conventional trigger-based schemes to streaming-readout architectures. Challenges associated with trigger generation in complex detector systems, as well as the growing data throughput and trigger rates, are becoming...

  8. Gabriele Cimador (CERN, Università and INFN Torino)
    25/05/2026, 13:45
    Track 6 - Software environment and maintainability
    Oral Presentation

    The ALICE GPU TPC reconstruction is implemented by many GPU functions (kernels). Each kernel requires a block and a grid size to control GPU thread spawning, and may also need additional parameters like memory buffer sizes or pre-processing flags. Moreover, ALICE undertakes an aggressive GPU optimization by mapping grid and block sizes to launch bounds, optional compiler hints affecting...

  9. Eric Vaandering (Fermi National Accelerator Lab. (US))
    25/05/2026, 13:45
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    In 2025, Fermilab transitioned from its legacy tape storage management software, Enstore, to CTA (CERN Tape Archive).

    The replacement system was adapted to satisfy Fermilab use cases, including the ability to read existing data from Enstore-formatted tapes. The new system also includes the ability to read aggregated files from containers, which were managed by Enstore, to maintain good...

  10. Caterina Doglioni (The University of Manchester (GB))
    25/05/2026, 13:45
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    The International Committee for Future Accelerators (ICFA) has mandated a panel to address various aspects of the data lifecycle with a focus on open science and FAIR practices - FAIR standing for Findability, Accessibility, Interoperability and Reusability of digital assets. A key indicator of success in this context is the long-term usability of research data by members of experimental...

  11. Firdaus Soberi (The University of Edinburgh (GB))
    25/05/2026, 13:45
    Track 5 - Event generation and simulation
    Oral Presentation

    The ATLAS experiment at the Large Hadron Collider uses the Geant4 toolkit to simulate detailed Monte Carlo events spanning a broad range of physics processes. However, the full simulation is computationally expensive, with the main bottleneck originating from the modelling of particle showers in the calorimeter systems. To meet increasing demands, especially for the high-luminosity LHC era,...

  12. Mr Andrea Chierici (Universita e INFN, Bologna (IT))
    25/05/2026, 13:45
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    CNAF is the national center of INFN (Italian Institute for Nuclear Physics) dedicated to Research and Development in Information and Communication Technologies. As the central computing facility of INFN, CNAF has been historically involved in the management and evolution of the most important information and data transmission services in Italy, supporting INFN activities at both national and...

  13. Stephan Hageboeck (CERN)
    25/05/2026, 13:45
    Track 9 - Analysis software and workflows
    Oral Presentation

    ROOT's RDataFrame is a declarative analysis interface to define modern analysis workflows in C++ or Python, which are executed efficiently either locally using TBB, or in a distributed manner using Dask or Spark. Its seamless integration with TTree and RNTuple makes it an ideal tool for performant and space-efficient data analysis in HEP. This contribution will highlight recent and upcoming...
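    As a rough illustration of the declarative, lazily-evaluated style that RDataFrame exposes, here is a toy sketch in plain Python (a stand-in, not the actual ROOT API; the event data, column names, and cuts are hypothetical):

```python
# Toy illustration of the declarative pattern behind ROOT's RDataFrame:
# Define/Filter steps are recorded up front into a deferred computation
# graph, and nothing executes until a result (Count, Mean) is requested.
class ToyDataFrame:
    def __init__(self, rows, steps=None):
        self._rows = rows
        self._steps = steps or []          # deferred computation graph

    def Define(self, name, func):
        return ToyDataFrame(self._rows, self._steps + [("define", name, func)])

    def Filter(self, func):
        return ToyDataFrame(self._rows, self._steps + [("filter", None, func)])

    def _materialize(self):
        out = []
        for row in self._rows:
            row = dict(row)
            keep = True
            for kind, name, func in self._steps:
                if kind == "define":
                    row[name] = func(row)
                elif not func(row):        # a filter step rejected the row
                    keep = False
                    break
            if keep:
                out.append(row)
        return out

    def Count(self):
        return len(self._materialize())

    def Mean(self, column):
        rows = self._materialize()
        return sum(r[column] for r in rows) / len(rows)

# Hypothetical example data and cuts, for illustration only.
events = [{"pt": pt} for pt in (5.0, 12.0, 25.0, 40.0)]
df = (ToyDataFrame(events)
      .Define("pt2", lambda r: r["pt"] ** 2)
      .Filter(lambda r: r["pt"] > 10.0))
print(df.Count())        # number of events passing the pt cut
print(df.Mean("pt2"))    # mean of pt^2 over passing events
```

    In the real RDataFrame, the same chaining style additionally allows the backend (TBB locally, or Dask/Spark for distributed execution) to schedule the declared graph efficiently.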

  14. Martin Beyer
    25/05/2026, 13:45
    Track 3 - Offline data processing
    Oral Presentation

    The Compressed Baryonic Matter experiment (CBM) at FAIR is designed to explore the QCD phase diagram at high baryon densities with interaction rates up to 10 MHz using triggerless free-streaming data acquisition. The CBM Ring Imaging Cherenkov detector (RICH) contributes to the overall PID by identification of electrons from the lowest momenta up to 6-8 GeV/c, with a pion suppression factor of...

  15. Tomas Lindén (Helsinki Institute of Physics (FI))
    25/05/2026, 13:45
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    HPC resources are already important for the LHC HEP experiments, and they will become even more so as new CPU resources increasingly appear in HPC machines and as the High-Luminosity LHC (HL-LHC) era demands still more computing capacity. HPC resources can be challenging to adapt to HEP workflows. It would be preferable to have an established method of HPC enabling using standard open...

  16. Stefanie Morgenstern (Heidelberg University (DE))
    25/05/2026, 13:45
    Track 2 - Online and real-time computing
    Oral Presentation

    The ATLAS experiment in the LHC Run 3 uses a two-level trigger system to select events of interest, reducing the 40 MHz bunch crossing rate to a recorded rate of up to 3 kHz of fully-built physics events. The trigger system is composed of a hardware-based Level-1 trigger and a software-based High Level Trigger. The selection of events by the High Level Trigger is based on a wide variety...

  17. Berk Balci (CERN)
    25/05/2026, 13:45
    Track 4 - Distributed computing
    Oral Presentation

    Identity and Access Management (IAM) in a large scale research collaboration typically serves both organisational and distributed community needs. CERN operates at this intersection, balancing local institutional requirements with those of a worldwide ecosystem of scientific partners.

    This presentation will outline the evolution of CERN’s Single Sign-On platform (based on Keycloak) and the...

  18. Iason Krommydas (Rice University (US))
    25/05/2026, 14:03
    Track 9 - Analysis software and workflows
    Oral Presentation

    The Coffea (Columnar Object Framework for Effective Analysis) framework continues to evolve as a cornerstone tool for high-energy physics data analysis, providing physicists with efficient, scalable solutions for processing complex event data. This talk presents the current status of Coffea, highlighting...

  19. Dr David Crooks (UKRI STFC)
    25/05/2026, 14:03
    Track 4 - Distributed computing
    Oral Presentation

    The risk of cyber attack against members of the research and education sector remains persistently high, with several recent high-visibility incidents, including a well-reported ransomware attack against the British Library. As reported previously, we must work collaboratively to defend our community against such attacks, notably through the active use of threat intelligence shared with trusted...

  20. Patin Inkaew (Helsinki Institute of Physics (FI))
    25/05/2026, 14:03
    Track 2 - Online and real-time computing
    Oral Presentation

    Pioneered by CMS in Run 1, the “data scouting” technique has established a now-widespread trend across the LHC experiments: during Run 2, the LHCb and ATLAS collaborations implemented their “turbo” and “trigger-level analysis” streams, respectively.

    The “data scouting” technique overcomes the limitations of the conventional data processing strategies with nonstandard uses of trigger and data...

  21. Pavel Weber (Karlsruhe Institute of Technology (KIT))
    25/05/2026, 14:03
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The EGI Helpdesk, also known as Global Grid User Support (GGUS), is operated within the EGI federation as a core support service for the Worldwide LHC Computing Grid (WLCG) and other distributed research infrastructures, providing coordinated incident handling and service support across hundreds of computing centres. To address growing scalability, interoperability, and sustainability...

  22. Hugo Gonzalez Labrador (CERN)
    25/05/2026, 14:03
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    The open sharing and re-use of scientific data is ever more important, both to meet the demands of transparency and reproducibility and to maximize the scientific return of large and small experiments. The FAIR principles (Findable, Accessible, Interoperable, Re-usable) require efficient data publication, discovery, and long-term preservation, which often means costly duplication of data...

  23. Oz Amram (Fermi National Accelerator Lab. (US))
    25/05/2026, 14:03
    Track 5 - Event generation and simulation
    Oral Presentation

    In the upcoming High Luminosity LHC era, detector simulation will face computing resource constraints; at the same time CMS will be upgraded with the new High Granularity Calorimeter (HGCal), which is more computationally intensive to simulate. This computing challenge motivates the use of generative machine learning models as surrogates to replace full physics-based simulation of particle showers in the...

  24. Pablo Llopis Sanmillan (EPFL), Ms Rohini Joshi (FHNW)
    25/05/2026, 14:03
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The SKA SRCNet project will provide a globally distributed network of compute resources to enable scientific analysis of the vast data volumes produced by the Square Kilometre Array. These resources are contributed by institutions across multiple countries and are therefore highly heterogeneous, creating challenges in defining consistent compute pledges, accounting, and fair resource usage...

  25. Dr Andrea Bocci (CERN)
    25/05/2026, 14:03
    Track 6 - Software environment and maintainability
    Oral Presentation

    The rapid evolution of computing architectures toward increasing heterogeneity — combining multi-core CPUs with accelerators from multiple vendors — poses major challenges for performance, portability, and long-term sustainability of high-energy physics (HEP) software. Maintaining separate implementations for each architecture is costly, error-prone, and difficult to scale as both hardware and...

  26. Fernando Harald Barreiro Megino (University of Texas at Arlington), Misha Borodin (University of Texas at Arlington (US))
    25/05/2026, 14:03
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    In the current ATLAS Distributed Computing model, available disk capacity is insufficient to store even a single complete copy of all data actively in use. Consequently, tape systems serve not only as long-term backups but also as primary data sources. Efficient utilization of tapes at the ATLAS scale requires specialized orchestration mechanisms, as tape access is inherently slower and...

  27. Wahid Redjeb (CERN)
    25/05/2026, 14:03
    Track 3 - Offline data processing
    Oral Presentation

    The increase in luminosity and pileup at the High-Luminosity LHC (HL-LHC) will place unprecedented demands on the CMS experiment, requiring major advances in both detector technology and event reconstruction. Among the planned upgrades, the High-Granularity Calorimeter (HGCAL) will replace the current endcap calorimeters, providing fine spatial segmentation and precision timing. These features...

  28. Anastasiia Petrovych (CERN)
    25/05/2026, 14:21
    Track 2 - Online and real-time computing
    Oral Presentation

    Machine learning models used in real-time and resource-constrained environments, such as hardware triggers, online reconstruction pipelines, and FPGA/GPU inference systems, must satisfy strict latency, memory, and numerical precision requirements. Achieving these targets typically requires extensive tuning of training schedules, quantization settings, sparsity levels, and architectural...

  29. Xin Zhao (Brookhaven National Laboratory (US))
    25/05/2026, 14:21
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    The High Luminosity upgrade to the LHC (HL-LHC) is expected to generate scientific data on the scale of multiple exabytes. To address this unprecedented data storage challenge, the ATLAS experiment launched the Data Carousel project in 2018, which entered production in 2020. In the Data Carousel workflow, jobs receive input data from tapes seamlessly for user payloads. It represents a...

  30. Tim Voigtlaender (KIT - Karlsruhe Institute of Technology (DE))
    25/05/2026, 14:21
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    High Energy Particle Physics (HEP) relies on efficient and sustainable computing infrastructures operating at a global scale. These infrastructures must support a broad range of workloads, including machine learning applications, large-scale production campaigns, and heterogeneous end-user analysis jobs. Ensuring that available computing resources can be effectively utilized across this...

  31. Zhihao Li (Institute of High Energy Physics, Chinese Academy of Sciences)
    25/05/2026, 14:21
    Track 5 - Event generation and simulation
    Oral Presentation

    The CEPC is a proposed high luminosity e+e− collider designed for precision measurements of the Higgs, W, and Z bosons. Its reference detector incorporates a long bar crystal ECAL, which employs long, narrow crystal bars arranged in orthogonal layers to deliver fine 3D shower imaging and excellent compatibility with Particle Flow reconstruction. [1]

    For CEPC physics analyses, large volumes...

  32. Jack Charlie Munday, Ricardo Rocha (CERN)
    25/05/2026, 14:21
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    Over the past few years, CERN has transitioned a significant portion of its IT services and workloads to cloud-native environments, hosted on the CERN Kubernetes Service. These workloads leverage affinity policies within the cluster to optimize availability by distributing replicas across multiple availability zones within a single data center.

    This session will present recent advancements...

  33. Jose Daniel Gaytan Villarreal (Carnegie-Mellon University (US))
    25/05/2026, 14:21
    Track 3 - Offline data processing
    Oral Presentation

    We present the first application of a one-pass, machine learning based imaging calorimeter reconstruction approach to the latest full CMS High Granularity Calorimeter (HGCAL) simulation. The model is a Graph Neural Network that directly processes the hits in the HGCAL, one of the most important upgrades of the Compact Muon Solenoid detector in preparation for the High-Luminosity phase of the...

  34. Francesco Giacomini (INFN CNAF)
    25/05/2026, 14:21
    Track 4 - Distributed computing
    Oral Presentation

    INDIGO IAM is an Identity and Access Management service providing authentication and authorization across distributed research infrastructures. It is a Spring Boot application relying on OAuth/OpenID Connect (OIDC) technologies and is currently evolving to meet increasingly stringent requirements in terms of security, interoperability and observability.
    A key aspect is the progressive...

  35. Cedric Verstege (KIT - Karlsruhe Institute of Technology (DE))
    25/05/2026, 14:21
    Track 9 - Analysis software and workflows
    Oral Presentation

    Efficient and reproducible analysis workflows are vital for large-scale Monte Carlo (MC) event studies in high-energy physics (HEP). We present MC-Run, a lightweight and scalable open-source tool designed to orchestrate complete MC production and analysis chains, from event generation to Rivet analyses and subsequent post-processing such as combination procedures and plotting. The framework is...

  36. Sylvain Caillou (Centre National de la Recherche Scientifique (FR))
    25/05/2026, 14:21
    Track 6 - Software environment and maintainability
    Oral Presentation

    In recent years, numerous Machine Learning–based algorithms have been developed within particle physics experiments to accelerate the reconstruction of complex detector objects, notably at CERN in the context of the HL-LHC and, for example, within the Belle II experiment. A significant fraction of these approaches relies on Deep Geometric Learning, and in particular on Graph Neural Networks...

  37. Pablo Saiz (CERN)
    25/05/2026, 14:21
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    The CERN Open Data portal provides open access to high-energy physics data collected by CERN experiments for research, education, and outreach. At present, more than 5 PB of data are accessible through it. To ensure the long-term preservation and sustainable management of large datasets, a cold storage system has been introduced. Cold storage enables the archiving of data that is rarely...

  38. Giovanni Guerrieri (CERN)
    25/05/2026, 14:39
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    The ATLAS Collaboration has for the first time released a large volume of event generator output in HepMC format for the benefit of the research community, allowing theorists and other experimentalists to profit from the efforts and resources of the collaboration. This release complements the existing proton and heavy ion collision data and MC simulation that were released for research use in...

  39. Dimitrios Danopoulos (CERN)
    25/05/2026, 14:39
    Track 2 - Online and real-time computing
    Oral Presentation

    Real-time inference with sub-microsecond latency is critical for the Level-1 trigger systems at the High-Luminosity LHC. We present an end-to-end, open-source framework that spans model optimization, quantization, and FPGA deployment, enabling the translation of high-level neural network or generic dataflow models into resource-efficient FPGA implementations.

    Within the workflow, we...

  40. Ozgur Ozan Kilic (Brookhaven National Laboratory), Tianle Wang (Brookhaven National Lab)
    25/05/2026, 14:39
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    Scientific workflows are increasingly important in driving scientific discoveries, and future supercomputers must be designed and tuned to execute them efficiently. However, evaluating the performance of emerging computing systems using production-scale workflows is costly and energy-inefficient, especially at extreme scales. Moreover, application-level mini-apps do not capture workflows’...

  41. Paolo Mastrandrea (Universita & INFN Pisa (IT))
    25/05/2026, 14:39
    Track 9 - Analysis software and workflows
    Oral Presentation

    The software toolbox used for big data analysis has been changing rapidly in recent years. The adoption of software design approaches able to exploit new hardware architectures and increase code expressiveness plays a pivotal role in boosting both the development and the performance of sustainable data analysis.

    The scientific collaborations in the field of High Energy Physics (e.g. the LHC...

  42. Dr Guang Zhao (Institute of High Energy Physics (CAS))
    25/05/2026, 14:39
    Track 3 - Offline data processing
    Oral Presentation

    Particle identification (PID) is essential for future particle physics experiments such as the Circular Electron-Positron Collider and the Future Circular Collider. A high-granularity Time Projection Chamber (TPC) not only provides precise tracking but also enables dN/dx measurements for PID. The dN/dx method estimates the number of primary ionization electrons, offering significant...

  43. Chaoqi Guo
    25/05/2026, 14:39
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    Due to procurement at different stages, the computing infrastructure at the IHEP site is highly heterogeneous: the cluster contains multiple node models with varying capabilities, and the performance gap between nodes can be substantial. Traditional scheduling policies do not tightly couple hardware performance characteristics with job behavioral characteristics, which can lead to suboptimal...

  44. Philippe Canal (Fermi National Accelerator Lab. (US))
    25/05/2026, 14:39
    Track 6 - Software environment and maintainability
    Oral Presentation

    In 2023, DUNE began re-evaluating the requirements of its data-processing framework, which led to commissioning a new design that would better fit neutrino physics than the existing reconstruction frameworks designed for collider physics. Due to the radical changes expected, significant multi-institutional effort has been directed toward the creation of the Phlex framework. In addition, the...

  45. Alice-Florenta Suiu (National University of Science and Technology POLITEHNICA Bucharest (RO))
    25/05/2026, 14:39
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    The ALICE detector at the CERN LHC generates petabyte-scale raw datasets during heavy-ion collision runs, which must undergo a multi-stage offline reconstruction cycle. EOSALICEO2 serves as the primary high-performance disk buffer for ALICE operations, both during data taking and data processing, providing the sustained throughput necessary for large-scale parallel reconstruction workflows....

  46. Florian Ernst
    25/05/2026, 14:39
    Track 5 - Event generation and simulation
    Oral Presentation

    Accurate modelling of electromagnetic and hadronic showers is one of the most expensive components of the ATLAS detector simulation. To reduce CPU usage for Run 3, the collaboration introduced AtlFast3, a fast simulation tool which combines classical histogram-based parameterisations with GAN-based calorimeter models.

    Following Run 3, a new optimisation of the voxelisation scheme used for...

  47. Dr Marcus Hardt (KIT)
    25/05/2026, 14:39
    Track 4 - Distributed computing
    Oral Presentation

    Traditional SSH key-based authentication presents significant scalability and security challenges in modern federated research environments, particularly regarding key distribution, lifecycle management, and access revocation. This paper presents ssh-oidc, a novel approach that integrates OpenID Connect (OIDC) authentication with SSH certificate-based access control for scientific...

  48. Mr Tom Dack (STFC UKRI)
    25/05/2026, 14:57
    Track 4 - Distributed computing
    Oral Presentation

    The migration away from using X.509 towards token-based authentication within the Worldwide LHC Computing Grid (WLCG) infrastructure has required many redesigns of the various workflows, ranging from data management through to job submission, and various activities in between. To compound the complexity of this transition, different user groups within WLCG have adopted different token use...

  49. Abdelrahman Asem Elabd (University of Washington (US))
    25/05/2026, 14:57
    Track 5 - Event generation and simulation
    Oral Presentation

    Detector simulation and reconstruction are significant computational bottlenecks in particle physics. A state-of-the-art GenAI-based paradigm, Particle-flow Neural Assisted Simulations (PARNASSUS), has shown great promise for fast simulation in the context of CMS Open Data. Unlike conventional fast simulation models that target only simulation, PARNASSUS is an end-to-end approach that goes...

  50. Dimitrios Danopoulos (CERN)
    25/05/2026, 14:57
    Track 2 - Online and real-time computing
    Oral Presentation

    Modern particle-physics experiments increasingly rely on machine learning (ML) to perform real-time data reduction under the extreme conditions of the High-Luminosity LHC (HL-LHC). Hardware-trigger inference must satisfy microsecond-level latency, deterministic execution, and tight on-chip memory constraints. FPGA-based deployments can meet these requirements for small, highly parallelized...

  51. CMS Collaboration
    25/05/2026, 14:57
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The HPC systems integration programme of CMS is in continuous evolution, and the experience gained in the last few years has resulted in a toolkit of technical solutions that ease the process of incorporating the resources provided by an HPC center into a thoroughly distributed computing system. However, such a process still represents a real barrier to effectively benefiting from the...

  52. Dr Andrea Sciabà (CERN)
    25/05/2026, 14:57
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The world of data center technology is experiencing rapid and significant changes, due to an ever-increasing demand for hardware in the commercial AI sector, with profound implications for the HEP community. More than ever, the road to the HL-LHC requires the experiments to develop and implement radical changes in how they exploit computing and storage resources, to cope with much less favorable...

  53. Pawel Kopciewicz (CERN)
    25/05/2026, 14:57
    Track 6 - Software environment and maintainability
    Oral Presentation

    This talk presents the development of an agentic chatbot for the LHCb experiment, a project realized in cooperation with ItGPT, the AI Chatbot collaboration at CERN. The assistant is intended to support learning, operations, software development, and data analysis tasks.
    The LHCb knowledge base is structured in three access tiers: public, CERN-shared, and internal. The internal knowledge...

  54. Piet Nogga (University of Bonn (DE))
    25/05/2026, 14:57
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    The LHCb collaboration is very excited to announce the official public release of the LHCb Ntupling Service: an application for on-demand production and publishing of custom LHCb open data, providing users access to both Run 1 and, for the first time, Run 2 pp data collected by the LHCb experiment, amounting to roughly 7 fb−1. A key feature of this implementation is that no knowledge of the...

  55. Ankush Reddy Kanuganti (Brookhaven National Laboratory)
    25/05/2026, 14:57
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    For 25 years, the STAR experiment at the Relativistic Heavy Ion Collider (RHIC) has accumulated a significant archive of metadata, supported by extensive web-based tools. As the collaboration transitions into a "long-term preservation phase," a key priority is ensuring sustained access to these critical web interfaces in a self-contained and maintenance-free format. Preserving essential...

  56. Torri Jeske
    25/05/2026, 14:57
    Track 9 - Analysis software and workflows
    Oral Presentation

    Machine learning (ML) has proven to be incredibly useful in science and engineering; however, there exists a significant overhead for the deployment and maintenance of ML models in real-time operation. This is due to the many different custom interfaces each complex facility may have, the conversions required between non-standard data formats, and the ML infrastructure required for continuous adaptation...

  57. Matthieu Martin Melennec (Centre National de la Recherche Scientifique (FR))
    25/05/2026, 14:57
    Track 3 - Offline data processing
    Oral Presentation

    One of the major difficulties of particle reconstruction in calorimeters is the case of overlapping objects in the detector. This problem will become particularly concerning at the High-Luminosity LHC, where the increased luminosity will cause high levels of pile-up. High-granularity calorimeters, such as the future HGCal in the CMS endcap, allow us to perform Particle Flow (PF) reconstruction...

  58. Hugo Gonzalez Labrador (CERN)
    25/05/2026, 16:15
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    In this contribution, we present a new Rucio-based service designed specifically to simplify data management for the Small and Medium experiments at CERN.

    Rucio has become the de-facto data management solution for major experiments in high-energy physics and related scientific domains such as astrophysics, providing a scalable, policy-driven framework for distributed data placement,...

  59. Paloma Laguarta González (University of Barcelona (ES))
    25/05/2026, 16:15
    Track 2 - Online and real-time computing
    Oral Presentation

    The LHCb experiment operates a fully software-based trigger that must reduce the 40 MHz collision rate to an output bandwidth of around 10 GB/s, making real-time event selection a central computing challenge. Current selections in the second-level trigger (HLT2) are largely based on hand-crafted cuts, which can be difficult to optimise in high-dimensional spaces and may lack robustness against...

    Go to contribution page
  60. Eric Lancon (Brookhaven National Laboratory (US))
    25/05/2026, 16:15
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    The RHIC experiments at Brookhaven National Laboratory have developed a comprehensive Data and Analysis Preservation (DAP) plan, covering PHENIX, STAR, and sPHENIX. This multi-faceted effort addresses the critical challenge of ensuring long-term accessibility of large volumes of nuclear physics data and reproducibility of analyses developed over 25 years of the RHIC program as the community...

    Go to contribution page
  61. Diogo Castro (CERN), Eduardo Rodrigues (University of Liverpool (GB))
    25/05/2026, 16:15
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The LHCb collaboration relies on powerful GPU and CPU clusters for real-time data processing, but these resources can be idle outside data-taking periods. While the trigger CPU farm has already been used for offline processing, specifically for Monte Carlo production, no efforts have been made to repurpose these resources for physics analysis, including ML training and inference.

    Through a...

    Go to contribution page
  62. Matteo Rama (INFN Pisa (IT))
    25/05/2026, 16:15
    Track 5 - Event generation and simulation
    Oral Presentation

    About 90% of the distributed computing resources available to the LHCb experiment are used for physics event simulation, and half of the corresponding CPU time is spent on the Geant4-based simulation of the calorimetric system.
    This talk presents a hybrid fast-simulation approach, implemented in the LHCb Gauss Simulation Framework, that combines the established hit-library technique with...

    Go to contribution page
  63. Jeff Templon
    25/05/2026, 16:15
    Track 4 - Distributed computing
    Oral Presentation

    This effort revisits the issue of scheduling multicore workloads on shared multipurpose, multi-user clusters. This issue was extensively studied and reported on at CHEP 2015. Since then, both the cluster-management technology and the typical grid-cluster workloads have evolved, with consequences for scheduling approaches.
    The relevant developments will be discussed, and arguments made that...

    Go to contribution page
  64. Lukas Breitwieser (CERN)
    25/05/2026, 16:15
    Track 9 - Analysis software and workflows
    Oral Presentation

    The hardware landscape in today's data centers is rapidly evolving, with access to GPUs becoming the standard rather than the exception. Currently, physics data analysis using RDataFrame is still limited to execution on multi-core CPUs and distributed systems.

    To reduce the time to results and enhance energy efficiency, we are investigating the feasibility of accelerating physics analysis...

    Go to contribution page
  65. Pol Muñoz Pastor (La Salle, Ramon Llull University (ES))
    25/05/2026, 16:15
    Track 6 - Software environment and maintainability
    Oral Presentation

    For over 20 years, the Gaudi framework has been used by major HEP experiments, including the LHCb and ATLAS experiments at the Large Hadron Collider (LHC), but also in Future Circular Collider (FCC) studies. Testing mechanisms have been present almost from the beginning of the framework, but the number of applications and the corresponding amount of code to validate have increased...

    Go to contribution page
  66. Aashay Arora (Univ. of California San Diego (US))
    25/05/2026, 16:15
    Track 3 - Offline data processing
    Oral Presentation

    High-pileup conditions in CMS during the HL-LHC era make charged-particle tracking increasingly challenging as detector occupancy and combinatorics grow. We present a hybrid approach that exploits Line Segment Tracking (LST) objects rather than individual hits to enable the first CMS ML-based track reconstruction algorithm. The LST segments are built according to geometry- and physics-driven...

    Go to contribution page
  67. Emanuele Simili
    25/05/2026, 16:15
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    We present a pragmatic study of energy-management strategies in a WLCG Tier-2 environment. Building on prior node-level benchmarking (HS23/Watt) and IPMI-based telemetry, we deployed coordinated CPU frequency modulation across the few hundred physical servers at ScotGrid Glasgow and measured cluster-level effects under controlled operating conditions.
    Scaling CPU frequency to a mid-range...

    Go to contribution page
  68. Xiangyang Ju (Lawrence Berkeley National Lab. (US))
    25/05/2026, 16:33
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    Machine learning (ML) models are increasingly central to High Energy Physics (HEP) workflows, spanning simulation, reconstruction, and analysis. In parallel, large language models (LLMs) are being adopted for documentation, software development, and workflow orchestration. While training typically relies on institution-specific resources, production deployment of these models poses a growing...

    Go to contribution page
  69. Ianna Osborne (Princeton University)
    25/05/2026, 16:33
    Track 9 - Analysis software and workflows
    Oral Presentation

    The computational demands of the High-Luminosity LHC (HL-LHC) necessitate a transition toward heterogeneous computing environments. While the Scikit-HEP ecosystem has historically leveraged NVIDIA GPUs through CUDA, the increasing deployment of AMD-based supercomputers requires a vendor-neutral approach to performance portability.

    This contribution details the design and implementation of...

    Go to contribution page
  70. Eric Bonfillou (CERN), Markus Schulz (CERN), Wayne Salter (CERN)
    25/05/2026, 16:33
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    Environmental sustainability of computing has garnered public attention, and CERN IT is taking an active role in minimising its environmental impact. This contribution will describe how CERN IT assesses its carbon footprint and reduces its impact through improvements to the infrastructure, conscious purchasing, and lifecycle management. It will also cover the impact of the improvement of the...

    Go to contribution page
  71. Davide Fuligno (University of Pisa and INFN Trieste (IT))
    25/05/2026, 16:33
    Track 5 - Event generation and simulation
    Oral Presentation

    End-to-End Fast Simulation of the ALICE Zero Degree Calorimeter using Generative Models

    On behalf of the ALICE Collaboration

    The ALICE experiment at the LHC faces unprecedented computing challenges in Run 3 and 4, necessitating innovative solutions to cope with the increased data-taking luminosity and the continuous...

    Go to contribution page
  72. Carlo Varni (AGH University of Krakow (PL)), Krzysztof Cieśla (AGH University of Krakow (PL)), Marcin Wolter (Polish Academy of Sciences (PL)), Tomasz Bold (AGH University of Krakow (PL))
    25/05/2026, 16:33
    Track 3 - Offline data processing
    Oral Presentation

    Reconstructing charged-particle tracks in silicon detectors is one of the most computationally demanding tasks in high-energy physics. When applied in online event selection systems, additional latency constraints make the problem even more challenging. Within the reconstruction chain, the efficient and high-purity formation of track candidates plays a critical role in the overall...

    Go to contribution page
  73. Tai Sakuma (Princeton University)
    25/05/2026, 16:33
    Track 6 - Software environment and maintainability
    Oral Presentation

    Hypothesis-awkward is a collection of Hypothesis strategies for Awkward Array. Awkward Array can represent a wide variety of layouts of nested, variable-length, mixed-type data that are common in HEP and other fields. Many tools that process Awkward Array are widely used and actively developed. Unit test cases of these tools often explicitly list many input samples in an attempt to cover...

    Go to contribution page
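The idea behind such Hypothesis strategies is to generate many nested, variable-length inputs and check properties over all of them, instead of hand-listing test samples. A minimal stdlib-only sketch of that approach (not the hypothesis-awkward API; `random_nested`, `flatten`, and `count_leaves` are illustrative helpers, and plain nested lists stand in for Awkward layouts):

```python
import random

def random_nested(depth=2, max_len=4, rng=random):
    # Generate a random nested, variable-length list of ints,
    # mimicking the ragged layouts Awkward Array represents.
    if depth == 0:
        return rng.randint(-100, 100)
    return [random_nested(depth - 1, max_len, rng)
            for _ in range(rng.randint(0, max_len))]

def flatten(x):
    # Fully flatten a nested list into a flat list of leaves.
    if isinstance(x, list):
        return [v for item in x for v in flatten(item)]
    return [x]

def count_leaves(x):
    # Count leaf elements without flattening.
    if isinstance(x, list):
        return sum(count_leaves(item) for item in x)
    return 1

# Property: flattening never loses or invents elements, checked over
# many generated ragged inputs rather than a hand-written sample list.
rng = random.Random(42)
for _ in range(200):
    sample = random_nested(depth=3, rng=rng)
    assert len(flatten(sample)) == count_leaves(sample)
```

A dedicated strategy library packages exactly this kind of input generation so each tool's test suite does not have to reinvent it.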
  74. Dijana Vrbanec
    25/05/2026, 16:33
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    The interTwin project, funded by Horizon Europe, developed a Digital Twin Engine (DTE), a platform for the development and running of Digital Twins across multiple scientific domains. A central component of the DTE is the interTwin Data Lake, a federated storage layer that integrates HPC, HTC, and cloud-based datasets and provides unified access while preserving site-local policies and...

    Go to contribution page
  75. Maria-Elena Mihailescu (National University of Science and Technology POLITEHNICA Bucharest (RO))
    25/05/2026, 16:33
    Track 4 - Distributed computing
    Oral Presentation

    Authors: Maria-Elena Mihăilescu (National University of Science and Technology Politehnica Bucharest, maria.mihailescu@upb.ro), Costin Grigoraș (CERN, costin.grigoras@cern.ch), Latchezar Betev (CERN, latchezar.betev@cern.ch), Mihai Carabaș (National University of Science and Technology Politehnica Bucharest, mihai.carabas@upb.ro)
    on behalf of the ALICE Collaboration

    JAliEn functions as...

    Go to contribution page
  76. Christopher Edward Brown (CERN)
    25/05/2026, 16:33
    Track 2 - Online and real-time computing
    Oral Presentation

    Machine Learning (ML) algorithms are becoming a key tool in fast decision making in high energy physics experiments from event-level classifiers in FPGA-based triggers down to cluster identification on detector module ASICs. Operating so close to raw detector data exposes these models to evolving experimental conditions that can introduce distribution shifts and degrade their performance....

    Go to contribution page
  77. Dr Mindaugas Sarpis (Vilnius University (LT))
    25/05/2026, 16:33
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    Reproducibility has become a cornerstone of modern particle physics analysis, ensuring that scientific results can be validated, extended, and reinterpreted by the broader community. Building on previous work on analysis modularization and workflow management, this contribution presents practical experiences in achieving full reproducibility for physics analyses at the LHCb experiment. We...

    Go to contribution page
  78. Esteban Rangel
    25/05/2026, 16:51
    Track 6 - Software environment and maintainability
    Oral Presentation

    Reliable floating-point behavior is increasingly difficult to ensure as HEP applications adopt heterogeneous architectures, multiple GPU vendors, and aggressive compiler optimizations such as fast-math. We introduce a non-intrusive workflow that enables detailed floating-point error analysis of GPU kernels without modifying application code. The method records SYCL kernel executions on Intel...

    Go to contribution page
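The kind of floating-point error this analysis surfaces can be illustrated without any GPU: round every intermediate to float32, compare against a float64 reference, and see how algorithm choice changes the accumulated error. A self-contained sketch (illustrative only, not the contribution's SYCL-based workflow; `to_f32` emulates binary32 rounding via `struct`):

```python
import math
import struct

def to_f32(x):
    # Round a Python float (binary64) to the nearest binary32 value.
    return struct.unpack('f', struct.pack('f', x))[0]

# Naive vs compensated (Kahan) summation at float32 precision,
# compared against a high-precision float64 reference -- the kind of
# per-kernel error report a floating-point analysis workflow produces.
values = [1e-4] * 100000
ref = math.fsum(values)            # exactly rounded float64 reference

naive = 0.0
for v in values:
    naive = to_f32(naive + to_f32(v))

kahan, c = 0.0, 0.0
for v in values:
    y = to_f32(to_f32(v) - c)      # compensated addition: carry the
    t = to_f32(kahan + y)          # rounding error in c and subtract
    c = to_f32(to_f32(t - kahan) - y)
    kahan = t

print(abs(naive - ref) / ref > abs(kahan - ref) / ref)  # True: Kahan is closer
```

Fast-math style compiler optimizations can silently rewrite exactly such compensation patterns away, which is one reason non-intrusive, post-hoc error analysis is valuable.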
  79. James Collinson (SKAO)
    25/05/2026, 16:51
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    The Square Kilometre Array (SKA) telescopes, currently under construction in South Africa and Australia, are due to enter Science Verification at the end of 2026. From this point, these interferometers will generate an increasing volume of data, with the science data processors eventually producing of order 1 PB per day of science-ready data products. Managing this archive across the globally...

    Go to contribution page
  80. Mattias Wadenstein (University of Umeå (SE))
    25/05/2026, 16:51
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    Reusing heat from computers has the potential to reduce the environmental impact of scientific computing in cold places with low-carbon electricity.

    In previous work, we performed a lifecycle analysis of carbon emissions from scientific computing [1]; in this work we added a simplistic model of how heat reuse in northern Sweden could affect the total carbon footprint of WLCG...

    Go to contribution page
  81. CMS Collaboration
    25/05/2026, 16:51
    Track 4 - Distributed computing
    Oral Presentation

    The resource landscape available to LHC experiments is evolving, driven by industry trends and funding agencies' policies, from traditional WLCG sites dominated by x86 CPU resources towards larger consolidated facilities, with a growing fraction of supercomputing centers and a rising degree of hardware heterogeneity. The CMS experiment, which has already demonstrated substantial throughput...

    Go to contribution page
  82. Marilena Bandieramonte (University of Pittsburgh (US))
    25/05/2026, 16:51
    Track 3 - Offline data processing
    Oral Presentation

    In response to the rising computational and storage demands of the High-Luminosity Large Hadron Collider (HL-LHC), efforts are underway to boost the processing efficiency of ATLAS Inner Detector (ID) event reconstruction. Our strategy to reduce the computational demands employs a Track-Overlay approach, which uses pre-reconstructed pile-up tracks (from separate minimum-bias simulations) and...

    Go to contribution page
  83. Mingrui Zhao (Peking University (CN))
    25/05/2026, 16:51
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    Reproducibility and transparency are increasingly critical in high-energy physics, where analyses rely on complex, evolving workflows and heterogeneous software environments. While existing initiatives such as the CERN Analysis Preservation portal and REANA provide essential infrastructure, the day-to-day management and long-term maintainability of individual analyses remain fragmented and...

    Go to contribution page
  84. Tong Liu
    25/05/2026, 16:51
    Track 5 - Event generation and simulation
    Oral Presentation

    The detailed simulation of electromagnetic calorimeters (EMC) remains computationally intensive due to the simulation of millions of secondary particles.
    Machine learning offers a promising alternative by bypassing explicit shower simulation, though its accuracy must be rigorously validated.

    In this work, we develop fast simulation models for the BESIII EMC using generative adversarial...

    Go to contribution page
  85. Ianna Osborne (Princeton University)
    25/05/2026, 16:51
    Track 9 - Analysis software and workflows
    Oral Presentation

    The upcoming high-luminosity era at the LHC (HL-LHC) aims to produce exabyte-scale datasets that will significantly increase opportunities for new physics discoveries at the energy frontier. At the same time, future analyses will be increasingly computationally demanding. Larger datasets, increased analysis complexity, and the widespread adoption of machine learning techniques in HEP will...

    Go to contribution page
  86. Valerii Kholoimov (EPFL - Ecole Polytechnique Federale Lausanne (CH))
    25/05/2026, 16:51
    Track 2 - Online and real-time computing
    Oral Presentation

    The new fully software-based trigger of the LHCb experiment at CERN operates at a 30 MHz data rate, opening a search window into previously unexplored regions of the physics phase space. The BuSca (Buffer Scanner) project at LHCb acquires and analyses data in real time, prior to any trigger decision, extending sensitivity to new particle lifetimes and mass ranges.
    Displaced tracks that...

    Go to contribution page
  87. Dr Lubos Krcal (CERN)
    25/05/2026, 16:51
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The ALICE Event Processing Nodes (EPN) farm is a high-density GPU HPC system designed primarily for real-time reconstruction of 50 kHz Pb-Pb collisions during LHC Run 3. It is the largest computer farm at CERN in terms of compute capacity. Comprising 350 nodes and 2800 GPUs, with a peak performance of ~42 PFLOP/s single precision, the HPC infrastructure has been operated throughout Run 3 by a...

    Go to contribution page
  88. 25/05/2026, 17:09
    Track 2 - Online and real-time computing
    Oral Presentation

    The High-Luminosity LHC will generate unprecedented data rates, pushing real-time trigger systems to their limits. We present a novel approach deploying graph neural networks (GNNs) on FPGAs to achieve fast, sub-microsecond inference for Level-0 muon triggers. Exploiting the sparse, relational structure of detector hits, the method preserves key spatial correlations while enabling...

    Go to contribution page
  89. Tommaso Diotalevi (Universita e INFN, Bologna (IT))
    25/05/2026, 17:09
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    We present the development of a Virtual Research Environment (VRE) for the Einstein Telescope (ET) project, implemented within the Bologna research unit to support collaborative, high-performance, and reproducible research across the ET community. The Einstein Telescope is a next-generation underground gravitational-wave observatory designed to explore the Universe throughout its cosmic...

    Go to contribution page
  90. Nikita Shadskiy (KIT - Karlsruhe Institute of Technology (DE))
    25/05/2026, 17:09
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The upcoming High-Luminosity Large Hadron Collider (HL-LHC) era will present significant computational challenges, demanding a substantial increase in data processing for the WLCG experiments at CERN. To meet these needs the WLCG is exploring strategies for resource optimization. This includes a paradigm shift towards heterogeneous hardware, recognizing that GPUs are superior to CPUs for...

    Go to contribution page
  91. Federica Legger (Universita e INFN Torino (IT))
    25/05/2026, 17:09
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    Large-scale scientific experiments, such as those in gravitational-wave (GW) science, produce extensive datasets that are often stored in isolated data lakes. The second-generation interferometers—LIGO, Virgo, and KAGRA—are part of an international scientific network, the International Gravitational-Wave Observatory Network (IGWN). A similar framework is envisaged for the third-generation...

    Go to contribution page
  92. Dr Brij Kishor Jashal (Rutherford Appleton Laboratory)
    25/05/2026, 17:09
    Track 4 - Distributed computing
    Oral Presentation

    Managing job-slot allocation in a multi-VO environment remains a persistent operational challenge for WLCG sites, particularly when each Virtual Organization (VO) employs distinct workload-management and scheduling behaviors. At the RAL Tier-1 (RAL-LCG2), more than a dozen VOs—including CMS, ATLAS, LHCb, and several smaller communities—compete for heterogeneous resources while relying on...

    Go to contribution page
  93. Cameron Harris
    25/05/2026, 17:09
    Track 9 - Analysis software and workflows
    Oral Presentation

    The FCCee b2Luigi Automated Reconstruction And Event processing (FLARE) package is an open-source, Python-based data-workflow orchestration tool powered by b2luigi. FLARE automates the workflow of Monte Carlo (MC) generators inside the Key4HEP stack, such as Whizard, MadGraph5_aMC@NLO, Pythia8, and Delphes. FLARE also automates the Future Circular Collider (FCC) Physics Analysis software...

    Go to contribution page
  94. Yunhe Yang (Nankai University), Xinyu Zhuang
    25/05/2026, 17:09
    Track 3 - Offline data processing
    Oral Presentation

    We present an end-to-end track reconstruction algorithm based on Graph Neural Networks (GNNs) for a 35-layer multilayer drift chamber (MDC) combined with a 3-layer cylindrical gas electron multiplier (CGEM) in the BESIII experiment at the BEPCII collider. The algorithm directly processes MDC wire measurements and CGEM clusters as input to simultaneously predict the number of track candidates...

    Go to contribution page
  95. Minh-Tuan Pham (University of Wisconsin Madison (US))
    25/05/2026, 17:09
    Track 5 - Event generation and simulation
    Oral Presentation

    Calorimeter simulation is among the most resource-hungry components of modern collider experiments such as ATLAS and CMS, currently accounting for half of the total CPU budgets at the LHC, a share that will only grow in the future High-Luminosity phase. This exploding computing demand and the arrival of sizeable open datasets such as CaloChallenge have spurred the development of numerous...

    Go to contribution page
  96. Alex Owen (Queen Mary University of London), Dr Sudha Ahuja (Queen Mary University of London)
    25/05/2026, 17:09
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    Queen Mary University of London completed a long-planned [1] major refurbishment [2] of its data centre in Autumn 2024. The GridPP Tier-2 cluster is the main tenant of the data centre, which has been upgraded with heat-recovery technology to improve energy efficiency whilst also increasing rack capacity.

    This contribution reports on the operational experience of the facility from initial...

    Go to contribution page
  97. Scott Snyder (Brookhaven National Laboratory (US))
    25/05/2026, 17:09
    Track 6 - Software environment and maintainability
    Oral Presentation

    For the development of its offline C++ software, ATLAS uses a custom static checker. This is implemented as a gcc plugin and is automatically enabled for all gcc compilations by the ATLAS build system. This was an important tool for the multithreaded migration of the ATLAS offline code, where it was used to flag constructs which are legal C++ but not thread-friendly. Besides thread-safety, the...

    Go to contribution page
  98. Alexander Heidelbach
    25/05/2026, 17:27
    Track 9 - Analysis software and workflows
    Oral Presentation

    Workflow Management Systems (WMSs) provide essential infrastructure for organizing arbitrary sequences of tasks in a transparent, maintainable, and reproducible manner. The widely used Python-based WMS luigi enables the construction of complex workflows, offering built-in task dependency resolution, basic workflow visualization, and convenient command-line integration.
    b2luigi is an extension...

    Go to contribution page
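The core service a WMS like luigi provides is dependency resolution: each task declares what it requires, and the scheduler derives a valid execution order in which every task runs exactly once. A toy stdlib sketch of that mechanism (illustrative only, not the luigi or b2luigi API; the task names are hypothetical):

```python
# Each task names its dependencies; resolve() walks the graph
# depth-first so every dependency runs before its dependents,
# and each task is scheduled exactly once.

def resolve(task, requires, done, order):
    if task in done:
        return
    for dep in requires.get(task, []):
        resolve(dep, requires, done, order)
    done.add(task)
    order.append(task)

# A small analysis-style dependency graph (hypothetical task names).
requires = {
    "plot": ["fit"],
    "fit": ["skim", "calib"],
    "skim": ["download"],
    "calib": ["download"],
}

order = []
resolve("plot", requires, set(), order)
print(order)  # ['download', 'skim', 'calib', 'fit', 'plot']
```

Real WMSs add completeness checks on task outputs, retries, and remote submission on top of this ordering logic, which is where extensions like b2luigi come in.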
  99. Qingbao Hu (IHEP)
    25/05/2026, 17:27
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    The High Energy Photon Source (HEPS), located in Beijing, is an advanced public research facility designed to support multidisciplinary scientific innovation and high-technology development. HEPS is scheduled to complete construction and enter operation in 2026. It will deliver synchrotron radiation with high energy, high brilliance, and high coherence, achieving spatial, temporal, and energy...

    Go to contribution page
  100. Paul James Laycock (Universite de Geneve (CH))
    25/05/2026, 17:27
    Track 6 - Software environment and maintainability
    Oral Presentation

    The LIGO, Virgo, and KAGRA gravitational-wave (GW) detectors exchange and analyse data at low latency to identify GW signals and rapidly issue alerts to the astronomy community. This low-latency computing workflow comprises multiple complementary search pipelines that continuously process streaming detector data, followed by an orchestration layer that produces an optimized GW event candidate...

    Go to contribution page
  101. Janusz Malka (European XFEL GmbH)
    25/05/2026, 17:27
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    At photon-science facilities such as the European XFEL, large data volumes are generated at multiple experiment stations and under frequently changing configurations. The experiments that produce these data typically last only a few days and are carried out by external user teams.

    In this environment, effective management of experimental data is essential for delivering timely,...

    Go to contribution page
  102. Sakib Rahman
    25/05/2026, 17:27
    Track 5 - Event generation and simulation
    Oral Presentation

    The ePIC Physics and Detector Simulations leverage the Geant4 and DD4hep software frameworks, which serve as a single source of truth for detector description, ensuring consistent configuration across full (Geant4/DDG4) and accelerated simulation models. As simulation complexity scales, we employed a systematic profiling methodology using the DD4hep plugin mechanism to pinpoint...

    Go to contribution page
  103. Dr Brij Kishor Jashal (Rutherford Appleton Laboratory)
    25/05/2026, 17:27
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The traditional WLCG computing model has been optimised for high-throughput processing of large numbers of small, independent pp-collision event workloads. This CPU-centric paradigm matched naturally with homogeneous multi-core nodes, where resources could be presented as uniform job slots to Grid middleware. As WLCG sites increasingly deploy modern GPUs, and HEP generator, simulation, and...

    Go to contribution page
  104. Benjamin Gutierrez (Argonne National Laboratory), Doug Benjamin (Brookhaven National Laboratory (US)), Douglas Benjamin
    25/05/2026, 17:27
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    Rucio, the scientific data management system developed by ATLAS at CERN, has become widely adopted across high-energy physics experiments for managing distributed datasets at exabyte scale. Traditionally, Rucio relies on the WLCG File Transfer Service (FTS) for data movement between storage elements. We present recent developments enabling Globus—the research cyberinfrastructure platform...

    Go to contribution page
  105. Mehrnoosh Moallemi (Science and Technology Facilities Council STFC (GB))
    25/05/2026, 17:27
    Track 2 - Online and real-time computing
    Oral Presentation

    Anomaly detection at the LHC aims to identify events that deviate from dominant Standard Model (SM) processes while minimizing assumptions inherent to predefined trigger selections, enabling model-agnostic searches for new physics. The CMS experiment employs a two-stage trigger system that reduces the LHC bunch-crossing rate of up to 40 MHz to an output rate of approximately 9 kHz for offline...

    Go to contribution page
  106. Zhaoke Zhang (Institute of High Energy Physics, Chinese Academy of Sciences)
    25/05/2026, 17:27
    Track 3 - Offline data processing
    Oral Presentation

    The COMET experiment is designed to search for charged lepton flavor violation (CLFV) through coherent muon-to-electron conversion, characterized by a 105 MeV electron signal. In Phase I, an all‑stereo‑layer Cylindrical Drift Chamber (CDC) is used as the main tracker for charged‑particle measurement. A key challenge is that all the signal tracks are curled and about one‑third of the tracks in...

    Go to contribution page
  107. Marta Bertran Ferrer (CERN)
    25/05/2026, 17:27
    Track 4 - Distributed computing
    Oral Presentation

    ALICE Grid sites employ heterogeneous resource allocation policies, where each configuration is tailored to the specific conditions of the sites, their user communities, and local scheduling preferences. The design and implementation of JAliEn have been specifically developed to be flexible and adaptable to these varied configurations and execution systems, allowing it to utilize the allocated...

    Go to contribution page
  108. Daniele Spiga, Diego Ciangottini (INFN, Perugia (IT)), Giulio Bianchini (Universita e INFN, Perugia (IT)), Lucio Anderlini (Universita e INFN, Firenze (IT)), Massimo Sgaravatto (Universita e INFN, Padova (IT)), Mauro Gattari (INFN (National Institute for Nuclear Physics)), Mirko Mariotti (Universita e INFN, Perugia (IT)), Rosa Petrini (Universita e INFN, Firenze (IT))
    25/05/2026, 17:45
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The development of ecosystems for high-energy physics analysis is experiencing a strong push towards cloud-native frameworks, especially for the most interactive, plotting-based “last mile” of analysis. Along with the increasing adoption of, and R&D around, ML-based algorithms, this is creating demand for ways to extend a Kubernetes cluster over a range of...

    Go to contribution page
  109. CMS Collaboration
    25/05/2026, 17:45
    Track 4 - Distributed computing
    Oral Presentation

    Efficient use of distributed computing resources is essential for sustaining the growing processing demands of the CMS experiment. Building on our previous work to assess and minimize unused CPU cycles, new advances in scheduling strategies that further improve resource utilization are being developed for the CMS Global Pool.

    The CMS Submission Infrastructure team is deploying enhanced...

    Go to contribution page
  110. Andreea-Irina Hedes (The University of Manchester (GB))
    25/05/2026, 17:45
    Track 2 - Online and real-time computing
    Oral Presentation

    The first level of the LHCb experiment’s trigger system (HLT1) performs real-time reconstruction and selection of events at the LHC bunch crossing rate using GPUs. It must balance the diverse goals of the LHCb physics programme, which spans from kaon physics to the electroweak scale.

    To maximise the physics output across the entirety of LHCb's physics programme, an automated bandwidth...

    Go to contribution page
  111. CMS Collaboration
    25/05/2026, 17:45
    Track 6 - Software environment and maintainability
    Oral Presentation

    A major constraint on CMS production jobs is the amount of memory they require, so CMS needs the ability to monitor and investigate memory usage in its software. General-purpose memory profilers often significantly slow down the monitored application and require substantial additional memory. For many cases, the detailed information about memory allocations and deallocations recorded by a...

    Go to contribution page
  112. Jay Chan (Lawrence Berkeley National Lab. (US))
    25/05/2026, 17:45
    Track 3 - Offline data processing
    Oral Presentation

    Graph Neural Networks (GNNs) are a leading approach for particle track reconstruction, typically following a three-step pipeline: graph construction, edge classification, and graph segmentation. In edge-classification pipelines like ACORN, the segmentation step is often a trade-off between the speed of local algorithms (e.g., Junction Removal) and the accuracy of global algorithms (e.g.,...

    Go to contribution page
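The "local" end of the speed-accuracy trade-off mentioned above is easy to make concrete: once the classifier has kept a set of edges, the cheapest segmentation simply labels connected components of hits, each component becoming a track candidate. A minimal union-find sketch of that step (illustrative only, not the ACORN implementation; hit indices and edges are made up):

```python
# After edge classification, group hits into track candidates by
# taking connected components over the surviving edges.

def find(parent, x):
    # Find the root of x with path halving for near-constant cost.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def connected_components(n_hits, edges):
    parent = list(range(n_hits))
    for a, b in edges:               # union the endpoints of each edge
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[ra] = rb
    groups = {}
    for hit in range(n_hits):        # collect hits by root label
        groups.setdefault(find(parent, hit), []).append(hit)
    return list(groups.values())

# Six hits; the edges kept by the classifier form two track candidates,
# and hit 5 is left as an unattached singleton.
edges = [(0, 1), (1, 2), (3, 4)]
tracks = connected_components(6, edges)
print(sorted(map(sorted, tracks)))  # [[0, 1, 2], [3, 4], [5]]
```

The weakness motivating global alternatives is visible here too: a single misclassified edge between two components would merge two tracks, which is why more accurate (but slower) segmentation algorithms are attractive.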
  113. Ivana Hrivnacova (Université Paris-Saclay (FR))
    25/05/2026, 17:45
    Track 5 - Event generation and simulation
    Oral Presentation

    VecGeom is a modern C++ geometry modeling library specifically designed to accelerate particle detector simulation by leveraging Single Instruction Multiple Data (SIMD) vectorization. It offers optimized geometric primitives, developed in collaboration with the USolids project. Since Geant4 10.5, users can replace native Geant4 geometry primitives with VecGeom solids. This feature has already...

    Go to contribution page
  114. Miguel Villaplana (IFIC - Univ. of Valencia and CSIC (ES))
    25/05/2026, 17:45
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    Monitoring and improving the sustainability of large-scale computing infrastructures has become an increasingly important challenge in High Energy Physics. This work presents the design and implementation of a sustainability-oriented monitoring dashboard for an ATLAS Tier 2 computing centre. The dashboard integrates global site-level metrics and proposes a set of job-level metrics...

    Go to contribution page
  115. Maximilian Horzela (Georg August Universitaet Goettingen (DE))
    25/05/2026, 17:45
    Track 9 - Analysis software and workflows
    Oral Presentation

    Modern high-energy physics (HEP) analyses rely on complex, multi-stage workflows combining heterogeneous software and distributed data. While individual analysis tools are well developed, their orchestration is typically ad hoc, leading to duplicated effort, inconsistent configurations, and limited reproducibility. Existing workflow systems based on static dependency graphs struggle to capture...

    Go to contribution page
  116. James William Walder (Science and Technology Facilities Council STFC (GB))
    25/05/2026, 17:45
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    The SKA Regional Centre Network (SRCNet) is a globally federated infrastructure providing data distribution and science workflows for the Square Kilometre Array (SKA). The v0.1 test campaign delivered the first system-level validation across nine accredited nodes, integrating global services (Rucio, FTS, SKA-IAM, perfSONAR) with site services (storage, compute, science platforms) and executing...

    Go to contribution page
  117. Dr Marcus Ebert (University of Victoria)
    25/05/2026, 17:45
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    BaBar stopped data taking in 2008, but its data is still analyzed by the collaboration. In 2021, a new computing system outside of the SLAC National Accelerator Laboratory was developed; major changes were needed to preserve the collaboration's ability to analyze the data, while the user-facing front ends all needed to stay the same. While the new computing system has worked well since...

    Go to contribution page
  118. Rosie Bolton (SKA Observatory)
    26/05/2026, 09:00
    Track 4 - Distributed computing
    Plenary Presentation

    SRCNet — Vision, Progress, and Cross-Community Computing for the SKA Telescope

    The SKA Regional Centre Network (SRCNet) is a cornerstone of the Square Kilometre Array Observatory’s distributed science computing model, federating regional centres into a coherent global infrastructure providing user access to data, processing, and analysis.

    The SRCNet Project is an international project to...

    Go to contribution page
  119. Leanne Guy
    26/05/2026, 09:30
    Track 1 - Data and metadata organization, management and access
    Plenary Presentation

    The Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST) is set to revolutionize our understanding of the Universe with groundbreaking images and scientific results. In this plenary I will highlight early discoveries and iconic images, focusing on the data management system that enables Rubin science at scale. Rubin has drawn on synergies between the high-energy physics and...

    Go to contribution page
  120. WLCG Technical Coordination Board
    26/05/2026, 10:00
    Track 4 - Distributed computing
    Plenary Presentation

    With the end of Run 3 of the LHC approaching, the Worldwide LHC Computing Grid (WLCG) is entering an important transition toward the HL-LHC era. To meet the substantial increase in data volume, computational requirements, and resource heterogeneity, while preserving reliability, sustainability, and community cohesion, we have launched the development of the WLCG Technical Roadmap 2026-2030....

    Go to contribution page
  121. Robin Hofsaess
    26/05/2026, 11:00
    Track 7 - Computing infrastructure and sustainability
    Plenary Presentation

    The transition to the HL-LHC era brings unprecedented computing demands and a rapidly shifting hardware landscape. Since its successful deployment in 2023, HEPScore has become the standard CPU benchmark for WLCG sites, replacing the legacy HEP-SPEC06. The journey to HEPScore was a major collaborative effort, involving software developers, data analysts, site personnel, and the WLCG Deployment...

    Go to contribution page
  122. Maksim Melnik Storetvedt (Western Norway University of Applied Sciences (NO))
    26/05/2026, 11:30
    Track 4 - Distributed computing
    Plenary Presentation

    For decades, the x86 architecture has been the bedrock of Grid computing. That era of uniformity is over. Driven by the specialized demands of next-generation applications, we have entered a Resource Renaissance - a period defined by the rapid proliferation of ARM and RISC-V CPUs and various types and generations of GPUs across Grid computing centers.
    However, this hardware abundance carries...

    Go to contribution page
  123. Mr Julien Leduc (CERN)
    26/05/2026, 12:00
    Track 1 - Data and metadata organization, management and access
    Plenary Presentation

    During the last year of LHC Run-3, several new records were set by the CERN Tape Archive (CTA) service at WLCG Tier-0: the rate of data archival to tape peaked at over 60 PB/month and the total volume of data grew to more than 1 Exabyte.

    The CTA service was able to scale up to meet these demands thanks to architectural choices made prior to Run-3 as well as responses to specific operational...

    Go to contribution page
  124. Quinn Campagna
    26/05/2026, 13:45
    Track 6 - Software environment and maintainability
    Oral Presentation

    As the Belle II dataset grows towards a high luminosity scenario, the requirements for the distributed computing framework have grown in complexity and scale. To ensure long-term software maintainability, the Belle II Distributed Computing team is implementing a feedback-driven development model. This approach bridges the gap between the end-user experience and system evolution, aiming for...

    Go to contribution page
  125. Stella Felice Schaefer (Hamburg University (DE))
    26/05/2026, 13:45
    Track 2 - Online and real-time computing
    Oral Presentation

    At the Phase-2 Upgrade of the CMS Level-1 Trigger (L1T), particles will be reconstructed by linking charged particle tracks with clusters in the calorimeters and muon tracks from the muon station. The 200 pileup interactions will be mitigated using primary vertex reconstruction for charged particles and a weighting for neutral particles based on the distribution of energy in a small area. Jets...
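    As a rough illustration of the charged-particle side of this pileup mitigation (emphatically not the CMS L1T algorithm: the 1D vertex finder, bin width, and dz cut below are all invented):

```python
# Illustrative sketch (not CMS firmware): find a primary vertex from the
# z positions of charged tracks, then keep only charged candidates
# compatible with it. Bin width and dz cut are arbitrary toy values.

def primary_vertex(charged_z, bin_width=0.5):
    # Crude 1D vertex finder: pick the densest z-bin of charged tracks.
    bins = {}
    for z in charged_z:
        key = round(z / bin_width)
        bins[key] = bins.get(key, 0) + 1
    best = max(bins, key=bins.get)
    return best * bin_width

def pu_filter(charged, pv_z, dz_cut=1.0):
    # Keep charged candidates whose track z is close to the PV.
    return [c for c in charged if abs(c["z"] - pv_z) < dz_cut]

charged = [{"pt": 10.0, "z": 0.1}, {"pt": 5.0, "z": 0.2},
           {"pt": 3.0, "z": 7.5}]             # last one: pileup vertex
pv = primary_vertex([c["z"] for c in charged])
kept = pu_filter(charged, pv)
```

    Neutral particles have no track z, which is why the abstract describes a separate energy-distribution-based weighting for them.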

    Go to contribution page
  126. Lorenzo Valentini (CERN)
    26/05/2026, 13:45
    Track 4 - Distributed computing
    Oral Presentation

    Distributed computing infrastructures that support modern large-scale scientific experiments must remain reliable, scalable, and flexible. HammerCloud (HC) provides an automated framework for continuous testing, benchmarking, and commissioning of services within the Worldwide LHC Computing Grid (WLCG), using realistic full-chain experiment workflows.

    As the technical computing environment...

    Go to contribution page
  127. Samuel Louis Bein (Northeastern University (US))
    26/05/2026, 13:45
    Track 5 - Event generation and simulation
    Oral Presentation

    As the LHC moves into its high-luminosity phase, the CMS experiment must handle increasingly complex data collected at much higher rates. To complement real data, simulated samples must also scale in volume and complexity while meeting the growing demands of the CMS physics program. Increased use of the CMS fast Monte Carlo production framework (FastSim) can help meet these demands,...

    Go to contribution page
  128. Katy Ellis (Science and Technology Facilities Council STFC (GB))
    26/05/2026, 13:45
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    For Phase-II of the Large Hadron Collider program, a dramatic increase in data quantity is expected due to increased pileup, higher experiment logging rates and a larger number of channels in the upgraded detector components. For Run-4, beginning in around 2030, and using the current computing model without software improvements, CMS estimates growth of an order of magnitude in computing...

    Go to contribution page
  129. Nicolas Poffley (CERN)
    26/05/2026, 13:45
    Track 9 - Analysis software and workflows
    Oral Presentation

    Commissioned in 2022, the organised analysis system Hyperloop has been the primary platform for analysis within ALICE. The system was developed to meet the demands of the upgraded ALICE detector for Run 3, where the data-taking rate capability was increased by two orders of magnitude. To support analysis on such large datasets, the ALICE distributed computing infrastructure was revised and...

    Go to contribution page
  130. Antonio Delgado Peris (CERN)
    26/05/2026, 13:45
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The CERN Tier-0, representing around 25% of WLCG’s total CPU capacity, currently handles 125 thousand concurrent jobs. For HL-LHC at full luminosity, we expect this number to increase by a factor between 4 and 7. Therefore, CERN's HTCondor batch system will need to manage a much larger pool of resources and many more computing tasks. This will have an impact on HTCondor's central components,...

    Go to contribution page
  131. Hannes Jakob Hansen, Paulo Guilherme Pinheiro Pereira (Universidade de Sao Paulo (USP) (BR))
    26/05/2026, 13:45
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    Machine learning challenges have proven to be powerful tools for collaboration, benchmarking and algorithmic innovation in scientific communities. Global platforms such as Kaggle enable researchers to publish datasets, submit solutions and compare performance through structured competitions. However, they assume that participants can use public datasets and external computing resources, which...

    Go to contribution page
  132. Jiarui Hu (IHEP)
    26/05/2026, 13:45
    Track 3 - Offline data processing
    Oral Presentation

    X-ray phase contrast imaging based on propagation is a crucial technique for achieving non-destructive detection at micro and nano scales. However, the recovery of phase information from intensity measurements presents a typical ill-posed inverse problem. Traditional iterative algorithms often necessitate multiple distance measurements, which increases both the complexity and time cost of...

    Go to contribution page
  133. David Schultz (University of Wisconsin-Madison)
    26/05/2026, 13:45
    Track 3 - Offline data processing
    Oral Presentation

    Re-processing data with improved detector understanding, new data processing methods, etc. is natural for any particle physics experiment over the course of its life. The IceCube Neutrino Observatory last re-processed its data nearly a decade ago. Now we are processing the data for the third time, which we call Pass3. With this reprocessing, we have recorded three times as much...

    Go to contribution page
  134. Marco Giacalone (CERN)
    26/05/2026, 14:03
    Track 5 - Event generation and simulation
    Oral Presentation

    The simulation of background processes in high-energy physics can be computationally expensive and time-consuming. To provide the most realistic data description at the ALICE experiment using Monte Carlo simulations, we investigated alternative solutions to generate the products of electromagnetic interactions initiated by slow neutrons in the Time Projection Chamber (TPC). Specifically,...

    Go to contribution page
  135. Alessandra Forti (The University of Manchester (GB))
    26/05/2026, 14:03
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    The WLCG Data Challenge 2027 (DC27) represents a critical milestone in preparing our global distributed computing and networking infrastructure for the demands of HL-LHC and next-generation data-intensive experiments. Building on the successes and lessons learned from previous challenges, the DC27 program is driven by a coordinated series of mini-capability and mini-capacity challenges. These...

    Go to contribution page
  136. Prof. Daniel Nieto (IPARCOS-UCM)
    26/05/2026, 14:03
    Track 3 - Offline data processing
    Oral Presentation

    The Cherenkov Telescope Array Observatory (CTAO) represents the next generation of ground-based gamma-ray telescopes, designed to probe the very-high-energy (VHE) sky above 20 GeV with unprecedented sensitivity. With the first Large-Sized Telescope (LST-1) prototype already taking data on La Palma, robust software is required to accurately reconstruct the properties of primary particles (type,...

    Go to contribution page
  137. Ilija Vukotic (University of Chicago (US))
    26/05/2026, 14:03
    Track 6 - Software environment and maintainability
    Oral Presentation

    We present the development of an AI Assistant designed to support ATLAS computing operations and users at the UChicago/MWT2 facilities. A significant portion of effort in distributed computing is spent helping users, debugging systems, optimizing workflows, and maintaining a diverse ecosystem of tools and services. Modern large language models offer a practical opportunity to reduce this...

    Go to contribution page
  138. Fernando Harald Barreiro Megino (University of Texas at Arlington)
    26/05/2026, 14:03
    Track 4 - Distributed computing
    Oral Presentation

    The ATLAS experiment at the CERN Large Hadron Collider relies on a worldwide distributed computing infrastructure to process millions of production and analysis jobs daily across grid, cloud, and HPC resources. The ATLAS Distributed Computing (ADC) system integrates workload, data, and resource management services to ensure efficient use of heterogeneous environments. Within ADC, the PanDA...

    Go to contribution page
  139. Axel Naumann (CERN)
    26/05/2026, 14:03
    Track 3 - Offline data processing
    Oral Presentation

    High Energy Physics uses C++ for performance-critical, large-scale (50 million lines of code) libraries. Python is used for analysis. C++ is complex and getting more so, with industry creating a very competitive market for developers. Python is very slow but very common. Is there any way out? As part of the R&D done in the Next Generation Triggers project we are looking at novel languages that...

    Go to contribution page
  140. Luca Giommi (INFN)
    26/05/2026, 14:03
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The National Institute for Nuclear Physics (INFN) manages the INFN Cloud, a federated cloud platform providing a customizable portfolio of IaaS, PaaS, and SaaS services to meet the needs of the scientific communities it serves. PaaS services are implemented using an Infrastructure as Code approach, employing TOSCA templates, Ansible, Docker, and Helm technologies.
    The federation middleware...

    Go to contribution page
  141. Francesca Lizzi
    26/05/2026, 14:03
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    We summarize five years of experience organizing educational Hackathons within the Italian research landscape of Artificial Intelligence (AI) and High Energy Physics (HEP). These events were part of the INFN AI and ML projects, which aimed to provision GPU and other hardware accelerators via an interactive JupyterLab-based platform providing an easy and highly customizable development...

    Go to contribution page
  142. Enrico Lupi (CERN)
    26/05/2026, 14:03
    Track 2 - Online and real-time computing
    Oral Presentation

    Machine-learning algorithms are becoming central to real-time event selection at the LHC, where future trigger systems must process substantially more complex detector information at fixed, sub-microsecond latencies. These constraints create a growing need for flexible workflows that can map large neural networks onto heterogeneous trigger hardware while preserving strict timing budgets. We...

    Go to contribution page
  143. Dr Rahul Tiwary (Toshiko Yuasa Laboratory (TYL), KEK)
    26/05/2026, 14:03
    Track 9 - Analysis software and workflows
    Oral Presentation

    The Full Event Interpretation (FEI) algorithm is a central component of the Belle II analysis framework, designed for the efficient and flexible reconstruction of exclusive B-meson decays. It performs a hierarchical reconstruction of hadronic and semileptonic final states, using multivariate classification techniques to tag one of the two B mesons produced in electron–positron collisions. The...

    Go to contribution page
  144. Max Hart (University College London (GB))
    26/05/2026, 14:21
    Track 3 - Offline data processing
    Oral Presentation

    Modern collider detector experiments comprise multiple different detector subsystems, each of which requires dedicated reconstruction algorithms. Manually tuning these algorithms so that they work optimally not only in isolation, but also when combined to form a full reconstruction chain, is a time-consuming task that poses technical and organisational challenges. We demonstrate...
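    A toy sketch of the idea of tuning the chain jointly rather than stage by stage, assuming a simple random-search optimizer (the "chain quality" surrogate and its parameters are entirely invented; the contribution's actual method is not described in this excerpt):

```python
# Hypothetical sketch: sample parameters for the whole reconstruction
# chain and score only the final output, so inter-stage interactions
# are captured. The quality function below is an invented surrogate.
import random

random.seed(1)

def chain_quality(clustering_cut, tracking_cut):
    # Invented surrogate: the cross term makes the two stages interact,
    # so the joint optimum differs from tuning each cut in isolation.
    return -((clustering_cut - 0.4) ** 2
             + (tracking_cut - 0.6) ** 2
             + 0.5 * clustering_cut * tracking_cut)

best, best_q = None, float("-inf")
for _ in range(2000):                       # simple random search
    params = (random.uniform(0, 1), random.uniform(0, 1))
    q = chain_quality(*params)
    if q > best_q:
        best, best_q = params, q
```

    Any black-box optimizer (Bayesian optimization, evolutionary search) could replace the random search; the point is that the objective is evaluated on the full chain.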

    Go to contribution page
  145. Alaettin Serhan Mete (Argonne National Laboratory (US))
    26/05/2026, 14:21
    Track 9 - Analysis software and workflows
    Oral Presentation

    ATLAS has developed a ROOT RNTuple prototype within its Athena software, enabling read/write support for event data and in-file metadata. Using this implementation, ATLAS converted the publicly available Open Data, comprising multiple tens of terabytes of 2015–2016 proton–proton collisions and associated Monte Carlo samples, from ROOT TTree to RNTuple in the official DAOD PHYSLITE format. The...

    Go to contribution page
  146. Andreas Joachim Peters (CERN)
    26/05/2026, 14:21
    Track 6 - Software environment and maintainability
    Oral Presentation

    Advances in AI-assisted code generation are changing how complex software systems are designed, built, and improved over time. In storage software development for scientific computing, we explore how AI-based code synthesis and refinement workflows can speed up prototyping, strengthen maintainability, and clearly express architectural intent.

    We present several practical examples developed...

    Go to contribution page
  147. Nicola Pace
    26/05/2026, 14:21
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    Version 4 of the File Transfer Service (FTS4) is currently under active development within the CERN IT Storage group. This project aims to address issues which have prevented version 3 from being proposed as a candidate for automating bulk file-transfers during LHC Physics Run 4.

    FTS4 has taken an incremental rather than big-bang approach to its development. FTS4 started with the FTS3...

    Go to contribution page
  148. Lucas Astrand
    26/05/2026, 14:21
    Track 3 - Offline data processing
    Oral Presentation

    Machine-learning techniques are becoming an increasingly important part of the design and physics reach of the proposed HIBEAM/NNBAR program at the European Spallation Source. Building on our previously published ML studies for particle identification and event reconstruction, we are developing a broader suite of ML tools to support detector optimization, vertex and event reconstruction, and...

    Go to contribution page
  149. Liv Helen Vage (Princeton University (US))
    26/05/2026, 14:21
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    The rapid growth of machine learning has left an overwhelming abundance of teaching resources in its wake that often makes it hard for students to know where to start, how to progress, or what sources to trust. Simultaneously, LLM-based coding assistants enable students to produce working models almost immediately — often before they understand the underlying principles or common pitfalls....

    Go to contribution page
  150. Ronald Caravaca-Mora (Consejo Nacional de Rectores (CONARE) (CR)/Universidad de Costa Rica (UCR) (CR)), Cilicia Uzziel Perez (La Salle, Ramon Llull University (ES))
    26/05/2026, 14:21
    Track 2 - Online and real-time computing
    Oral Presentation

    Graph-based reconstruction methods are well-suited to the sparse and irregular geometry of modern calorimeters, but their deployment often depends on achieving low and predictable inference latency across heterogeneous computing environments. We evaluate GarNet, a lightweight Graph Neural Network (GNN) for calorimeter energy reconstruction, focusing on its cross-backend performance using...
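    A minimal sketch of the kind of cross-backend latency comparison described here, with plain Python callables standing in for the real GarNet inference backends (warmup count, repeat count, and the percentile choice are arbitrary assumptions):

```python
# Hedged sketch of a latency harness: time the same inference function
# on several "backends" (toy stand-ins here) and report median and tail
# latency, which is what matters for predictable real-time inference.
import time, statistics

def measure(fn, x, warmup=3, repeats=20):
    for _ in range(warmup):          # exclude one-off setup costs
        fn(x)
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(x)
        samples.append(time.perf_counter() - t0)
    samples.sort()
    return {"median": statistics.median(samples),
            "p95": samples[int(0.95 * (len(samples) - 1))]}

backends = {"baseline": lambda x: sum(v * v for v in x),
            "fused":    lambda x: sum(map(lambda v: v * v, x))}
report = {name: measure(fn, list(range(1000)))
          for name, fn in backends.items()}
```

    Reporting a tail percentile alongside the median is the design choice worth noting: a backend with a good median but a long tail is unusable under a fixed latency budget.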

    Go to contribution page
  151. Dr Jonathan Woithe (Adelaide University (AU))
    26/05/2026, 14:21
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    AU-Melbourne is the first grid computing site to be implemented entirely in the cloud. Virtual machines are managed with OpenStack and Cloud Scheduler v2 (CSv2) while an S3 object store functions as the storage backend. The site has been in operation for over 12 months, providing Compute Element (CE) and Storage Element (SE) resources for the ATLAS and Belle II experiments. ...

    Go to contribution page
  152. Prof. Matteo Franchini (University of Bologna and INFN (IT))
    26/05/2026, 14:21
    Track 5 - Event generation and simulation
    Oral Presentation

    The increasing demands on simulation statistics for HL-LHC analyses challenge the scalability of traditional calorimeter simulation across all LHC collaborations. While machine learning based fast simulation techniques have demonstrated strong performance, future collider experiments will require generative models that are not only accurate and fast, but also scalable and interpretable in...

    Go to contribution page
  153. Sakib Rahman
    26/05/2026, 14:21
    Track 4 - Distributed computing
    Oral Presentation

    The ePIC experiment at the upcoming Electron-Ion Collider (EIC) continues to expand its simulation production capabilities on the Open Science Grid (OSG) infrastructure. We report on three significant developments since our previous work: the integration of background processes into simulation production, comprehensive testing of the PanDA workload management system, and progress in Rucio...

    Go to contribution page
  154. Aashay Arora (Univ. of California San Diego (US))
    26/05/2026, 14:39
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    Given the increased amount of data expected during the HL-LHC era and the escalation of data transfers this implies, it becomes of paramount importance to have control over the available network bandwidth and the ability to allocate it to high-priority and time-sensitive data flows.

    The Rucio/SENSE integration project intends to provide Rucio with Software Defined Networking...

    Go to contribution page
  155. Khawla Jaffel (National Institute of Chemical Physics and Biophysics (EE))
    26/05/2026, 14:39
    Track 9 - Analysis software and workflows
    Oral Presentation

    One of the main challenges currently facing high energy particle physicists analyzing data from the Large Hadron Collider (LHC) at CERN is the unprecedented volume of both real data and simulated data that must be processed. This challenge is expected to intensify as the LHC enters its high luminosity phase, during which it is projected to deliver up to ten times more data than before. At the...

    Go to contribution page
  156. Richa Sharma (University of Puerto Rico (US))
    26/05/2026, 14:39
    Track 3 - Offline data processing
    Oral Presentation

    The CMS Pixel Detector in Run 3, with about 1400 silicon modules, is a central part of the Tracker, providing precise tracking and vertex reconstruction. Ensuring high quality data requires continuous monitoring, as modules can degrade or suffer operational issues. Traditionally, experts relied on a GUI that displayed histograms integrated over entire runs, making it difficult to spot...

    Go to contribution page
  157. Alexandre Franck Boyer (CERN)
    26/05/2026, 14:39
    Track 4 - Distributed computing
    Oral Presentation

    DiracX is the next incarnation of DIRAC. This is a modern, cloud‑native platform for managing distributed computing across multiple research infrastructures for one or more virtual organizations. Leveraging two decades of DIRAC experience, DiracX delivers a faster, more capable, and user‑friendly environment for scientists, administrators, and developers alike.

    In this contribution we build...

    Go to contribution page
  158. Dimitrios Danopoulos (CERN)
    26/05/2026, 14:39
    Track 2 - Online and real-time computing
    Oral Presentation

    We present the first implementation of a Continuous Normalizing Flow (CNF) model for unsupervised anomaly detection within the realistic, high-rate environment of the Large Hadron Collider's L1 trigger systems. While CNFs typically define an anomaly score via a probabilistic likelihood, calculating this score requires solving an Ordinary Differential Equation, a procedure too complex for FPGA...
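    The likelihood computation the abstract refers to can be illustrated with a toy one-dimensional continuous normalizing flow (this is not the contribution's FPGA model; the linear vector field, step count, and Euler integrator are all simplifying assumptions): the state is evolved by an ODE while the divergence of the vector field is accumulated to give the log-determinant term of the likelihood.

```python
# Toy illustration: for a CNF with vector field f, the log-likelihood
# correction is the time integral of div f along the trajectory. Here
# f(x) = A*x, so div f = A everywhere and the analytic answer is known.
import math

A = 0.7  # invented linear field dx/dt = A*x

def cnf_logdet(x0, t_end=1.0, steps=1000):
    """Euler-integrate the state and the log-det (= integral of div f)."""
    x, logdet, dt = x0, 0.0, t_end / steps
    for _ in range(steps):
        x += A * x * dt          # state update
        logdet += A * dt         # divergence accumulation
    return x, logdet

x_t, logdet = cnf_logdet(1.0)
# Analytic check: x(T) = x0 * exp(A*T) and logdet = A*T.
```

    Solving such an ODE per candidate event is exactly the cost the abstract flags as prohibitive for FPGA deployment at fixed sub-microsecond latency.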

    Go to contribution page
  159. David Giesegh (Belle II Experiment)
    26/05/2026, 14:39
    Track 5 - Event generation and simulation
    Oral Presentation

    One of the major goals of the Belle II Experiment is the search for rare decay processes, which manifest as tiny signals over large background contributions. Measuring such delicate signals with the highest possible precision requires not only large datasets from the actual experiment, but typically even larger simulated datasets for the development of such analyses.

    Since running the...

    Go to contribution page
  160. Andrew Malone Melo (Vanderbilt University (US))
    26/05/2026, 14:39
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    MLTF (Machine Learning Training Facility) is hardware and software deployed at Vanderbilt University with a focus on portability, reproducibility and ease of exploiting hardware features like RDMA. The software integrates MLflow as an end-to-end ML solution for its capabilities as a user-friendly job submission interface; as a custom-built tracking server for model and run details, arbitrary...

    Go to contribution page
  161. Sanjeeda Bharati Das (Torino University and INFN)
    26/05/2026, 14:39
    Track 3 - Offline data processing
    Oral Presentation

    MANTRA (Measuring Anti-Neutron: Tagging and Reconstruction Algorithm for frontier experiments) is a PRIN 2022 Italian project which proposes a new method to measure the energy of anti-neutrons produced in high-energy physics experiments. Anti-neutrons cannot be reconstructed by the tracking systems; however, they can produce so-called annihilation stars in electromagnetic calorimeters,...

    Go to contribution page
  162. Anwar Ibrahim
    26/05/2026, 14:39
    Track 6 - Software environment and maintainability
    Oral Presentation

    We present RLABC, an open and extensible software framework for applying reinforcement learning (RL) to particle accelerator beamline optimization. The framework is designed to bridge modern RL libraries with established accelerator simulation tools, enabling reproducible and maintainable development of learning-based control solutions. RLABC integrates Python-based RL workflows with the...
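    The bridge such a framework provides can be sketched as a simulator wrapped in a Gym-style reset/step interface, so that any RL library can drive it (the 1D "beam steering" environment below is invented for illustration and is not part of RLABC's API):

```python
# Hedged sketch: a toy accelerator environment exposing the minimal
# reset()/step() protocol RL libraries expect. All dynamics invented.
import random

class ToyBeamlineEnv:
    """Steer a beam offset toward zero with small corrector kicks."""
    def reset(self):
        self.offset = random.uniform(-1.0, 1.0)
        return self.offset

    def step(self, action):
        # Action: corrector kick, clipped to a +-0.1 hardware limit.
        self.offset += max(-0.1, min(0.1, action))
        reward = -abs(self.offset)          # reward favors centering
        done = abs(self.offset) < 0.01
        return self.offset, reward, done

random.seed(2)
env = ToyBeamlineEnv()
obs = env.reset()
for _ in range(50):
    # Naive proportional "policy" standing in for a trained RL agent.
    obs, reward, done = env.step(-obs)
    if done:
        break
```

    Keeping the simulator behind this narrow interface is what makes the control solution reproducible and maintainable: the RL library, the simulator, and the reward definition can each be swapped independently.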

    Go to contribution page
  163. Jack Charlie Munday, Ricardo Rocha (CERN)
    26/05/2026, 14:39
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The CERN Container Registry is built on Harbor, a graduated CNCF project capable of managing a wide range of OCI artifacts. It serves use cases at CERN as well as workloads and services across WLCG, and acts as a central registry for Harbor instances running in other WLCG sites. Today, it hosts container images, Helm charts, machine-learning models, SBOMs, and numerous other artifact types....

    Go to contribution page
  164. Diogo Castro (CERN)
    26/05/2026, 14:57
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    CERNBox is a leading participant in the emerging European sync-and-share federation effort, promoting interoperable, standards-based collaboration across scientific communities. As an active contributor to European E-Infrastructures, it plays a key role in shaping open, federated data services. This contribution will present recent work on integrating CERNBox into the current sync-and-share...

    Go to contribution page
  165. Deepak Aggrawal (University of Cambridge), Shaun de Witt
    26/05/2026, 14:57
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    HPC services are increasingly constrained by fixed on-premises capacity, long procurement cycles, and data centre infrastructure limitations. At the University of Cambridge, these pressures are amplified by rapidly evolving AI workloads, where researchers benefit from access to diverse compute resources, both CPU and GPU, often on short timescales. This work presents our approach to extending...

    Go to contribution page
  166. Eric Anton Moreno (Massachusetts Institute of Technology (US))
    26/05/2026, 14:57
    Track 2 - Online and real-time computing
    Oral Presentation

    Modern foundation models (FMs) have pushed the frontiers of language, vision, and multi-modal tasks by training ever-larger neural networks (NNs) on unprecedented volumes of data. The use of FMs has yet to be established in collider physics, which lacks both a comparably sized, general-purpose dataset on which to pre-train universal event representations and a clear demonstrable need....

    Go to contribution page
  167. Juraj Smiesko (CERN)
    26/05/2026, 14:57
    Track 9 - Analysis software and workflows
    Oral Presentation

    The Future Circular Collider (FCC) project requires an analysis infrastructure capable of handling large simulated datasets while providing the flexibility needed for rapid detector optimization. We present FCCAnalyses, the flagship analysis framework for the FCC collaboration. Integrated within the Key4hep software stack, FCCAnalyses leverages ROOT’s RDataFrame to provide a declarative,...
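    The declarative style the abstract attributes to RDataFrame can be illustrated with a tiny pure-Python analogue: transformations are recorded lazily and the event loop runs only once, when a result is requested (the class and method names below mimic the style but are not the real FCCAnalyses or ROOT API):

```python
# Hedged, pure-Python illustration of a declarative analysis chain:
# Define/Filter record operations lazily; Count triggers one event loop.
class MiniFrame:
    def __init__(self, rows, ops=()):
        self._rows, self._ops = rows, list(ops)

    def Define(self, name, fn):
        return MiniFrame(self._rows, self._ops + [("define", name, fn)])

    def Filter(self, pred):
        return MiniFrame(self._rows, self._ops + [("filter", None, pred)])

    def Count(self):
        # Single pass over the data, applying the recorded operations.
        n = 0
        for row in self._rows:
            row, keep = dict(row), True
            for kind, name, fn in self._ops:
                if kind == "define":
                    row[name] = fn(row)
                elif not fn(row):
                    keep = False
                    break
            n += keep
        return n

events = [{"pt1": 30.0, "pt2": 25.0}, {"pt1": 10.0, "pt2": 8.0},
          {"pt1": 45.0, "pt2": 40.0}]
df = (MiniFrame(events)
      .Define("m", lambda r: r["pt1"] + r["pt2"])   # toy observable
      .Filter(lambda r: r["m"] > 50.0))
n_selected = df.Count()
```

    Because each step returns a new frame, the same base dataset can fan out into many selections that share one pass over the data in the real framework.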

    Go to contribution page
  168. Shuang Wang (IHEP)
    26/05/2026, 14:57
    Track 3 - Offline data processing
    Oral Presentation

    Astronomical satellites serve as critical infrastructure in the field of astrophysics, and data processing is one of the most essential processes for conducting scientific research on cosmic evolution, celestial activities, and dark matter. Recent advancements in satellite sensor resolution and sensitivity have led to petabyte (PB)-scale data volumes, characterized by unprecedented scale and...

    Go to contribution page
  169. Krishna Bhatia
    26/05/2026, 14:57
    Track 3 - Offline data processing
    Oral Presentation

    Reliable short- to medium-horizon forecasts of cosmic-ray/neutron monitor count rates support detector operations, data-quality monitoring, and space-weather analyses, but modern deep sequence models can be costly to train and tune across stations and solar conditions. We present a practical Quantum Reservoir Computing (QRC) pipeline for sustainable time-series forecasting on neutron monitor...
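    The reservoir-computing idea behind this pipeline can be shown with a small classical analogue (an echo-state-style network; the quantum reservoir itself is not reproduced here, and the sizes, seed, and sine-wave "count rate" series are all invented): a fixed random recurrent system transforms the input series, and only a cheap linear readout is trained.

```python
# Classical stand-in for a quantum reservoir: fixed random recurrent
# dynamics, trained linear readout for one-step-ahead forecasting.
import math, random

random.seed(0)
N = 30                                    # reservoir size (arbitrary)
W_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
W = [[random.uniform(-0.1, 0.1) for _ in range(N)] for _ in range(N)]

def reservoir_feature(series):
    """Drive the reservoir and emit one scalar feature per step."""
    x, states = [0.0] * N, []
    for u in series:
        x = [math.tanh(W_in[i] * u + sum(W[i][j] * x[j] for j in range(N)))
             for i in range(N)]
        states.append(sum(x) / N)         # simplistic scalar summary
    return states

series = [math.sin(0.3 * t) for t in range(200)]  # toy "count rate"
feats, targets = reservoir_feature(series[:-1]), series[1:]

# Closed-form 1D linear readout (only this part is "trained").
mf, mt = sum(feats) / len(feats), sum(targets) / len(targets)
w = sum((f - mf) * (t - mt) for f, t in zip(feats, targets)) / \
    sum((f - mf) ** 2 for f in feats)
b = mt - w * mf
pred = [w * f + b for f in feats]
mse = sum((p - t) ** 2 for p, t in zip(pred, targets)) / len(targets)
```

    The sustainability argument is visible in the structure: the expensive recurrent part is never trained, only probed, so per-station tuning reduces to fitting a linear readout.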

    Go to contribution page
  170. Jay Ajitbhai Sandesara (University of Wisconsin Madison (US))
    26/05/2026, 14:57
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    The development of a Neural Simulation-Based Inference (NSBI) algorithm requires training a large ensemble of neural networks, on the order of one thousand, which makes a serial single-node approach impractical. To address this, we are developing a scalable high-throughput training workflow built around Snakemake and deployed on an HTCondor-based GPU facility. Each neural network training...
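    The fan-out pattern behind such a workflow can be sketched in miniature, with a local thread pool standing in for the Snakemake/HTCondor layer (the toy "training" objective, member count, and step count are all invented): each ensemble member trains independently from its own seed and reports back its final loss.

```python
# Hedged sketch: independent ensemble members dispatched in parallel,
# a stand-in for one-job-per-network submission on a batch system.
from concurrent.futures import ThreadPoolExecutor
import random

def train_member(seed, steps=200):
    """Toy 'training': seeded gradient descent on f(w) = (w - 3)^2."""
    rng = random.Random(seed)
    w = rng.uniform(-5, 5)                # per-member initialisation
    for _ in range(steps):
        w -= 0.1 * 2 * (w - 3)            # gradient step
    return seed, (w - 3) ** 2             # (member id, final loss)

with ThreadPoolExecutor(max_workers=8) as pool:
    results = dict(pool.map(train_member, range(32)))  # 32-member toy ensemble
```

    Because members share no state, the same pattern scales from a thread pool to a thousand batch jobs; the workflow engine's job is bookkeeping and retries, not coordination between members.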

    Go to contribution page
  171. Anna Zaborowska (CERN)
    26/05/2026, 14:57
    Track 5 - Event generation and simulation
    Oral Presentation

    Fast calorimeter shower simulation is an active field of study, with numerous models having been explored. Recently, several models have explored a point cloud representation of energy deposits, as opposed to the more common image-like voxelisation of a shower. However, direct use of the output from the detailed Geant4 simulation as an input to these machine learning models is computationally...

    Go to contribution page
  172. CMS Collaboration
    26/05/2026, 14:57
    Track 6 - Software environment and maintainability
    Oral Presentation

    CMS applications are generally complex: they can comprise many thousands of components that the CMSSW framework schedules across tens of threads. Understanding the timing characteristics of such complex applications is difficult, especially when correlations between the components must be understood. To aid in understanding the runtime behavior of the applications, CMS has...

    Go to contribution page
  173. Andrea Piccinelli (University of Notre Dame (US))
    26/05/2026, 14:57
    Track 4 - Distributed computing
    Oral Presentation

    The Compact Muon Solenoid (CMS) experiment is reassessing its Workload Management (WM) stack to meet HL-LHC scale, heterogeneity, and a 20–25-year sustainability horizon. Over the past year, we surveyed multiple pathways (including reuse of external WM systems, hybrid approaches, and a ground-up redesign) and developed a blueprint that emphasizes architectural principles of the HL-LHC WM...

    Go to contribution page
  174. Jolly Chen (CERN & University of Twente (NL))
    26/05/2026, 16:15
    Track 6 - Software environment and maintainability
    Oral Presentation

    C++ compile-time metaprogramming techniques, commonly known as “templates”, are extensively used in HEP code to write reusable code and perform optimisations at compile time. In 2026, the new C++26 standard will be released, including major new compile-time programming features such as reflection, template for, constexpr if, constexpr allocations, consteval, etc. Reflection could make...

    Go to contribution page
  175. Tadej Novak (Jozef Stefan Institute (SI))
    26/05/2026, 16:15
    Track 5 - Event generation and simulation
    Oral Presentation

    Accurate modeling of the underlying event (UE) in heavy-ion collisions poses a significant challenge, particularly for analyses involving hard probes. No existing Monte Carlo (MC) simulation can reproduce the complex underlying physics. To address this, the ATLAS Collaboration developed an innovative technique that overlays simulated signal events onto real minimum-bias data recorded by the...

    Go to contribution page
  176. Borja Garrido Bear (CERN), Panos Paparrigopoulos (CERN)
    26/05/2026, 16:15
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    In preparation for Run-4 and the HL-LHC era, WLCG has initiated the redesign of its XRootD monitoring to provide a coherent and scalable view of data-access activity across distributed sites and experiments. Developed in close collaboration with CMS, the new architecture aims to serve both WLCG-level needs for global observability (such as assessing traffic patterns and validating large-scale...

    Go to contribution page
  177. Zhengde Zhang (Institute of High Energy Physics, Chinese Academy of Sciences)
    26/05/2026, 16:15
    Track 4 - Distributed computing
    Oral Presentation

    We present Dr.Sai, a large language model (LLM)-powered multi-agent system designed to autonomously execute physics analysis at the BESIII experiment. It interprets a physicist’s natural language request, decomposes it into tasks (e.g., data skimming, fitting), calls the appropriate scientific tools, and executes the workflow end-to-end. A demonstration will show Dr.Sai completing multiple simple...

    Go to contribution page
  178. Jose Flix Molina (CIEMAT - Centro de Investigaciones Energéticas Medioambientales y Tec. (ES))
    26/05/2026, 16:15
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The rapid growth in data centre energy demand poses significant challenges for the sustainability of large-scale scientific computing. In alignment with CERN and WLCG strategies on environmentally responsible computing, this work investigates methods to reduce energy consumption, electricity costs, and CO₂ emissions at the PIC WLCG Tier-1 site through energy-aware compute resource...

    Go to contribution page
  179. Amy Byrnes
    26/05/2026, 16:15
    Track 3 - Offline data processing
    Oral Presentation

    The High-Luminosity Large Hadron Collider (HL-LHC) is expected to produce data at the exabyte scale, motivating the exploration of new methods for reducing data volumes. Error-bounded lossy compression has been adopted in many scientific domains as an effective strategy for reducing storage and I/O costs without compromising the quality of downstream analyses.

    However, selecting an...

    Go to contribution page
  180. Prabhat Solanki (Universita & INFN Pisa (IT))
    26/05/2026, 16:15
    Track 3 - Offline data processing
    Oral Presentation

    The upgrade of the CMS apparatus for the HL-LHC will provide unprecedented timing measurement capabilities, in particular for charged particles through the MIP Timing Detector (MTD). One of the main goals of this upgrade is to compensate for the deterioration of primary vertex reconstruction induced by the increased pileup of proton-proton collisions by separating clusters of tracks not only in...

    Go to contribution page
  181. Silia Taider (CERN)
    26/05/2026, 16:15
    Track 9 - Analysis software and workflows
    Oral Presentation

    Machine learning (ML) techniques are increasingly adopted in the High Energy Physics (HEP) field from large-scale production workflows to end-user data analysis. As such, we see datasets growing in size and complexity, making data loading a significant performance bottleneck, particularly when training workloads access large, distributed datasets with sparse ML reading patterns.

    In HEP,...

    Go to contribution page
  182. Giovanni Guerrieri (CERN)
    26/05/2026, 16:15
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    CERN IT started providing the capability of an Analysis Facility (AF) in late 2023, initially as a pilot. The AF supports columnar workloads through RDataFrame and Coffea within SWAN, CERN’s web-based analysis environment. Dask provides the computing backend, managing concurrent resources from the CERN batch farm.
    Since then, the AF has evolved beyond the pilot phase. The latest developments...

    Go to contribution page
  183. Ernst Hellbär (CERN)
    26/05/2026, 16:15
    Track 2 - Online and real-time computing
    Oral Presentation

    The ALICE experiment at CERN continuously reads out and records data at interaction rates of up to 50 kHz of Pb-Pb collisions. Online processing and reconstruction play a vital role for handling the enormous amounts of data, compressing about 3.5 TB/s of detector raw data down to 160 GB/s of compressed input data for offline reconstruction. The online processing is performed on dedicated Event...

    Go to contribution page
  184. Pietro Lugato (Massachusetts Inst. of Technology (US))
    26/05/2026, 16:33
    Track 4 - Distributed computing
    Oral Presentation

    A2rchi (AI Augmented Research Chat Intelligence) is an open-source, end-to-end framework for building AI agents to automate research and operational workflows. Various groups have already applied the system to their use cases; the most advanced is the Computing Operations (CompOps) team at the Compact Muon Solenoid (CMS) experiment at CERN. CompOps has a private, constantly evolving, and...

    Go to contribution page
  185. Cheng Jiang (The University of Edinburgh (GB))
    26/05/2026, 16:33
    Track 5 - Event generation and simulation
    Oral Presentation

    High-precision calorimeter simulation at current and future colliders puts growing demands on computing resources, motivating ML-based alternatives to traditional Monte Carlo tools such as Geant4. In practice, generative models based on flow matching and diffusion have become de facto standards for high-dimensional fast calorimeter simulation, thanks to their excellent fidelity and strong...

    Go to contribution page
  186. Rahul Chauhan (University of Wisconsin Madison (US))
    26/05/2026, 16:33
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    The XRootD redirector plays a key role in the CMS experiment's global data access infrastructure, determining where clients are sent to retrieve data across a heterogeneous, worldwide set of storage endpoints. The redirector has traditionally emphasised simplicity and performance; its decisions tend to be opaque and based on limited inputs. This can lead to erroneous redirections, such as sending...

    Go to contribution page
  187. Henryk Giemza (Warsaw University of Technology)
    26/05/2026, 16:33
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    A comprehensive assessment of the environmental impact of the LHCb distributed computing requires a detailed understanding of its carbon footprint sources. This involves moving beyond a simple comparison of regional carbon intensity, as the hardware executing the jobs exhibits significant variation in both energy efficiency and computational performance in HEP tasks.

    In this work, we...

    Go to contribution page
  188. Florine Willemijn de Geus (CERN/University of Twente (NL))
    26/05/2026, 16:33
    Track 3 - Offline data processing
    Oral Presentation

    With the data deluge expected from the High-Luminosity LHC and limited storage resources, the need to reduce the on-disk file size of High-Energy Physics (HEP) data becomes even more pressing. Lossless compression algorithms and encodings are already used extensively across all experiments' data tiers, often leading to significant reductions of the total on-disk data volume for...

    Go to contribution page
  189. Oksana Shadura (University of Nebraska Lincoln (US))
    26/05/2026, 16:33
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    As part of the IRIS-HEP software institute effort and U.S. CMS activities, the Coffea-Casa analysis facility team has executed an Integration Challenge. One goal of this challenge was to demonstrate a full CMS analysis running on the facility and to integrate the IRIS-HEP software stack into a production environment. We describe the solutions deployed at the facility to support and execute the...

    Go to contribution page
  190. Ching-Hua Li (Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France)
    26/05/2026, 16:33
    Track 3 - Offline data processing
    Oral Presentation

    To achieve higher physics precision, the LHCb experiment is operating at an increased instantaneous luminosity in Run 3, leading to an unprecedented challenge in total data volume. A single proton-proton collision generates hundreds of tracks, yet the target signals involve only a few; this imbalance severely inflates the event data size. To efficiently reduce the event size while retaining...

    Go to contribution page
  191. Borja Sevilla Sanjuan (La Salle, Ramon Llull University (ES))
    26/05/2026, 16:33
    Track 9 - Analysis software and workflows
    Oral Presentation

    Flavour tagging (FT) is essential in heavy-flavour physics for determining the production flavour of neutral B mesons in time-dependent CP-violation and mixing parameter measurements, where it significantly impacts the sensitivity. For Run 3 of the LHC, the LHCb experiment has redesigned its FT strategy, exploiting recent advances in algorithm methodology and machine learning, including modern...

    Go to contribution page
  192. Matthias Kretz (GSI Helmholtzzentrum für Schwerionenforschung)
    26/05/2026, 16:33
    Track 6 - Software environment and maintainability
    Oral Presentation

    For HEP software, longevity is a core requirement: code often outlives several hardware generations. Using a standardized solution for data‑parallelism is therefore the most direct path to sustainable, reusable optimizations. As the lead author of std::simd in the C++ standard and the libstdc++ implementation, I will show how C++26’s std::simd provides a concrete, standards-based illustration...

    Go to contribution page
  193. Serguei Kolos (University of California Irvine (US))
    26/05/2026, 16:33
    Track 2 - Online and real-time computing
    Oral Presentation

    In LHC Run 3, several hundred thousand histograms are continuously updated during data taking and used by automated algorithms for data quality assessment. A subset of these histograms is also presented to experts. The current online histogram display, based on a standalone C++ application using ROOT and Qt, provides reliable functionality but offers limited integration with modern web...

    Go to contribution page
  194. Albert Gyorgy Borbely (University of Glasgow (GB))
    26/05/2026, 16:51
    Track 4 - Distributed computing
    Oral Presentation

    Recent developments demonstrate that HEP software can run effectively on GPUs, while advances in ML models have shown predictable scaling laws for compute, data, and model size, consistent with trends across the wider AI community. As a result, there is growing demand within HEP for inference using larger models that have already delivered significant physics gains, such as b-tagging...

    Go to contribution page
  195. Felice Pantaleo (CERN)
    26/05/2026, 16:51
    Track 3 - Offline data processing
    Oral Presentation

    The High-Luminosity LHC will vastly increase both the volume and complexity of data to be processed within the CMS software framework (CMSSW), pushing computational throughput to its limits. Efficient use of accelerator hardware, especially GPUs, will be central to sustaining reconstruction and analysis performance under these conditions. Among the most impactful design choices for...

    Go to contribution page
  196. James Letts (UCSD)
    26/05/2026, 16:51
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    Contemporary research relies heavily on computational resources and storage, with data sharing serving as a critical element. Data access remains a central challenge. The Open Science Data Federation (OSDF) project aims to establish a global scientific data distribution network by leveraging the Pelican Platform and the National Research Platform (NRP). OSDF is based on the XRootD and Pelican...

    Go to contribution page
  197. Thomas Britton
    26/05/2026, 16:51
    Track 2 - Online and real-time computing
    Oral Presentation

    Maintaining high data quality in modern Nuclear and High Energy Physics experiments increasingly requires scalable, automated solutions as data rates and detector complexity continue to grow. Traditionally, humans monitored data quality with varying skill sets and expertise, while any automation was typically overly bespoke, covering only specific detector systems or processes. These...

    Go to contribution page
  198. Ting-Hsiang Hsu (National Taiwan University (TW))
    26/05/2026, 16:51
    Track 9 - Analysis software and workflows
    Oral Presentation

    Precision studies of $\tau^+\tau^-$ production in $e^+e^-$ collisions at LEP provide a clean environment for investigating spin correlations and quantum information observables. In the DELPHI experiment, the process $e^+e^- \to Z \to \tau^+\tau^-$ is well measured, but reconstruction of the $\tau^+\tau^-$ rest frame is challenged by the presence of multiple neutrinos in the final state. This...

    Go to contribution page
  199. Mateusz Jakub Fila (CERN)
    26/05/2026, 16:51
    Track 6 - Software environment and maintainability
    Oral Presentation

    Julia has gained attention in high-energy physics (HEP) as a programming language that combines high-level expressiveness with competitive performance. This work explores its potential as a replacement for C++ in HEP applications, in particular in the context of trigger and reconstruction. The studies reported here include ahead-of-time compilation of jet reconstruction packages, a scheduling...

    Go to contribution page
  200. Norbert Neumeister (Purdue University (US))
    26/05/2026, 16:51
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    The Purdue Analysis Facility (Purdue AF) is an interactive, Kubernetes-based computational platform that provides CMS researchers with a comprehensive set of tools and services for end-to-end development and execution of physics analyses. It serves both as a primary development environment for ongoing CMS Run 3 analyses and as a sandbox for testing novel software and data infrastructure...

    Go to contribution page
  201. Ze Chen
    26/05/2026, 16:51
    Track 5 - Event generation and simulation
    Oral Presentation

    The Jiangmen Underground Neutrino Observatory (JUNO) is a large-scale neutrino experiment using a 20-kt liquid scintillator Central Detector surrounded by a 35-kt water Cherenkov veto detector and an almost 1000 m² plastic scintillator Top Tracker. Following the completion of detector commissioning, JUNO began physics data taking on August 26, 2025.

    The electronics simulation (ElecSim) is...

    Go to contribution page
  202. David Britton (University of Glasgow (GB))
    26/05/2026, 16:51
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The WLCG Sustainability Forum was set up in the summer of 2025 to build on the momentum generated by the WLCG Sustainability Workshop in December 2024 and the WLCG workshop plenary session on sustainability in April 2025. In this presentation we review the topics covered in the roughly monthly meetings and highlight community progress towards a better understanding of how to deliver LHC...

    Go to contribution page
  203. Lukasz Graczykowski (Warsaw University of Technology (PL))
    26/05/2026, 16:51
    Track 3 - Offline data processing
    Oral Presentation

    Identifying products of ultrarelativistic collisions delivered by the LHC and RHIC colliders is one of the crucial objectives of experiments such as ALICE and STAR, which are specifically designed for this task. They allow for a precise Particle Identification (PID) over a broad momentum range.

    Traditionally, PID methods rely on hand-crafted selections, which compare the recorded signal of...

    Go to contribution page
  204. Brian Paul Bockelman (University of Wisconsin Madison (US))
    26/05/2026, 17:09
    Track 4 - Distributed computing
    Oral Presentation

    For distributed High Throughput Computing (dHTC), the original -- and potentially still most popular -- interface for workflow management is the command line interface (CLI). Decades of researchers have been trained on the CLI and knowledgeable users can effectively integrate it into larger scripts with little friction. As the ecosystem has grown and matured, new interfaces have appeared...

    Go to contribution page
  205. Silia Taider (CERN)
    26/05/2026, 17:09
    Track 6 - Software environment and maintainability
    Oral Presentation

    High Energy Physics (HEP) software environments make extensive use of blended C++ and Python workflows, combining performance and simple interfaces. In this context, a C++ compiler stack comprising technologies such as Clang, Cling, and cppyy provides generic dynamic Python-C++ bindings and powers many of the Python interfaces used in the field, including those of the ROOT software...

    Go to contribution page
  206. Igor Soloviev (University of California Irvine (US))
    26/05/2026, 17:09
    Track 2 - Online and real-time computing
    Oral Presentation

    Since the beginning of LHC Run 2, the Trigger and Data Acquisition system of the ATLAS experiment at the Large Hadron Collider (LHC) at CERN has provided an operational monitoring data archiving service used by thousands of online clients. During data-taking periods, this system publishes various operational monitoring data to continuously monitor the status of hardware and software components...

    Go to contribution page
  207. Florian Uhlig (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
    26/05/2026, 17:09
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    DataHarbor is a modern web application designed to provide researchers with secure, intuitive access to large-scale data stored on distributed storage systems through the XRootD protocol. The system provides a web-based file browser that enables seamless directory navigation, metadata inspection, and on-demand file downloads. Files are streamed directly from XRootD storage to the user's...

    Go to contribution page
  208. Thomas Byrne
    26/05/2026, 17:09
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    Large-scale scientific computing relies on cost-effective, high-capacity storage systems to support data-intensive workloads, such as those from the Worldwide LHC Computing Grid and future data-intensive sciences like the Square Kilometre Array Observatory. At STFC, we evaluated three Ceph-based storage configurations – 8 TB HDD, 22 TB HDD, and 15 TB TLC NVMe flash. Using low-level benchmarks...

    Go to contribution page
  209. Jingde Chen (Institute of High Energy Physics)
    26/05/2026, 17:09
    Track 9 - Analysis software and workflows
    Oral Presentation

    While Foundation Models have revolutionized natural language processing and computer vision, their potential in high-energy physics remains underutilized. In this work, we introduce Bes3T, a Transformer-based Foundation Model tailored for BESIII data analysis, and present a publicly released benchmark Monte Carlo dataset comprising 100 distinct $\mathrm{J}/\psi$ decay channels. Bes3T employs a...

    Go to contribution page
  210. Valerii Kholoimov (EPFL - Ecole Polytechnique Federale Lausanne (CH))
    26/05/2026, 17:09
    Track 3 - Offline data processing
    Oral Presentation

    Long-lived particles (LLPs) are present in many Standard Model extensions and could provide solutions to long-standing problems in modern physics. In this work, machine-learning based techniques are developed to probe for the presence of such particles, specifically Heavy Neutral Leptons (HNLs) and Axion-Like Particles (ALPs), decaying in the LHCb muon detector. Their decays will produce...

    Go to contribution page
  211. Marcin Nowak (Brookhaven National Laboratory (US))
    26/05/2026, 17:09
    Track 3 - Offline data processing
    Oral Presentation

    The ATLAS experiment has surpassed 1 exabyte of stored data, much of it managed through the Athena POOL Replacement (APR) persistency framework. Derived from the original LCG POOL project, APR has long provided a technology-independent abstraction layer that enabled seamless support for multiple backends, including ROOT TTree, TKey, and more recently RNTuple. While APR has proven remarkably...

    Go to contribution page
  212. CMS Collaboration
    26/05/2026, 17:09
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    The upcoming High-Luminosity phase of the LHC will significantly increase the computational demands of CMS detector performance studies, particularly for workflows that process multi-year datasets and explore high pile-up conditions. In this context, modern data formats and scalable analysis paradigms are essential. This contribution presents an upgrade of a representative CMS detector...

    Go to contribution page
  213. Michał Mazurek (National Centre for Nuclear Research (PL))
    26/05/2026, 17:09
    Track 5 - Event generation and simulation
    Oral Presentation

    Experiments in high energy physics rely heavily on simulations to interpret data, optimise detector design, and test theoretical models. Traditionally, simulations involve Monte Carlo event generators and detailed particle interactions with detectors. For the LHCb experiment, 90% of computing resources are used for simulations, with the calorimeter simulation being the most computationally...

    Go to contribution page
  214. Giacomo Parolini (CERN)
    26/05/2026, 17:27
    Track 6 - Software environment and maintainability
    Oral Presentation

    The ROOT file is the most widely used format for storing data in HEP. ROOT's TFile, alongside its ancillary classes, is the main interface to ROOT files, offering a large number of features, both basic and advanced. TFile was designed in the 1990s and has evolved organically over the past three decades; it remains one of the pillars of any interaction with ROOT. However, 30 years of...

    Go to contribution page
  215. Sergio Andreozzi
    26/05/2026, 17:27
    Track 4 - Distributed computing
    Oral Presentation

    The EGI Federation, which emerged from WLCG in 2010, has been a cornerstone of European and global digital science for over 15 years, providing a federated e-infrastructure for 150,000+ researchers across all scientific disciplines. The recently approved “EGI Federation Strategy 2026–2030” sets out an ambitious plan for the next five years to ensure that EGI remains an accelerator for science....

    Go to contribution page
  216. Andrea Piccinelli (University of Notre Dame (US))
    26/05/2026, 17:27
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    The Notre Dame CMS XRootD storage element, originally designed to handle traditional CMSSW workloads, experienced heavy I/O-wait saturation when serving new data analysis workloads based on columnar analysis frameworks. These workloads, using tools such as Uproot (to load data into structures such as Awkward Arrays), have radically changed the I/O profile. This presentation starts by...

    Go to contribution page
  217. Filippo Cattafesta (Scuola Normale Superiore & INFN Pisa (IT))
    26/05/2026, 17:27
    Track 5 - Event generation and simulation
    Oral Presentation

    Detailed event simulation at the LHC consumes a large fraction of the computing budget. CMS has developed an end-to-end ML-based simulation framework, called FlashSim, that can speed up the production of analysis samples by several orders of magnitude with a limited loss of accuracy. We show how this approach achieves a high degree of accuracy, not just on basic kinematics but on the complex...

    Go to contribution page
  218. Lars Sowa (KIT - Karlsruhe Institute of Technology (DE))
    26/05/2026, 17:27
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    Modern computing sites need to operate on state-of-the-art hardware to achieve efficiency in both economic and environmental terms. As a consequence, sites accumulate substantial amounts of legacy equipment that is no longer competitive for continuous operation. However, this equipment still provides meaningful compute capacity and becomes attractive again when electricity prices are low or...

    Go to contribution page
  219. Tomas Raila (Vilnius University (LT))
    26/05/2026, 17:27
    Track 3 - Offline data processing
    Oral Presentation

    The High-Luminosity upgrade of the LHC (HL-LHC) will present an unprecedented computational challenge for the CMS experiment, with the average number of simultaneous proton-proton interactions (pileup) expected to reach 200 per bunch crossing. Accurately modeling this background environment requires the production of massive, high-fidelity simulated event datasets. Currently, CMS employs a...

    Go to contribution page
  220. Robert Laszlo Gulyas (CERN)
    26/05/2026, 17:27
    Track 2 - Online and real-time computing
    Oral Presentation

    The LHCb Online Mover is a critical component of the LHCb online computing stack, responsible for streaming data accepted by the High Level Trigger 2 (HLT2) from online storage to long-term offline infrastructure. During data-taking, data is produced at sustained rates of up to 20 GB/s, with bursts reaching 50 GB/s. For efficient long-term storage, the data must be compressed and packed into...

    Go to contribution page
  221. Siyang Wu (Shandong University)
    26/05/2026, 17:27
    Track 9 - Analysis software and workflows
    Oral Presentation

    Quantum Machine Learning (QML) is an advanced data analysis technique that can detect data structures and build models for prediction, classification, or simulation with less human intervention. However, for data analysis in high-energy physics (HEP) experiments, the practical viability of QML remains a topic of debate, requiring more examples of real data analysis with...

    Go to contribution page
  222. Benjamin Galewsky (Univ. Illinois at Urbana Champaign (US))
    26/05/2026, 17:27
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    ServiceX is an experiment-agnostic service that extracts columnar data from HEP datasets at scale. Its Python SDK enables researchers to efficiently access complex experimental data by implementing best practices for large-scale dataset processing. Users submit requests using high-level query languages, which generate code that executes within experiment-approved container images, with...

    Go to contribution page
  223. Aurora Perego (Universita & INFN, Milano-Bicocca (IT))
    26/05/2026, 17:45
    Track 3 - Offline data processing
    Oral Presentation

    The extreme pileup conditions expected at the High-Luminosity LHC (HL-LHC) require new technologies to cope with the higher occupancy. One of the strategies adopted to address this challenge is the usage of precise timing information in event reconstruction. The CMS experiment will introduce two new subdetectors with timing capabilities: the MIP Timing Detector (MTD), covering both barrel and...

    Go to contribution page
  224. Cilicia Uzziel Perez (La Salle, Ramon Llull University (ES))
    26/05/2026, 17:45
    Track 9 - Analysis software and workflows
    Oral Presentation

    We present a prototype Retrieval-Augmented Generation (RAG) and agentic LLM tool designed to accelerate and support high-energy physics analyses. As a case study, we applied the system to the published 2016 Λb → Λγ Run 2 analysis. Reproducing legacy workflows is often slow and error-prone due to fragmented code, dispersed documentation, personnel turnover, and software evolution over multiple...

    Go to contribution page
  225. Riccardo Farinelli (INFN Bologna (IT))
    26/05/2026, 17:45
    Track 5 - Event generation and simulation
    Oral Presentation

    PARSIFAL (PARametrized SImulation) is a software tool designed to reproduce the complete response of gaseous detectors. It models the physical processes involved through simple parametrizations, thus achieving fast processing times. Existing software, such as GARFIELD++, while robust and reliable, is highly CPU time-consuming. The development of PARSIFAL is motivated by the need to...

    Go to contribution page
  226. James Connaughton (University of Warwick (GB))
    26/05/2026, 17:45
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    Modern HEP analysis workflows are becoming increasingly complex and challenging. For LHCb, with its expanded Run 3 data volumes and growing analysis user base, reducing these barriers has become essential for efficient physics output. More recently, LHCb has moved to a declarative system that allows analysts to filter datasets on WLCG resources for further analysis, known as "Analysis...

    Go to contribution page
  227. Torri Jeske
    26/05/2026, 17:45
    Track 2 - Online and real-time computing
    Oral Presentation

    Jefferson Lab is developing autonomous control systems for polarized cryogenic targets and linearly polarized photon beams, enabling stable, high-performance operation over extended experiment run periods. Historically, maintaining optimal polarization of these critical systems required manual tuning by expert operators. This process depends on operator experience and is prone to human error, and...

    Go to contribution page
  228. Arantza De Oyanguren Campos (Univ. of Valencia and CSIC (ES)), Arantza Oyanguren (IFIC - Valencia)
    26/05/2026, 17:45
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    Measurements of power consumption and sustainability are an imperative matter in view of the upcoming high-luminosity era of the LHC collider, which will greatly increase the output data rate for physics analysis. In the context of the High-Low project at IFIC in Valencia, involving the ATLAS and LHCb experiments, several studies have been conducted to understand how to optimize the...

    Go to contribution page
  229. Mr Dhiraj Kalita (KEK (High Energy Accelerator Research Organization))
    26/05/2026, 17:45
    Track 4 - Distributed computing
    Oral Presentation

    The Belle II experiment at KEK, Japan, operates with data volumes reaching over 30 petabytes, with datasets distributed and processed worldwide using DIRAC and Rucio. With this globally distributed computing infrastructure, and an order of magnitude more data expected, we face operational challenges for both computing experts and end-users. End-users frequently struggle with...

    Go to contribution page
  230. Chin Guok (ESnet)
    26/05/2026, 17:45
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    The rapid growth of data volumes in high-energy physics (HEP) collaborations, such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC), has necessitated the adoption of regional in-network caching strategies to mitigate data access latency. However, these caches often exhibit varying efficiencies across locations due to differing access patterns and storage...

    Go to contribution page
  231. Radoslaw Karabowicz (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
    26/05/2026, 17:45
    Track 6 - Software environment and maintainability
    Oral Presentation

    FairRoot integrates RNTuple, allowing users a seamless transition to ROOT’s novel I/O backend and delivering significant performance gains and smaller file sizes.

    This contribution details the incorporation of RNTuple into the FairRoot framework. RNTuple’s novel columnar data storage is applied in turn to experiment simulation and data reconstruction, allowing for a comparison between...

    Go to contribution page
  232. David Rohr (CERN)
    26/05/2026, 17:45
    Track 3 - Offline data processing
    Oral Presentation

    ALICE is the dedicated heavy-ion experiment at the LHC at CERN, recording lead-lead collisions at interaction rates of up to 50 kHz.
    ALICE was the first LHC experiment to leverage GPUs for online data processing in LHC Runs 1 and 2, and its Run 3 online data processing scheme today is fully based on GPUs, with more than 90% of the compute load offloaded to the accelerators.
    In order to...

    Go to contribution page
  233. Gordon Watts (University of Washington (US))
    27/05/2026, 09:00
    Track 9 - Analysis software and workflows
    Plenary Presentation

    Large Language Models (LLMs) are increasingly used in particle physics as coding agents, but their role is expanding from software assistance to building the scientific analysis workflow itself. This talk examines how LLMs can function as connective elements across the stages of a modern high-energy physics analysis, from dataset discovery and metadata retrieval to analysis specification,...

    Go to contribution page
  234. Ke LI
    27/05/2026, 09:30
    Track 4 - Distributed computing
    Plenary Presentation

    The integration of AI, especially Large Language Models (LLMs) and autonomous agents, is reshaping the way data-intensive research is conducted in HEP. This talk presents a vision of this transformation through Dr.Sai, a pioneering LLM-powered multi-agent system developed at BESIII. Dr.Sai interprets physicists' natural language queries, autonomously decomposes them into subtasks (e.g. data...

    Go to contribution page
  235. Savannah Thais (Hunter College)
    27/05/2026, 10:00
    Track 4 - Distributed computing
    Plenary Presentation

    Artificial intelligence is rapidly becoming inextricable from physics research, with growing attention now turning to the semi-autonomous roles AI agents might play in scientific discovery. Yet many of the measurements and evaluations underpinning AI research lack the rigor and reliability typically expected to support knowledge production in physics. In this talk, I will explore the risks...

    Go to contribution page
  236. Jose Carlos Luna Duran (CERN)
    27/05/2026, 11:00
    Track 4 - Distributed computing
    Plenary Presentation

    This talk explores the security implications of adopting AI in operational environments, with a strong focus on the often-underestimated risks of data leakage, model misuse, and over-trust in automated outputs. It emphasizes why human review and human-in-the-loop controls remain essential, especially when AI systems are integrated into security-critical workflows. The talk also introduces how...

    Go to contribution page
  237. Harris Tzovanakis (CERN)
    27/05/2026, 11:30
    Track 3 - Offline data processing
    Plenary Presentation

    This panel discussion will allow CHEP 2026 attendees to hear different perspectives on how Artificial Intelligence technologies and tools can impact researchers in high energy and nuclear physics. As will be discussed in the plenary and parallel program of the conference, AI is already impacting the way HEP researchers write code; formulate analysis workflows; make hardware purchasing...

    Go to contribution page
  238. Stefano Dal Pra (INFN)
    27/05/2026, 13:45
    Track 4 - Distributed computing
    Oral Presentation

    We describe a set of tools developed to ease the execution of large computing campaigns across multiple, heterogeneous computing resource providers. The tool suite has been adopted to perform the All-sky Continuous GW search on the data of the fourth LIGO-Virgo-KAGRA Observation cycle (O4), running CPU payloads on the IGWN Grid, INFN-CNAF, ICSC Grid (based on HTCondor, with different...

    Go to contribution page
  239. Mohamed Aly (Princeton University (US)), Oksana Shadura (University of Nebraska Lincoln (US))
    27/05/2026, 13:45
    Track 9 - Analysis software and workflows
    Oral Presentation

    The upcoming High-Luminosity Large Hadron Collider (HL-LHC) at CERN will deliver an unprecedented volume of data for High Energy Physics (HEP). This wealth of information offers significant opportunities for scientific discovery, but its scale challenges traditional analysis workflows. In this talk, we present CMS analysis pipelines being developed to meet HL-LHC demands. These pipelines build...

    Go to contribution page
  240. Leonardo Monaco (University of Glasgow (GB))
    27/05/2026, 13:45
    Track 2 - Online and real-time computing
    Oral Presentation

    The High Luminosity Large Hadron Collider (HL-LHC) is scheduled to begin operation in 2030 and will increase the number of proton-proton collisions per bunch-crossing from around 60 to 200. The upgraded trigger system of the ATLAS experiment will record around 10 kHz of collisions to disk for physics analysis; this reduction is achieved with an L0 trigger that will feed the Event Filter...

    Go to contribution page
  241. Peter Elmer (Princeton University (US))
    27/05/2026, 13:45
    Track 6 - Software environment and maintainability
    Oral Presentation

    Due to their scale, complexity and cost, large physics/astrophysics projects are very often international “team-science” endeavors. Over decades, these scientific communities have been learning how to build collaborations upon regional capabilities and interests, iterating with each new generation of large scientific facilities required to advance their scientific knowledge....

    Go to contribution page
  242. Prof. Vladimir Ivantchenko (CERN)
    27/05/2026, 13:45
    Track 5 - Event generation and simulation
    Oral Presentation

    In this presentation we review recent updates in the Geant4 electromagnetic (EM) physics sub-libraries in view of Run 4 and other collider experiments. The EM sub-libraries are evolving to make the code more robust and compact, and compatible with the requirements of Run 4 detectors at the LHC and other future collider experiments. A significant role in this respect is played by the G4HepEm...

    Go to contribution page
  243. Daniele Massaro (CERN)
    27/05/2026, 13:45
    Track 5 - Event generation and simulation
    Oral Presentation

    The High-Luminosity LHC will reach unprecedented precision in the measurements of key observables in proton-proton collisions. To accurately predict the rates of such collision events, the simulation of the hard scattering event must include higher-order corrections, in particular Next-to-Leading Order (NLO) terms in the perturbative expansion of the cross section.
    The computational...

    Go to contribution page
  244. Dr Oliver Gregor Rietmann (CERN)
    27/05/2026, 13:45
    Track 3 - Offline data processing
    Oral Presentation

    In high performance computing, we strive for algorithms on large arrays to be as performant as possible. However, the performance of such an algorithm is also affected by the memory layout of these arrays. The most natural memory layout is Array-of-Structures (AoS), which performs well for strided access patterns and for large classes. On the other hand, Structure-of-Arrays (SoA) allows for...
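    The layout contrast can be sketched in a few lines of Python (a toy illustration with invented field names; `array.array` stands in for contiguous storage):

```python
from array import array

# Array-of-Structures (AoS): one record per particle; the fields of a
# particle sit together, so reading only "px" strides over whole records.
particles_aos = [
    {"px": 1.0, "py": 0.0, "pz": 0.0, "e": 1.0},
    {"px": 2.0, "py": 0.0, "pz": 0.0, "e": 2.0},
]

# Structure-of-Arrays (SoA): one contiguous array per field; reading
# "px" streams through adjacent memory, which vectorizes well.
particles_soa = {
    "px": array("d", [1.0, 2.0]),
    "py": array("d", [0.0, 0.0]),
    "pz": array("d", [0.0, 0.0]),
    "e":  array("d", [1.0, 2.0]),
}

def sum_px_aos(ps):
    # Touches every record just to extract one field.
    return sum(p["px"] for p in ps)

def sum_px_soa(ps):
    # Touches only the one contiguous field array.
    return sum(ps["px"])
```

    Both functions return the same value; the difference lies entirely in which memory is traversed to produce it.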

    Go to contribution page
  245. zhuo meng (Institute of High Energy Physics)
    27/05/2026, 13:45
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    Currently, High Energy Physics (HEP) faces increasingly severe data storage challenges. Next-generation particle collider experiments are expected to generate unprecedented data volumes and acquisition rates, demanding continuous I/O capabilities with sub-millisecond latency and PB/s-level throughput. Traditional kernel-based file systems, burdened by context switching, interrupt handling, and heavy...

    Go to contribution page
  246. Ryan Taylor (University of Victoria (CA))
    27/05/2026, 13:45
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The increasing computational scale and complexity of frontier scientific experiments, such as the ATLAS experiment at the Large Hadron Collider, continues to motivate a drive toward operational models that are resilient, automated, reproducible, and scalable. The University of Victoria (UVic) remains at the forefront of advancing cloud-native deployment patterns to address these challenges....

    Go to contribution page
  247. Ryunosuke O'Neil (CERN)
    27/05/2026, 14:03
    Track 4 - Distributed computing
    Oral Presentation

    Delivering reproducible computational workflows across heterogeneous and distributed computing infrastructures remains a significant challenge for many scientific communities. Workflow standards such as the Common Workflow Language (CWL) offer a portable and declarative means to describe complex pipelines, but their integration into large-scale, data-driven workload management systems remains...

    Go to contribution page
  248. Alessandro Zaio (INFN e Universita Genova (IT))
    27/05/2026, 14:03
    Track 2 - Online and real-time computing
    Oral Presentation

    Trigger systems quickly inspect the reconstructed physical quantities obtained from collisions at hadron colliders, in order to decide whether to save the corresponding detector data for offline analysis. Processing the data coming from pixel detectors is a crucial challenge for the experiments running at the Large Hadron Collider (LHC) at CERN, because of the large number of...

    Go to contribution page
  249. Izaac Sanderswood (Univ. of Valencia and CSIC (ES))
    27/05/2026, 14:03
    Track 3 - Offline data processing
    Oral Presentation

    Precise reconstruction of particle decay chains is an essential tool for a wide range of analyses in particle physics experiments, particularly those focused on flavour dynamics and CP violation. We present a novel decay tree reconstruction framework designed to handle complex topologies with deeply constrained particle decays, trajectory extrapolations over long distances inside regions with...

    Go to contribution page
  250. Xinnan Wang (IHEP)
    27/05/2026, 14:03
    Track 5 - Event generation and simulation
    Oral Presentation

    To meet the requirements of enhanced radiation tolerance and sustained tracking performance, the BESIII inner tracker has been upgraded to a Cylindrical Gas Electron Multiplier (CGEM). We have developed a comprehensive simulation framework for the CGEM response, featuring a realistic digitization model refined with experimental data. The framework simulates the full signal-formation chain...

    Go to contribution page
  251. Andrea Valassi (CERN)
    27/05/2026, 14:03
    Track 5 - Event generation and simulation
    Oral Presentation

    The first production release of the CUDACPP plugin for the Madgraph5_aMC@NLO generator, which speeds up matrix element (ME) calculations for leading-order (LO) processes using a data parallel approach on vector CPUs and GPUs, was delivered in October 2024. This was described at CHEP2024 and in other previous publications by the team behind that effort. In this CHEP2026 contribution, I present...

    Go to contribution page
  252. Inga Katarzyna Lakomiec (Georg August Universitaet Goettingen (DE))
    27/05/2026, 14:03
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    German computing sites play a vital role in the Large Hadron Collider (LHC) job processing and data storage as part of the Worldwide LHC Computing Grid (WLCG). The storage and computing contributions of university-based Tier-2 centres in Germany are transitioning to the Helmholtz Centres and National High Performance Computing (NHR) sites, respectively, to meet the growing data and...

    Go to contribution page
  253. Alexander Held (University of Wisconsin Madison (US)), Artur Cordeiro Oudot Choi (University of Washington (US))
    27/05/2026, 14:03
    Track 9 - Analysis software and workflows
    Oral Presentation

    The last few years have seen a wide range of developments towards scalable solutions for end-user physics analysis to meet the upcoming HL-LHC computing challenges. The IRIS-HEP software institute has created projects in a “Challenge” format to checkpoint the progress. The “Analysis Grand Challenge” probes analysis workflows and interfaces with a limited dataset size, while the “200 Gbps...

    Go to contribution page
  254. Caterina Doglioni (The University of Manchester (GB))
    27/05/2026, 14:03
    Track 6 - Software environment and maintainability
    Oral Presentation

    The EU-funded EVERSE project aims to establish a framework for research software and code excellence, collaboratively designed and championed by five European research communities, including physics and astronomy. EVERSE’s ultimate ambition is to contribute towards a cultural change where research software is recognized as a first-class citizen of the scientific process and the people that...

    Go to contribution page
  255. Jose Flix Molina (CIEMAT - Centro de Investigaciones Energéticas Medioambientales y Tec. (ES))
    27/05/2026, 14:03
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The use of the networking protocol IPv6 on Worldwide LHC Computing Grid (WLCG) storage has been very successful and has been presented at earlier CHEP conferences. The campaign to deploy IPv6 on CPU services and worker nodes is going well. Dual-stack IPv6/IPv4 is not, however, a viable long-term solution; the ultimate goals include allowing WLCG sites to move completely to...

    Go to contribution page
  256. Mattias Wadenstein (University of Umeå (SE))
    27/05/2026, 14:21
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The driver for phasing out IPv4 in the Nordic Tier-1 site (NT1, aka NDGF-T1) sooner rather than later is that we foresee a significant risk of running out of IPv4 addresses when scaling storage servers horizontally in order to handle the High Luminosity LHC (HL-LHC) data rates. We expect data rates 10-20 times higher when the HL-LHC comes online in 2030, and the most cost-effective way to...

    Go to contribution page
  257. Noemi Calace (CERN)
    27/05/2026, 14:21
    Track 3 - Offline data processing
    Oral Presentation

    The ATLAS experiment is undertaking a major modernisation of its reconstruction software to meet the demanding conditions of High-Luminosity LHC (HL-LHC) operations. A key element of this effort is the use of the experiment-independent ACTS toolkit for track reconstruction, which requires a major redesign of several parts of the current ATLAS software. This contribution will describe the ACTS...

    Go to contribution page
  258. Hugo Gonzalez Labrador (CERN)
    27/05/2026, 14:21
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    Large-scale scientific collaborations such as WLCG need reliable and secure data transfers that optimize the available bandwidth and resources of the grid. HTTP-based third-party copy (TPC) transfers follow a de facto community standard for moving files directly between storage endpoints (peer-to-peer). Here we report on an extension to that standard promoting improved data integrity through...
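    The underlying integrity check can be illustrated with a small sketch: both endpoints compute a checksum of the payload, and the transfer is accepted only if they agree. The digest formats below (RFC 3230-style values with adler32 or SHA-256) are illustrative assumptions, not the extension proposed in this contribution:

```python
import base64
import hashlib
import zlib

def adler32_digest(data: bytes) -> str:
    # adler32 is a checksum commonly used for WLCG file transfers;
    # rendered here as an RFC 3230-style "Digest" header value.
    return f"adler32={zlib.adler32(data) & 0xFFFFFFFF:08x}"

def sha256_digest(data: bytes) -> str:
    # Stronger alternative: base64-encoded SHA-256, also RFC 3230 style.
    h = hashlib.sha256(data).digest()
    return "sha-256=" + base64.b64encode(h).decode("ascii")

def verify_transfer(src: bytes, dst: bytes) -> bool:
    # Accept the transfer only if source and destination digests match.
    return adler32_digest(src) == adler32_digest(dst)
```

    In a real TPC transfer the digest would be exchanged as an HTTP header between the two storage endpoints rather than computed on in-memory buffers.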

    Go to contribution page
  259. Pelayo Leguina (Universidad de Oviedo (ES))
    27/05/2026, 14:21
    Track 5 - Event generation and simulation
    Oral Presentation

    The generation of hard-scattering events in high-energy physics is one of the computational bottlenecks in collider phenomenology. MadGraph provides a flexible framework to evaluate these matrix elements, but the sheer scale of Monte Carlo event production required at the LHC drives both execution time and power consumption to critical levels. In this work, we explore the use of Adaptive...

    Go to contribution page
  260. Matthew Feickert (University of Wisconsin Madison (US))
    27/05/2026, 14:21
    Track 6 - Software environment and maintainability
    Oral Presentation

    The packaging of high energy physics software with robust, yet flexible, distribution methods is a complicated problem that has been met with multiple approaches by the community. The HEP Packaging Coordination community project expands packaging of the HEP software ecosystem through building and distributing language-agnostic conda packages on...

    Go to contribution page
  261. David Schultz (University of Wisconsin-Madison)
    27/05/2026, 14:21
    Track 4 - Distributed computing
    Oral Presentation

    After a long delay and several false starts, the IceCube Neutrino Observatory has removed GridFTP and x509 certificate authentication. We have migrated to using the Pelican Platform, the Open Science Data Federation, and WLCG Tokens. While this is a common solution, we required several customizations to work with our existing data warehouse structure and make it easier for scientists to use. We...

    Go to contribution page
  262. Wojciech Krupa (CERN)
    27/05/2026, 14:21
    Track 5 - Event generation and simulation
    Oral Presentation

    Gaussino is an experiment-independent HEP simulation code built on top of the Gaudi software framework. It provides generic components and interfaces for event generation, detector simulation, geometry, monitoring and output. In this talk we give an overview of recent developments in Gaussino, and some examples of their adoption in the LHCb Simulation since our previous report at CHEP2024. In...

    Go to contribution page
  263. Gagik Gavalian (Jefferson National Lab)
    27/05/2026, 14:21
    Track 2 - Online and real-time computing
    Oral Presentation

    Charged-particle track reconstruction is a central component of nuclear physics experiments, providing the foundation for identifying and analyzing particles produced in high-energy interactions. While traditional techniques—such as pattern-recognition algorithms and Kalman-filter–based tracking—have long been the standard, modern machine learning (ML) methods are increasingly addressing the...

    Go to contribution page
  264. Artur Cordeiro Oudot Choi (University of Washington (US))
    27/05/2026, 14:21
    Track 9 - Analysis software and workflows
    Oral Presentation

    As the HL-LHC prepares to produce increasingly large volumes of data, the need for efficient data extraction and access services is growing. To address this challenge, the ServiceX toolset was developed to connect user-level analysis workflows to remotely stored datasets. ServiceX functions as a query-based sample delivery system, where client requests trigger Kubernetes-distributed workloads...

    Go to contribution page
  265. Andreea Prigoreanu (IT-SD)
    27/05/2026, 14:39
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    Author: Andreea Prigoreanu (University Politehnica of Bucharest)
    on behalf of the ALICE collaboration

    The processing of ALICE experiment data relies on high-quality and reliable storage. The central file catalogue serves as the database that tracks over 2.6 billion files and their locations across more than 50 storage elements on the ALICE Grid. It is essential that the physical storage...

    Go to contribution page
  266. Dr Yu Hu (IHEP, CAS)
    27/05/2026, 14:39
    Track 9 - Analysis software and workflows
    Oral Presentation

    The High Energy Photon Source (HEPS) is a fourth-generation, high-energy synchrotron radiation facility scheduled to enter its early operational and commissioning phases by the end of 2025. With its significantly enhanced photon brightness and detector performance, HEPS is expected to generate over 200 petabytes (PB) of experimental data annually across 14 beamlines in Phase I, with data...

    Go to contribution page
  267. Takuya Kumaoka (University of Tsukuba (JP))
    27/05/2026, 14:39
    Track 2 - Online and real-time computing
    Oral Presentation

    The Electron-Ion Collider (EIC) will introduce new paradigms in large-scale nuclear physics experiments. With luminosities reaching up to 10³⁴ cm⁻²s⁻¹, the ePIC experiment must process extremely large data volumes and therefore adopts a flexible, scalable, and efficient streaming data acquisition model. This system replaces custom level-1 trigger electronics, enables the use of commercial...

    Go to contribution page
  268. Panos Paparrigopoulos (CERN)
    27/05/2026, 14:39
    Track 4 - Distributed computing
    Oral Presentation

    The Computing Resource Information Catalogue (CRIC) is a central element of the WLCG information ecosystem and a key operational tool for ATLAS Distributed Computing, providing authoritative, experiment-oriented views of sites, services, data-management endpoints and configuration parameters across distributed infrastructures. In preparation for HL-LHC, CRIC has undergone a major evolution: a...

    Go to contribution page
  269. Jan Stark (Laboratoire des 2 Infinis - Toulouse, CNRS / Univ. Paul Sabatier (FR))
    27/05/2026, 14:39
    Track 3 - Offline data processing
    Oral Presentation

    The High-Luminosity LHC (HL-LHC) will bring large increases in collision rate and pile-up. This represents a significant surge in both data quantity and complexity. In addition to excellent physics performance, high computational efficiency is critical to fully exploit the HL-LHC
    datasets. In response, substantial R&D efforts in machine learning (ML) have been initiated by the ATLAS...

    Go to contribution page
  270. George Hallett (University of Warwick (GB))
    27/05/2026, 14:39
    Track 6 - Software environment and maintainability
    Oral Presentation

    The LHCb Analysis Productions system provides a large scale, centralised, and reproducible framework for executing analysis workflows on the grid using officially released LHCb software. However, some analyses require prototyping or development of custom modifications to core packages, which cannot easily be deployed within the standard release cycle. It is therefore desirable to enable...

    Go to contribution page
  271. Marco Andrea Battaglieri (INFN e Universita Genova (IT))
    27/05/2026, 14:39
    Track 5 - Event generation and simulation
    Oral Presentation

    Modern accelerator facilities operating at the intensity frontier—such as CERN, Jefferson Lab, and the forthcoming EIC—produce petabyte-scale datasets that probe the structure of visible matter at the femtometer scale. Fully exploiting and preserving this information requires new AI-driven strategies for data analysis and modeling. We present a program to develop Machine-Learning-based Physics...

    Go to contribution page
  272. Jing Chen (Sun Yat-sen University)
    27/05/2026, 14:39
    Track 5 - Event generation and simulation
    Oral Presentation

    The Jiangmen Underground Neutrino Observatory (JUNO) is a 20-kt liquid-scintillator neutrino detector in China, ~53 km from two nuclear power plant complexes. It aims to determine the neutrino mass ordering and precisely measure neutrino oscillation parameters, while enabling studies on solar, atmospheric, geoneutrino, and supernova neutrino physics. The detector construction was completed,...

    Go to contribution page
  273. Anna Giannakou
    27/05/2026, 14:39
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    High-energy physics experiments routinely perform petabyte-scale file transfers across distributed grid sites while simultaneously streaming data for interactive analysis, making traffic type differentiation critical for network orchestration, bandwidth forecasting, and responsiveness to operational demands. We present a machine learning–based traffic classification system that requires no...
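    As a toy illustration of the idea (the feature names and centroid values are invented, not the presented system), flows can be separated without payload inspection by comparing simple flow statistics against per-class centroids:

```python
import math

# Toy flow features: (mean packet size in bytes, mean inter-arrival gap in ms).
# Centroids are invented for illustration; a real system would learn them
# from labelled or clustered flow records.
CENTROIDS = {
    "bulk-transfer": (1400.0, 0.5),   # large packets, sent back-to-back
    "interactive":   (200.0, 50.0),   # small packets, bursty gaps
}

def classify_flow(mean_pkt_size: float, mean_gap_ms: float) -> str:
    # Assign the flow to the nearest centroid in normalized feature space.
    def dist(label: str) -> float:
        cs, cg = CENTROIDS[label]
        # Normalize each dimension so the two feature scales are comparable.
        return math.hypot((mean_pkt_size - cs) / 1500.0,
                          (mean_gap_ms - cg) / 100.0)
    return min(CENTROIDS, key=dist)
```

    A nearest-centroid rule is the simplest possible stand-in for the ML classifier described in the abstract; the point is that coarse flow statistics alone, with no payload inspection, already separate the two traffic types.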

    Go to contribution page
  274. LI Haibo
    27/05/2026, 14:57
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    In high energy physics (HEP) experiments, large-scale storage clusters typically comprise tens of thousands of disks, and their reliability is essential for continuous data acquisition, processing, and long-term preservation. Traditional rule-based disk failure detection approaches are increasingly insufficient for such environments due to heterogeneous device types, complex workload patterns,...

    Go to contribution page
  275. Dr Maxim Gonchar (Joint Institute for Nuclear Research)
    27/05/2026, 14:57
    Track 9 - Analysis software and workflows
    Oral Presentation

    The Daya Bay Reactor Neutrino experiment has released its full dataset of neutrino interactions with the final-state neutron captured on gadolinium, collected during 9 years of operation. The dataset was complemented by a model of the experiment in Python and a few analysis examples, reproducing the final measurement of neutrino oscillation parameters...
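    For reference, the fitted quantity in such analyses is the standard three-flavour electron-antineutrino survival probability (standard notation, not quoted from the data release itself):

```latex
\begin{aligned}
P(\bar\nu_e \to \bar\nu_e) = {}& 1
  - \cos^4\theta_{13}\,\sin^2 2\theta_{12}\,\sin^2\Delta_{21} \\
& - \sin^2 2\theta_{13}\left(\cos^2\theta_{12}\,\sin^2\Delta_{31}
  + \sin^2\theta_{12}\,\sin^2\Delta_{32}\right),
\qquad \Delta_{ij} \equiv \frac{\Delta m^2_{ij}\,L}{4E}
\end{aligned}
```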

    Go to contribution page
  276. Dominik Duda (The University of Edinburgh (GB))
    27/05/2026, 14:57
    Track 5 - Event generation and simulation
    Oral Presentation

    FastChain is a key component of ATLAS preparations for Run 4, providing a unified, configurable framework that integrates simulation, reconstruction, and downstream data reduction into a single end-to-end workflow. By eliminating intermediate data formats and enabling tight coupling between workflow stages, FastChain improves resource utilization efficiency and reduces disk I/O.

    To improve...

    Go to contribution page
  277. Dr Gavin Davies (University Of Mississippi)
    27/05/2026, 14:57
    Track 6 - Software environment and maintainability
    Oral Presentation

    The NOvA experiment has delivered world-leading neutrino physics results over ten years, enabled by an evolving software and computing infrastructure that has adapted to major technical transitions while maintaining operational stability. This talk discusses how NOvA has integrated modern AI/ML workflows into traditional HEP pipelines and balanced innovation against the demands of continuous...

    Go to contribution page
  278. Eleni Xochelli (Universitat Autonoma de Barcelona (ES))
    27/05/2026, 14:57
    Track 2 - Online and real-time computing
    Oral Presentation

    The upcoming high-luminosity phase of the LHC (HL-LHC) presents several challenges for the ATLAS experiment's Trigger and Data Acquisition system, necessitating a
    full upgrade of the system. A key challenge for the Event Filter, where high-level event reconstruction and final event selection will run at 1 MHz, lies in the computational demand for online track reconstruction within the Inner...

    Go to contribution page
  279. Marian Babik (CERN), Tristan Sullivan (University of Victoria (CA))
    27/05/2026, 14:57
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    Research and Education Networks (RENs) transport vast amounts of scientific data, but gaining granular visibility into this traffic is difficult. Understanding the composition of this traffic is essential for enabling efficient network use, traffic steering, future provisioning, and capacity planning. Traditional network flow data offers only limited insight into the specific activities...

    Go to contribution page
  280. Sam Young
    27/05/2026, 14:57
    Track 3 - Offline data processing
    Oral Presentation

    Liquid argon time projection chambers (LArTPCs) provide dense, high-fidelity 3D measurements of particle interactions and underpin many current and future neutrino and rare-event experiments. Event reconstruction typically relies on complex detector-specific pipelines that use tens of hand-engineered pattern recognition algorithms or cascades of task-specific neural networks that require...

    Go to contribution page
  281. Qi Luo (Computing Center, Institute of High Energy Physics, Chinese Academy of Sciences)
    27/05/2026, 14:57
    Track 4 - Distributed computing
    Oral Presentation

    The High Energy cosmic Radiation Detection facility (HERD) is a long-term space-based high-energy physics experiment onboard the China Space Station, expected to produce large and heterogeneous datasets, including flight data, simulation data, and multi-version reconstructed data. To efficiently support large-scale computing and long-term physics analysis, a unified data management and...

    Go to contribution page
  282. Dr Florian Rehm (CERN), Mr Luke Jason van Leijenhorst (CERN)
    27/05/2026, 16:15
    Track 6 - Software environment and maintainability
    Oral Presentation

    Efficiently retrieving knowledge from particle physics research and documentation within CERN presents significant challenges due to specialized terminology and complex structural dependencies. This work presents the evolution of AccGPT, a CERN internal knowledge retrieval system, moving beyond baseline Retrieval-Augmented Generation (RAG) to address these limitations. We introduce a composite...

    Go to contribution page
  283. Jiri Chudoba (Czech Academy of Sciences (CZ))
    27/05/2026, 16:15
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The Czech WLCG Tier-2 reliably delivers computing and storage pledges to the LHC experiments through a geographically distributed infrastructure. CZ-Tier-2 resources are deployed across three sites and interconnected by high-capacity links provided by the Czech NREN, CESNET. In addition, significant CPU capacity from the Czech national supercomputing center IT4I is integrated into WLCG...

    Go to contribution page
  284. Juan Gonzalez Caminero (CERN)
    27/05/2026, 16:15
    Track 5 - Event generation and simulation
    Oral Presentation

    The use of heterogeneous CPU–GPU architectures is becoming an increasingly important consideration for LHC experiments in view of the growing computing demands of the HL-LHC era. WLCG sites and LHC experiments must make decisions in the short to medium term on the deployment and integration of GPUs, in order for these resources to be available and effectively exploited for HL-LHC operations. A...

    Go to contribution page
  285. Dr Stefano Bagnasco (Istituto Nazionale di Fisica Nucleare, Torino)
    27/05/2026, 16:15
    Track 4 - Distributed computing
    Oral Presentation

    The LIGO–Virgo–KAGRA (LVK) Collaboration closed its fourth observation period (O4) in November 2025, its longest and richest to date. During O4, the detectors observed roughly 250 gravitational-wave candidate signals in real time, and more are being extracted from the data by offline analysis. Outstanding results include, for example, the first detection of “second-generation” black holes, in which the...

    Go to contribution page
  286. Jonas Hahnfeld (CERN & Goethe University Frankfurt)
    27/05/2026, 16:15
    Track 9 - Analysis software and workflows
    Oral Presentation

    Many HEP analyses rely on histograms for the statistical interpretation of experimental data, using them not only for visualization but as data structures that can be computed with. ROOT’s histogram package was developed in the 1990s and has been widely used for the past 30 years. Despite its success, the design is starting to show limitations for modern analyses and the classes lack some...

    Go to contribution page
  287. Stefan Krischer (RWTH Aachen University)
    27/05/2026, 16:15
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The rapidly growing energy demand of large-scale scientific computing infrastructures could significantly impact the environmental footprint of future experiments. For the Einstein Telescope (ET), sustainability is therefore a key design criterion from an early stage. The SCOPE project (Sustainable Computing Prototype for the Einstein Telescope) addresses this challenge by developing and...

    Go to contribution page
  288. Jiahui Zhuo (Univ. of Valencia and CSIC (ES))
    27/05/2026, 16:15
    Track 2 - Online and real-time computing
    Oral Presentation

    In Run 3 data taking, the LHCb experiment at CERN operates with a fully software-based first-level trigger (HLT1) on GPUs that processes 30 million collision events per second with a data throughput of 4 TB/s. Real-time track reconstruction is essential for HLT1 because most trigger decisions rely on reconstructed tracks or on higher-level objects built from them, such as secondary...

    Go to contribution page
  289. Prof. Ziyan Deng
    27/05/2026, 16:15
    Track 3 - Offline data processing
    Oral Presentation

    The BESIII experiment, which studies tau-charm physics at the BEPCII accelerator, has been operating since 2009; both the BEPCII accelerator and the BESIII detector have been upgraded several times over the years. The BESIII offline software system, developed on the Gaudi framework, provides the fundamental basis for physics analysis.
    This talk focuses on the...

    Go to contribution page
  290. Mr Ivan Knezevic (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
    27/05/2026, 16:15
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    The NAPMIX project aims to establish a cross-domain FAIR-compliant metadata schema for the Nuclear, Astro, and Particle (NAP) physics communities. A core challenge is reconciling the evolving nature of experimental metadata, enriched progressively from proposal through analysis, with the immutability required by Persistent Identifiers (DOIs) for findability and interoperability. This...

    Go to contribution page
  291. Francesco Giacomini (INFN CNAF)
    27/05/2026, 16:15
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    The StoRM system provides storage services for scientific communities relying on distributed computing infrastructures through multiple loosely-coupled components developed in different programming languages at INFN-CNAF, including StoRM WebDAV and StoRM Tape. StoRM WebDAV is a StoRM component which provides HTTP/WebDAV access to distributed storage systems, while StoRM Tape is an...

    Go to contribution page
  292. Dmitry Litvintsev (Fermi National Accelerator Lab. (US)), Marina Sahakyan, Mr Tigran Mkrtchyan (DESY)
    27/05/2026, 16:33
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    The dCache project provides an open-source, highly scalable distributed storage system deployed at numerous laboratories worldwide. Its modular architecture supports high-rate data ingestion, WAN data distribution, efficient HPC access, and long-term archival storage. Although initially developed for high-energy physics, dCache now serves a broad range of scientific communities with diverse...

    Go to contribution page
  293. Manfred Peter Fackeldey (Princeton University (US))
    27/05/2026, 16:33
    Track 9 - Analysis software and workflows
    Oral Presentation

    The community's adoption of Hist and boost-histogram, both part of the Scikit-HEP software stack, leads to increasingly frequent work with dense, high-dimensional histograms. These histograms become a memory bottleneck in modern large-scale high-energy physics (HEP) analyses because the Cartesian product of all axes makes them exceedingly large.
    To solve this problem, we propose...
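    The memory blow-up motivating this work can be illustrated with a short back-of-the-envelope sketch (purely illustrative, not the proposed solution): a dense histogram stores one counter per bin, and the bin count is the Cartesian product of the per-axis bin counts, so every added axis multiplies the footprint.

    ```python
    # Illustration: why dense N-dimensional histograms explode in memory.
    # The total bin count is the product of the per-axis bin counts, so
    # memory grows multiplicatively with every added axis.
    from math import prod

    def dense_histogram_bytes(bins_per_axis, bytes_per_bin=8):
        """Memory of a dense histogram storing one float64 counter per bin."""
        return prod(bins_per_axis) * bytes_per_bin

    # e.g. a 5-axis histogram with 50 bins per axis:
    print(dense_histogram_bytes([50] * 5) / 1e9)  # -> 2.5 (GB, for one histogram)
    ```

    At 2.5 GB for a single modestly binned 5-axis histogram, an analysis with many systematic-variation copies quickly exhausts memory, which is the bottleneck the contribution addresses.
    
    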

    Go to contribution page
  294. Thomas Owen James (CERN)
    27/05/2026, 16:33
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The High-Luminosity LHC (HL-LHC) era will confront particle physics experiments with unprecedented challenges in data volume, computational complexity, and real-time decision making. Preparing for this paradigm shift requires innovation across the full computing and triggering stack. Within this context, CERN openlab plays a central role in exploring and validating emerging technologies in...

    Go to contribution page
  295. Dr Victoria Tokareva (Karlsruhe Institute of Technology)
    27/05/2026, 16:33
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    The PUNCH4NFDI consortium (Particles, Universe, NuClei and Hadrons for the German National Research Data Infrastructure) comprises the astro-, astroparticle, particle and nuclear physics communities, which have a long history of computationally intensive research on big data. Their data life cycles are characterized by well-established data curation practices and by highly diverse metadata embedded in custom file...

    Go to contribution page
  296. Tao Lin (Chinese Academy of Sciences (CN))
    27/05/2026, 16:33
    Track 3 - Offline data processing
    Oral Presentation

    The Jiangmen Underground Neutrino Observatory (JUNO) is a multipurpose neutrino experiment designed to determine the neutrino mass ordering and to achieve high-precision measurements of neutrino oscillation parameters. Construction of the JUNO detector was completed at the end of 2024, followed by commissioning of the water phase and the subsequent liquid scintillator filling phase. Physics...

    Go to contribution page
  297. Dr Jerome LAURET (Brookhaven National Laboratory)
    27/05/2026, 16:33
    Track 6 - Software environment and maintainability
    Oral Presentation

    Large-scale nuclear and particle physics experiments face a dual preservation challenge: maintaining long-term access to vast data volumes and the tacit scientific knowledge embedded in internal, often private or restricted, collaboration records. Public large language models (LLMs) cannot address this need for private data. To solve this, we developed SciBot, a locally deployed,...

    Go to contribution page
  298. Ioannis Maznas (A)
    27/05/2026, 16:33
    Track 2 - Online and real-time computing
    Oral Presentation

    The upcoming high-luminosity phase of the LHC (HL-LHC) presents several challenges for the ATLAS experiment's Trigger and Data Acquisition system, necessitating a full upgrade of the system. A key challenge for the Event Filter, where high-level event reconstruction and final event selection will run at 1 MHz, lies in the computational demand for online track reconstruction within the Inner...

    Go to contribution page
  299. Paul James Laycock (Universite de Geneve (CH))
    27/05/2026, 16:33
    Track 4 - Distributed computing
    Oral Presentation

    The Einstein Telescope (ET) will be the next-generation European underground Gravitational Wave (GW) observatory, designed to open a new observational window on the Universe starting in the mid to late 2030s. Building upon the experience of current GW detectors such as LIGO and Virgo, ET will achieve a significant increase in sensitivity, enabling the detection of a much larger number of GW...

    Go to contribution page
  300. Ivan Glushkov (Brookhaven National Laboratory (US))
    27/05/2026, 16:51
    Track 4 - Distributed computing
    Oral Presentation

    ATLAS Distributed Computing (ADC) is the set of infrastructure, software stack and experts that handles up to 1 million computing slots and over 1 EB of stored data to meet the computing needs of the ATLAS experiment at the LHC. After a short description of the ADC structure and operational performance, this contribution focuses on the latest ADC innovations as well as...

    Go to contribution page
  301. Lael Verace (University of Wisconsin-Madison (US))
    27/05/2026, 16:51
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The next generation of scientific experiments, particularly those found in high energy and nuclear physics, will produce unprecedented data volumes which will push scientific computing infrastructures to rely on terabit-scale networks for rapid, reliable data movement between globally distributed facilities. In parallel, advances in artificial intelligence continue to significantly increase...

    Go to contribution page
  302. Dario Barberis (University of California Berkeley (US))
    27/05/2026, 16:51
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    The ATLAS EventIndex is the global catalogue of all real and simulated data produced and processed by ATLAS. The current implementation, developed and deployed for LHC Run 3 (2022-2026) has to evolve in order to be able to ingest, store and serve the much larger amount of data that will be produced during the High-Luminosity LHC operation years, starting in 2030. The modular architecture of...

    Go to contribution page
  303. CMS Collaboration
    27/05/2026, 16:51
    Track 5 - Event generation and simulation
    Oral Presentation

    During CERN LHC Run 3 data taking, the CMS Geant4-based full simulation was upgraded a few times. The Geant4 version was changed from 10.7.2 to 11.2.2. Other libraries used for the Monte Carlo simulation of CMS—CLHEP, DD4hep, VecGeom—were also updated. A new library, G4HepEm, was adopted for the CMS simulation, improving CPU performance both for Run 3 and Run 4. In this work, we discuss...

    Go to contribution page
  304. Felix Philipp Zinn (Rheinisch Westfaelische Tech. Hoch. (DE))
    27/05/2026, 16:51
    Track 9 - Analysis software and workflows
    Oral Presentation

    In high energy physics (HEP), the measurement of physical quantities often involves intricate data analysis workflows that include the application of kinematic cuts, event categorization, machine learning techniques, and data binning, followed by the setup of a statistical model. Each step in this process requires careful selection of parameters to optimize the outcome for statistical...

    Go to contribution page
  305. Marco Riggirello (Scuola Normale Superiore & INFN Pisa (IT))
    27/05/2026, 16:51
    Track 2 - Online and real-time computing
    Oral Presentation

    The High Luminosity LHC (HL-LHC) presents an unprecedented computing challenge, characterized by a pile-up of up to 200 interactions per bunch crossing and extreme data rates. To cope with these conditions, the CMS experiment is replacing its tracking system with a novel Outer Tracker capable of contributing to the Level-1 (L1) Trigger. This upgrade introduces a paradigm shift in data...

    Go to contribution page
  306. Andrew Paul Olivier (Argonne National Laboratory)
    27/05/2026, 16:51
    Track 3 - Offline data processing
    Oral Presentation

    The Deep Underground Neutrino Experiment (DUNE) will deploy four 10 kt fiducial mass liquid argon-based tracking calorimeters to study neutrino oscillation properties, supernova neutrinos, and beyond the standard model physics. To accomplish its diverse physics program, DUNE must read out over 1000 time-samples of waveforms for each of its nearly 400,000 channels. Therefore, a DUNE data...

    Go to contribution page
  307. FNU Mohammad Atif (Brookhaven National Laboratory)
    27/05/2026, 16:51
    Track 6 - Software environment and maintainability
    Oral Presentation

    In the ATLAS experiment, physics reconstruction and validation workflows produce large collections of histograms that must be compared across software versions to detect unexpected changes. Tracing these discrepancies back to their origins in complex codebases like Athena is time consuming and error prone. We present an approach to automate this root-cause analysis by combining vision-enabled...

    Go to contribution page
  308. Rohini Joshi (FHNW)
    27/05/2026, 16:51
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The Square Kilometre Array (SKA) telescopes, currently under construction in South Africa and Australia, are due to enter Science Verification at the end of 2026. The SKA Regional Centre Network (SRCNet) is federating distributed, heterogeneous regional centres into a coherent global infrastructure to store and process SKAO data. This contribution presents the distributed computing challenges...

    Go to contribution page
  309. Mr Tigran Mkrtchyan (DESY)
    27/05/2026, 16:51
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    POSIX access remains the de facto dominant access mechanism in HPC environments, defining how applications and workflows interact with large-scale storage systems. With its NFSv4.1/pNFS protocol implementation, dCache provides native integration into HPC environments, supporting a large number of scientific applications.

    The recent development efforts in dCache have concentrated on...

    Go to contribution page
  310. Panagiotis Gkonis (CERN)
    27/05/2026, 17:09
    Track 6 - Software environment and maintainability
    Oral Presentation

    CERN’s compute farm must sustain 24/7 operation across thousands of worker nodes, a scale that will further expand for LHC Run 4 and beyond. Faults are frequent, both hardware- and software-related, and while some downtime is acceptable, extended recovery periods lead to measurable loss of throughput and operational efficiency. The existing automation system, based on hard-coded decision...

    Go to contribution page
  311. Andreas Joachim Peters (CERN)
    27/05/2026, 17:09
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    EOS, CERN’s large-scale storage system, is continuously evolving to support increasingly diverse and performance-critical scientific workflows. As part of this evolution, we are considering NFS 4.2 as a strategic new protocol for EOS in order to extend its interoperability, leverage kernel-level client performance, and open a path for community collaboration based on open...

    Go to contribution page
  312. Andrzej Novak (Massachusetts Inst. of Technology (US))
    27/05/2026, 17:09
    Track 9 - Analysis software and workflows
    Oral Presentation

    Weakly-supervised methods in the CWoLa (Classification Without Labels) family enable anomaly searches without truth labels by training classifiers on proxy objectives in data. However, these approaches require high-purity control regions which place assumptions on the signal and in practice are difficult to obtain. In addition, many include a number of disjoint steps, making it difficult to...

    Go to contribution page
  313. Seth Johnson (Oak Ridge National Laboratory (US))
    27/05/2026, 17:09
    Track 5 - Event generation and simulation
    Oral Presentation

    Computational geometry for high energy physics detector simulation is notoriously complex, and indeed it is the primary performance bottleneck in the GPU Monte Carlo codes Celeritas and AdePT.
    Detector descriptions contain millions of distinct physical parts with length scales spanning over five orders of magnitude.
    Electromagnetic physics simulations must contend with curved particle...

    Go to contribution page
  314. Petya Vasileva (University of Michigan (US))
    27/05/2026, 17:09
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    We present a series of case studies analyzing real-world network incidents within the WLCG infrastructure using traceroute and performance data from perfSONAR. Our methodology combines path-based anomaly detection with latency and throughput monitoring to identify routing disruptions, topological changes, and their correlation with performance degradation. The approach highlights common...
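    The path-based anomaly detection described can be sketched in miniature (an illustrative toy, not the actual perfSONAR analysis; the function name and the hop IPs are made up): compare consecutive traceroute hop sequences for the same source–destination pair and flag the first hop at which they diverge.

    ```python
    # Toy version of traceroute path-change detection: given two hop-IP
    # sequences for the same src/dst pair, report where they first diverge.
    def path_changed(prev_hops, curr_hops):
        """Return the first hop index where two traceroute paths diverge, or None."""
        for i, (a, b) in enumerate(zip(prev_hops, curr_hops)):
            if a != b:
                return i
        # Same common prefix but different length: diverges where one path ends.
        if len(prev_hops) != len(curr_hops):
            return min(len(prev_hops), len(curr_hops))
        return None

    old = ["192.0.2.1", "198.51.100.7", "203.0.113.9"]
    new = ["192.0.2.1", "198.51.100.99", "203.0.113.9"]
    print(path_changed(old, new))  # -> 1: reroute at the second hop
    ```

    In a real deployment this comparison would run per measurement pair over time, so that detected divergences can be correlated with the latency and throughput degradations the contribution studies.
    
    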

    Go to contribution page
  315. Philippe Canal (Fermi National Accelerator Lab. (US))
    27/05/2026, 17:09
    Track 3 - Offline data processing
    Oral Presentation

    Over many years, ROOT users have repeatedly stumbled over—and loudly rediscovered—the infamous 1 GB limit on individual I/O operations, a constraint that somehow survived long past the era when anyone thought a gigabyte was “a lot.” As experiments embraced ever-larger objects and collections, this limit became an increasingly unavoidable rite of passage. This contribution recounts the...

    Go to contribution page
  316. Victoria Tokareva
    27/05/2026, 17:09
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    High-energy, nuclear and astroparticle physics operate at comparable scales of data volume and complexity and face closely related challenges in data preservation, metadata management, and long-term reuse. While these communities have developed robust experiment-specific data curation practices, metadata remains highly specific and heterogeneous, tightly coupled to custom formats, frameworks...

    Go to contribution page
  317. Christian Sonnabend (CERN, Heidelberg University (DE))
    27/05/2026, 17:09
    Track 2 - Online and real-time computing
    Oral Presentation

    The ALICE time projection chamber (TPC) is the main tracking and particle identification device used in the ALICE experiment at CERN. With a 900 GB/s data rate and a fully GPU-based online reconstruction, the online processing is capable of handling even the densest environments of central Pb--Pb interactions at 50 kHz nominal interaction rate (Run 3) and creates an ideal environment for the...

    Go to contribution page
  318. Sergio Andreozzi
    27/05/2026, 17:09
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The SPECTRUM project (https://spectrumproject.eu/), funded under Horizon Europe, presents its final deliverables: the Strategic Research, Innovation and Deployment Agenda (SRIDA) and the Technical Blueprint for a European compute and data continuum serving data-intensive science communities.

    The SRIDA is structured around four pillars encompassing 13 strategic priorities spanning technical...

    Go to contribution page
  319. Holly Szumila-Vance (Florida International University)
    27/05/2026, 17:09
    Track 4 - Distributed computing
    Oral Presentation

    The ePIC collaboration is developing a highly integrated, multi-purpose detector for the upcoming Electron-Ion Collider (EIC). A co-design approach between the detector and the computing enables a seamless data flow from detector readout to physics analysis, using streaming readout and AI. This system is aimed at accelerating scientific discovery and improving measurement precision through...

    Go to contribution page
  320. Manos Vourliotis (Univ. of California San Diego (US))
    27/05/2026, 17:27
    Track 2 - Online and real-time computing
    Oral Presentation

    This talk presents the new baseline strategy for the Phase-2 tracking of the CMS experiment for online event reconstruction, and for the main iteration of offline tracking. This tracking sequence takes advantage of the combination of cutting-edge tracking algorithms that are either optimized for parallel execution on GPUs (Patatrack and LST), or are vectorized for efficient CPU performance...

    Go to contribution page
  321. Andreas Joachim Peters (CERN)
    27/05/2026, 17:27
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    As part of the CERN Storage Group’s technology investigations, we are exploring future-proof, scalable interactive service architectures that meet demanding requirements for performance and maintainability.
    To achieve this, we are focusing on storage solutions that provide Linux-native filesystem access using open, standards-compliant technologies capable of securely supporting tens of...

    Go to contribution page
  322. Siqi Hou
    27/05/2026, 17:27
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The efficient and stable operation of the data processing pipeline is fundamental to the success of primordial gravitational wave telescopes like the Ali CMB Polarization Telescope (AliCPT). However, the management of its heterogeneous computing and hardware ecosystem—servers, virtual machines, storage systems, and the remote observatory environment at the high-altitude site in Tibet—poses a...

    Go to contribution page
  323. Lino Oscar Gerlach (Princeton University (US)), Mohamed Aly (Princeton University (US))
    27/05/2026, 17:27
    Track 9 - Analysis software and workflows
    Oral Presentation

    We present GRAEP (Gradient-based End-to-End Physics Analysis), a JAX-based framework for building modular, end-to-end differentiable analysis pipelines in high-energy physics. The framework integrates tooling from the Scikit-HEP ecosystem and enables gradient-based optimisation across HEP analysis workflows. We demonstrate an end-to-end differentiable analysis applied to CMS Open Data,...

    Go to contribution page
  324. Harris Tzovanakis (CERN)
    27/05/2026, 17:27
    Track 6 - Software environment and maintainability
    Oral Presentation

    INSPIREHEP is evolving toward a new search and discovery platform that combines AI assisted retrieval with a unified service for metadata and content processing. This contribution presents the design and planned deployment of two core components. The first is an AI based retrieval pipeline that enriches records with embeddings, improves ranking behaviour, and supports natural language queries....

    Go to contribution page
  325. Xiaomei Zhang (Chinese Academy of Sciences (CN))
    27/05/2026, 17:27
    Track 4 - Distributed computing
    Oral Presentation

    The Jiangmen Underground Neutrino Observatory (JUNO) commenced physics data taking in August 2025, marking the transition from commissioning to full-scale operation of its Distributed Computing Infrastructure (DCI) system for real physics data. This contribution presents the Monte Carlo production and physics production experience accumulated during the first year of data taking.
    We provide...

    Go to contribution page
  326. Danilo Piparo (CERN)
    27/05/2026, 17:27
    Track 3 - Offline data processing
    Oral Presentation

    In this contribution we discuss the status of the ROOT project right before the LHC Long Shutdown 3.
    We highlight the usage of ROOT by non-LHC communities, for example gravitational-wave physics, nuclear physics and neutrino physics, as well as experiments at electron colliders. In addition, the usage of ROOT in contexts such as market regulation will be discussed.
    The processes by which the...

    Go to contribution page
  327. Eli Mizrachi (SLAC National Accelerator Laboratory)
    27/05/2026, 17:27
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    LUX-ZEPLIN (LZ) is the world’s most sensitive WIMP dark matter direct-detection experiment, acquiring petabytes of data per year using a dual-phase xenon time projection chamber (TPC) with a seven tonne active mass. User-facing metadata related to TPC conditions and data processing environments are stored in six different SQL and NoSQL databases, which historically were accessed by five...

    Go to contribution page
  328. Andrew Malone Melo (Vanderbilt University (US))
    27/05/2026, 17:27
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    Efficient wide-area data transfers are vital for LHC and multi-site scientific workflows, but host-level configuration, encompassing network, storage, and CPU/memory resources, often constrains end-to-end performance. We present the results of a WLCG mini-capability challenge focused on host optimization using modern systems (RHEL 9, 25+ Gbps NICs, NVMe/SSD storage) across seven ATLAS and CMS...
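    For context, host-level tuning of the kind exercised in such a challenge typically involves kernel network settings like the following (an illustrative sysctl fragment based on publicly documented WAN host-tuning guidance; the values and file name are examples, not the configuration actually used by the participating sites):

    ```ini
    # Illustrative /etc/sysctl.d/90-wan-transfers.conf — example values only.

    # Allow TCP buffers large enough to fill a high-bandwidth, high-latency path.
    net.core.rmem_max = 536870912
    net.core.wmem_max = 536870912
    net.ipv4.tcp_rmem = 4096 87380 536870912
    net.ipv4.tcp_wmem = 4096 65536 536870912

    # Fair queuing plus BBR congestion control, commonly recommended for fast NICs.
    net.core.default_qdisc = fq
    net.ipv4.tcp_congestion_control = bbr
    ```

    Settings like these interact with NIC offload features, storage throughput, and NUMA placement, which is why end-to-end host optimization is evaluated as a whole rather than parameter by parameter.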

    Go to contribution page
  329. Antonio Linares (CERN)
    27/05/2026, 17:45
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The CMS Tier-0 system is responsible for the prompt processing and distribution of data collected by the CMS experiment. During Run 3, the LHC delivered almost twice the luminosity of Run 2, while the CMS physics program intensified and diversified year by year, resulting in an average data rate of up to 12 GB/s and a total RAW data volume of 110 PB so far. Higher load places increased...

    Go to contribution page
  330. Artem Petrosyan (Joint Institute for Nuclear Research (RU))
    27/05/2026, 17:45
    Track 4 - Distributed computing
    Oral Presentation

    The SPD (Spin Physics Detector) facility is currently under construction as part of the NICA complex at JINR. In parallel with the physical infrastructure, the experiment’s software ecosystem is being developed to meet the growing need for large-scale simulation of physical processes.

    As an international collaboration, SPD leverages the distributed computing resources contributed by its...

    Go to contribution page
  331. Marian Babik (CERN)
    27/05/2026, 17:45
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    Demonstrating the distribution of entangled photon pairs is a key step toward large-scale quantum networks, which could interconnect future quantum computers and form the foundation of a quantum internet. A major challenge in long-distance quantum communication is coping with varying conditions in deployed optical fibers. When a classical signal co-propagates with single photons in the same...

    Go to contribution page
  332. Shiyuan Fu
    27/05/2026, 17:45
    Track 6 - Software environment and maintainability
    Oral Presentation

    At large-scale scientific facilities such as High Energy Photon Source (HEPS), diverse experimental techniques and detection methods have led to a proliferation of highly specialized data processing software. These tools often feature heterogeneous interfaces and complex parameters, imposing significant cognitive and operational burdens on users, software developers, and technical support...

    Go to contribution page
  333. Hubert Simma (DESY)
    27/05/2026, 17:45
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    In this contribution we report on the re-factoring and re-configuration of main components of the International Lattice Data Grid (ILDG) in order to realize a modern data management framework which is fully FAIR-compliant and has a completely token-based access control.

    ILDG started 20 years ago as an effort of the Lattice QCD community to organize and enable the worldwide sharing of large...

    Go to contribution page
  334. Roger Jones (Lancaster University (GB))
    27/05/2026, 17:45
    Track 5 - Event generation and simulation
    Oral Presentation

    LHC experiments rely on highly complex detector geometries that support multiple phases of the experiment's lifecycle, including engineering design, manufacturing, installation, physics analyses, and outreach. Although the underlying detector components are the same across these tasks, the requirements differ significantly. For example, engineering integration typically needs only the external...

    Go to contribution page
  335. Lichang Wei (IHEP)
    27/05/2026, 17:45
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    With the continuous advancement of HEP detectors and online reconstruction capabilities, the scale of experimental data is growing rapidly. The data pattern is increasingly characterized by massive numbers of small files distributed across multiple data centers. On the one hand, the surge in small files creates bottlenecks in metadata and directory operations; on the other, cross-data-center access...

    Go to contribution page
  336. Juan Miguel Carceller (CERN)
    27/05/2026, 17:45
    Track 3 - Offline data processing
    Oral Presentation

    In this contribution, we highlight several recent developments within Key4hep, the turnkey software stack for future collider studies. These developments cover a variety of topics, most importantly a first stable release of the common event data model format, EDM4hep, and related developments. We have also significantly enhanced the integration with external software packages such as ACTS for...

    Go to contribution page
  337. Federica Piazza (University of Oregon (US))
    27/05/2026, 17:45
    Track 2 - Online and real-time computing
    Oral Presentation

    The instantaneous luminosity at the High-Luminosity LHC (HL-LHC) will reach unprecedented levels, boosting the physics reach of the LHC. To cope with the resulting challenging pile-up conditions and to fully exploit the new high-granularity Inner Tracker (ITk), a major upgrade of the ATLAS Trigger and Data Acquisition (TDAQ) system is ongoing, with track reconstruction in the Event Filter...

    Go to contribution page
  338. Tom Runting (Imperial College (GB))
    27/05/2026, 17:45
    Track 9 - Analysis software and workflows
    Oral Presentation

    The Combine tool [1] is a statistical analysis software package developed by the CMS Collaboration for performing measurements and searches in high-energy physics. Originally created for Higgs boson searches and their statistical combination, it has evolved into a comprehensive framework used in the majority of CMS analyses. Built on ROOT and RooFit [2], Combine provides a command-line...

    Go to contribution page
  339. Noemi Calace (CERN)
    28/05/2026, 09:00
    Track 2 - Online and real-time computing
    Plenary Presentation

    The High-Luminosity LHC (HL-LHC) will impose unprecedented demands on event reconstruction, driven by extreme pile-up conditions, increased detector granularity, and stringent latency constraints. In this environment, track reconstruction stands out as one of the most critical and computationally challenging components of future trigger systems, directly impacting physics performance and the...

    Go to contribution page
  340. Massimiliano Galli (Princeton University (US))
    28/05/2026, 09:30
    Track 9 - Analysis software and workflows
    Plenary Presentation

    Statistical inference is a crucial part of HEP analyses. Historically based on RooFit and RooStats, the statistical tools used by the experiments are now facing unprecedented challenges, such as the rapidly growing complexity of statistical models - involving hundreds of parameters of interest and thousands of nuisance parameters - the need for scalable performance in large likelihood...

    Go to contribution page
  341. Chris Burr (CERN)
    28/05/2026, 10:00
    Track 8 - Analysis infrastructure, outreach and education
    Plenary Presentation

    Analysis Productions is a declarative n-tupling service which has processed over 1 exabyte of LHCb data since 2024 with the DIRAC Transformation System. It is the primary method for producing LHCb ntuples for analysis and has produced approximately 50M files.

    Since the start of Run 3 the demand for n-tuples increased dramatically, with 22k samples created in 2025 alone, which led to...

    Go to contribution page
  342. Severin Diederichs (CERN)
    28/05/2026, 11:00
    Track 5 - Event generation and simulation
    Plenary Presentation

    The computational cost of full Monte Carlo simulation in high-energy physics is rapidly increasing, particularly in view of the high-luminosity LHC upgrades. At the same time, modern high-performance computing systems are increasingly based on heterogeneous architectures, motivating efforts to enable full detector simulations on GPUs. The AdePT and Celeritas projects have now accelerated...

    Go to contribution page
  343. Amit Bashyal (Brookhaven National Laboratory), Jacob Calcutt (Brookhaven National Laboratory (US))
    28/05/2026, 11:30
    Track 3 - Offline data processing
    Plenary Presentation

    The Deep Underground Neutrino Experiment (DUNE) will produce a very large amount of raw data from its Far Detector (FD) as it turns on at the start of the next decade: roughly 30 PB/yr from the first two FD modules. DUNE’s current processing paradigm would necessitate a large amount of both disk space and compute time to process this raw data up to the point at which high-level event...

    Go to contribution page
  344. Giovanni Guerrieri (CERN)
    28/05/2026, 12:00
    Track 4 - Distributed computing
    Plenary Presentation

    Large-scale, data-intensive research is no longer exclusive to high energy physics. Astronomy, gravitational wave physics, nuclear physics, and many more scientific fields now face comparable challenges in data management, distributed computing, and virtual research environments. With the technological landscape increasingly evolving towards centralised and specialised facilities, communities...

    Go to contribution page
  345. Sanjiban Sengupta (CERN, University of Manchester)
    28/05/2026, 13:45
    Track 2 - Online and real-time computing
    Oral Presentation

    Machine learning approaches have been widely adopted across several areas of high-energy physics research, including simulations, anomaly detection, and trigger systems. Deploying machine learning in trigger systems requires inference approaches capable of processing data at enormous rates, often on the order of 10–100 thousand events per second while making real-time decisions about which...

    Go to contribution page
  346. Wesley Patrick Kwiecinski (University of Illinois Chicago)
    28/05/2026, 13:45
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    Efficient data access is becoming increasingly important for high-energy physics (HEP) workflows on HPC systems. Large datasets, a greater degree of concurrency (multi-process and multithreading), and complex event formats can lead to hidden performance issues. The HEP-CCE/SOP group used the Darshan I/O characterization tool to identify data re-operations in representative HEP workflows, using...

    Go to contribution page
  347. Octavian-Mihai Matei (CERN)
    28/05/2026, 13:45
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    Over the past 70 years, CERN’s pioneering work in particle physics and more than a decade of operations at the Large Hadron Collider (LHC) have driven a dramatic transformation in data storage. With each new experimental run, the scale and complexity of data handling continue to grow. As we approach the next Long Shutdown (LS3) and the High-Luminosity LHC (HL-LHC) era, storage infrastructure...

    Go to contribution page
  348. Richa Sharma (University of Puerto Rico (US))
    28/05/2026, 13:45
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    The IRIS-HEP training program and the HEP Software Foundation (HSF) collaborate and co-organize software training events for the high-energy physics community. These activities include hands-on workshops and schools that focus on modern software, computing, and analysis tools. The program addresses the need for both general computational skills and domain-specific knowledge required to...

    Go to contribution page
  349. Eddie Mcgrady (University of Notre Dame (US))
    28/05/2026, 13:45
    Track 9 - Analysis software and workflows
    Oral Presentation

    Neural Simulation-Based Inference (NSBI) is an analysis technique which leverages the output of trained deep neural networks (DNNs) to construct a surrogate likelihood ratio which can then be used for a binned or unbinned likelihood scan. These techniques have shown some success when applied to analyses involving effective field theory (EFT) approaches, where it can be difficult to achieve...

    Go to contribution page
  350. Mr Thammarat Yawisit (King Mongkut's Institute of Technology Ladkrabang)
    28/05/2026, 13:45
    Track 2 - Online and real-time computing
    Oral Presentation

    Large-scale neutrino observatories operate under unavoidable detector deadtime arising from photomultiplier saturation, digitizer limits, and front-end readout constraints. Conventional coincidence-based trigger logic implicitly assumes continuous sensor availability and therefore suffers systematic efficiency loss when channels become temporarily non-live. This work presents the design of a...

    Go to contribution page
  351. Fabrice Le Goff (University of Oregon (US))
    28/05/2026, 13:45
    Track 3 - Offline data processing
    Oral Presentation

    During the last ten years the detector-agnostic, open-source track reconstruction toolkit ACTS has matured to production-level quality and is used in offline data processing in ATLAS, sPHENIX, and FASER, and is part of many upgrade and feasibility studies within the community at large. For ATLAS, the ACTS-based track reconstruction has surpassed the legacy setup for the predicted Phase-2 performance in...

    Go to contribution page
  352. František Stloukal (CERN)
    28/05/2026, 13:45
    Track 5 - Event generation and simulation
    Oral Presentation

    At the HL-LHC, computing demands, particularly for event generation, will reach an unprecedented volume for which simple scaling of current resources will be insufficient, requiring new algorithmic and architectural strategies to sustain performance within economic and energy constraints.

    A particularly promising approach is to identify parts of the simulation workflow that can be safely...

    Go to contribution page
  353. CMS Collaboration
    28/05/2026, 13:45
    Track 4 - Distributed computing
    Oral Presentation

    For a few years, INFN has been investing effort in exploring technologies to seamlessly integrate distributed resources to effectively enable high-rate data analysis patterns supporting interactive and/or quasi-interactive analysis of sizable amounts of data. One of the main drivers for this initiative is to contribute to the R&D activities for the evolution of the analysis computing model for...

    Go to contribution page
  354. Pierfrancesco Cifra (CERN)
    28/05/2026, 14:03
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    Data centers play a key role in High Energy Physics (HEP) experiments, as there is the need to collect, process, and store large quantities of data. Given the scale and complexity of those computing infrastructures, it is not trivial to spot failures of any nature. Traditional rule-based monitoring systems work well, but they might struggle in large, heterogeneous, and dynamic environments. It...

    Go to contribution page
  355. Anna Zaborowska (CERN)
    28/05/2026, 14:03
    Track 3 - Offline data processing
    Oral Presentation

    We present the first full release of ColliderML, a large-scale, fully simulated benchmark dataset for algorithm R&D as well as machine-learning applications.
    It is built on top of the OpenDataDetector (ODD) under high-luminosity collider conditions (ColliderML). ODD comprises a set of subsystems that are representative of future collider experiments like at the...

    Go to contribution page
  356. Doug Benjamin (Brookhaven National Laboratory (US)), Douglas Benjamin
    28/05/2026, 14:03
    Track 4 - Distributed computing
    Oral Presentation

    High energy physics (HEP) workflows are approaching the throughput limits of traditional grid/HTC computing, as LHC and DUNE are driving O(10–100)× data growth and increased GPU demand. This motivates a practical path to routinely use leadership-class HPC resources remotely. One of the challenges is the varied authentication, authorization and job submission mechanisms at different HPC...

    Go to contribution page
  357. Itay Horin
    28/05/2026, 14:03
    Track 5 - Event generation and simulation
    Oral Presentation

    We introduce FANG (Focused Angular $N$-body event Generator), a new Monte Carlo tool for efficient event generation in restricted Lorentz-invariant phase space (LIPS). Unlike conventional approaches that uniformly sample the full $4\pi$ solid angle, FANG directly generates events in which selected final-state particles are constrained to fixed directions or finite angular regions in the...

    Go to contribution page
  358. Kenneth Rioja (CERN)
    28/05/2026, 14:03
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    The HEP Training platform is a new online registry designed to facilitate the discovery and dissemination of HEP-related training materials and events across high energy physics experiments, labs and universities. Students, researchers, and educators can have access to a list of curated resources – such as tutorials, guidelines, workshops and training events. These resources are links...

    Go to contribution page
  359. Jay Ajitbhai Sandesara (University of Wisconsin Madison (US))
    28/05/2026, 14:03
    Track 9 - Analysis software and workflows
    Oral Presentation

    Neural Simulation-Based Inference (NSBI) is a family of emerging techniques that allow statistical inference using high-dimensional data, even when the exact likelihoods are analytically intractable. The techniques rely on leveraging deep learning to directly build likelihood-based or posterior-based inference models using high-dimensional information. By not relying on hand-crafted,...

    Go to contribution page
  360. Gianmaria Del Monte (CERN)
    28/05/2026, 14:03
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    As the scale and complexity of high-energy physics computing grows, storage systems are being pushed to serve radically diverse workloads at once, often with significant performance consequences. To ensure EOS can meet these evolving demands, we introduce a real-time I/O traffic-shaping framework that monitors ongoing I/O patterns and dynamically adjusts and balances read/write flows to...

    Go to contribution page
  361. Tarik Ourida
    28/05/2026, 14:03
    Track 2 - Online and real-time computing
    Oral Presentation

    Current deep-learning-based models at the LHC produce deterministic point estimates without any accompanying measure of epistemic uncertainty. Without this information, the system cannot determine when its predictions may be unreliable, particularly in rare or weakly sampled regions of feature space. This work introduces a high-performance Bayesian Neural Network architecture for the Level-1...

    Go to contribution page
  362. Cilicia Uzziel Perez (La Salle, Ramon Llull University (ES)), Irvin Jadurier Umana Chacon (Consejo Nacional de Rectores (CONARE) (CR))
    28/05/2026, 14:03
    Track 2 - Online and real-time computing
    Oral Presentation

    Graph Neural Networks (GNNs) excel at modeling the complex, irregular geometry of modern calorimeters, but their computational cost poses challenges for real-time or resource-constrained environments. We present lightweight, attention-enhanced GNNs built on node-centric GarNet layers, which eliminate costly edge message passing and provide learnable, permutation-invariant aggregation optimized...

    Go to contribution page
  363. Felix Schlepper (CERN, Heidelberg University (DE))
    28/05/2026, 14:21
    Track 3 - Offline data processing
    Oral Presentation

    In ALICE, LHC Run 3 marks a major step toward GPU-centric data processing.
    During the synchronous (online) phase, GPUs are fully dedicated to Time Projection Chamber reconstruction and compression. During the asynchronous (offline) phase, additional reconstruction tasks can be offloaded to GPUs to improve overall computing efficiency and throughput.

    We report the porting of the ITS2...

    Go to contribution page
  364. Jessica Prendi (ETH Zurich (CH))
    28/05/2026, 14:21
    Track 2 - Online and real-time computing
    Oral Presentation

    The Next Generation Triggers (NGT) initiative in CMS aims to enable the processing of all Level-1 Trigger accepted collisions for the HL-LHC. Central to this effort is the expansion of the High-Level Trigger (HLT) data scouting strategy, where events are reconstructed and stored in an analysis-ready format. This necessitates an in situ processing loop to derive high-quality calibration...

    Go to contribution page
  365. Ozgur Ozan Kilic (Brookhaven National Laboratory)
    28/05/2026, 14:21
    Track 4 - Distributed computing
    Oral Presentation

    As HEP experiments increasingly rely on diverse computing resources across multiple facilities, sustainable workflow orchestration that bridges experiment-native tools with facility-native interfaces becomes critical. This work develops and evaluates a generalizable approach to cross-facility workflow integration, using the DUNE 2×2 Near Detector simulation as a challenging demonstrator case....

    Go to contribution page
  366. Felice Pantaleo (CERN)
    28/05/2026, 14:21
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    Efficient data processing using machine learning relies on heterogeneous computing approaches, but optimizing input and output data movements remains a challenge. In GPU-based workflows data already resides in GPU memory, but machine learning models require the input and output data to be provided in a specific tensor format, often requiring unnecessary copying outside of the GPU device and...

    Go to contribution page
  367. Andrea Valassi (CERN)
    28/05/2026, 14:21
    Track 5 - Event generation and simulation
    Oral Presentation

    Physics event generators are essential components of the simulation software chain of HEP experiments, providing theoretical predictions against which experimental data are compared. In the LHC experiments, the simulation of QCD physics processes at the Next-to-Leading-Order (NLO) or beyond is essential to reach the level of accuracy required. However, a distinctive feature of QCD NLO...

    Go to contribution page
  368. Mr Thammarat Yawisit (King Mongkut's Institute of Technology Ladkrabang)
    28/05/2026, 14:21
    Track 2 - Online and real-time computing
    Oral Presentation

    Large-scale neutrino observatories operate under unavoidable detector deadtime and signal pile-up, leading to systematic inefficiencies in conventional coincidence-based trigger systems. Such triggers typically rely on binary temporal windows and assume continuous sensor availability, causing partial or complete loss of correlated signal information during non-live intervals. We introduce...

    Go to contribution page
  369. Ethan Lee
    28/05/2026, 14:21
    Track 9 - Analysis software and workflows
    Oral Presentation

    Recent anomalies in flavour observables have motivated renewed interest in precision measurements of semileptonic $B$-meson decays as a probe of possible physics beyond the Standard Model. Extracting such effects often requires fitting complex, high-dimensional datasets in which traditional likelihood-based methods become computationally challenging or intractable. Simulation-based inference...

    Go to contribution page
  370. CMS Collaboration
    28/05/2026, 14:21
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    Over the last few years the landscape of distributed resources used by the CMS experiment has changed significantly. In the past, dedicated compute resources were essentially based on (pledged) x86 CPUs installed at classical Grid sites. Nowadays other CPU architectures such as ARM and accelerators like GPUs have become common resources, also thanks to non-Grid opportunistic centres such as HPCs...

    Go to contribution page
  371. Danilo Piparo (CERN)
    28/05/2026, 14:21
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    Since 2024, the ROOT team has been running a modernisation campaign of the ROOT software trainings, as well as of the dedicated ROOT tutorials available online on our website. Collectively, we have trained more than 700 people, including newcomers and experienced users wanting to dive into the newest features. We taught in person at CERN and at the Users Workshop in Valencia, and online during the...

    Go to contribution page
  372. Mr Akshat Gupta
    28/05/2026, 14:39
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    The petabyte-scale data generated annually by High Energy Physics (HEP) experiments like those at the Large Hadron Collider present a significant data storage challenge. Whilst traditional algorithms like LZMA and ZLIB are widely used, they often fail to exploit the deep structure inherent in scientific data. We investigate the application of modern state space models (SSMs) to this problem,...

    Go to contribution page
  373. Lauren Meryl Hay (SUNY Buffalo), Rishabh Jain (Brown University (US))
    28/05/2026, 14:39
    Track 5 - Event generation and simulation
    Oral Presentation

    As the accuracy of experimental results increases in high energy physics, so too must the precision of Monte Carlo simulations. Currently, event generation at next-to-leading order (NLO) accuracy in QCD and beyond results in the production of negatively-weighted events. The presence of these weights increases strain on computational resources by degrading the statistical power of MC samples,...

    Go to contribution page
  374. Valentin Volkl (CERN)
    28/05/2026, 14:39
    Track 4 - Distributed computing
    Oral Presentation

    The CernVM-Filesystem (CVMFS) is a global, read-only, on-demand filesystem optimized for software distribution. Its on-demand nature is well adapted and extremely efficient for distributed batch computing, but can mean noticeable latency in interactive use, especially when working with applications such as python that load a large number of small files on startup.

    In this contribution we...

    Go to contribution page
  375. Jonas Rembser (CERN)
    28/05/2026, 14:39
    Track 9 - Analysis software and workflows
    Oral Presentation

    Neural Simulation-Based Inference (NSBI) enables efficient use of complex generative models in statistical analyses, outperforming template histogram methods in particular for high-dimensional problems. When augmented with gradient information, NSBI can both maximise sensitivity to new physics and reduce the required amount of simulation.
    The integration of NSBI into established...

    Go to contribution page
  376. Alexandr Prozorov
    28/05/2026, 14:39
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    The ePIC experiment at the future Electron-Ion Collider relies on a rapidly evolving software ecosystem for simulation, reconstruction, physics analysis and detector support. As the collaboration grows, enabling users to efficiently discover, learn, and develop software tools has become increasingly important. The ePIC User Learning working group addresses this challenge by developing training...

    Go to contribution page
  377. Mr Zhenyuan Wang (Computing center, Institute of High Energy Physics, CAS, China)
    28/05/2026, 14:39
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    With the escalating processing demands of modern high-energy physics experiments, traditional monitoring tools are faltering under the dual pressures of cumbersome deployment and coarse-grained observability in high-throughput production environments. JobLens is a lightweight, one-click-deployable data collector designed to deliver fine-grained, job-level observability for HEP workloads. Its...

    Go to contribution page
  378. Giacomo De Pietro (Karlsruhe Institute of Technology)
    28/05/2026, 14:39
    Track 3 - Offline data processing
    Oral Presentation

    High levels of beam-induced detector noise and detector aging degrade track-finding performance in the Belle II central drift chamber, resulting in losses of both track finding efficiency and purity. This motivates the development of reconstruction approaches capable of maintaining robust performance under deteriorating detector conditions. Building on our earlier work on an end-to-end...

    Go to contribution page
  379. Izaac Sanderswood (Univ. of Valencia and CSIC (ES)), Volodymyr Svintozelskyi (Univ. of Valencia and CSIC (ES))
    28/05/2026, 14:39
    Track 2 - Online and real-time computing
    Oral Presentation

    The reconstruction of particle decays inside LHCb’s dipole magnet region enables novel measurements of hyperon decays and sensitive searches for long-lived particles with lifetimes above 100 ps, relevant both to the Standard Model and to many of its extensions. Reconstructing such displaced vertices using only track segments in LHCb’s outermost tracker (SciFi) is challenging due to limited...

    Go to contribution page
  380. Zeta Sourpi (Universite de Geneve (CH))
    28/05/2026, 14:39
    Track 2 - Online and real-time computing
    Oral Presentation

    ALICE (A Large Ion Collider Experiment) is a general-purpose heavy-ion detector at the CERN Large Hadron Collider (LHC) that operates at interaction rates producing raw data streams of O(TB/s). Due to these data volumes, an online reconstruction is performed to achieve a compressed representation of the continuous data stream. Given the lossy nature of this process, early assessment of...

    Go to contribution page
  381. Fengping Hu (University of Chicago (US))
    28/05/2026, 14:57
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    We present the development and user experience of a hosted BinderHub service that delivers a scalable, uniform, and reproducible computing environment for training sessions and workshops. The IRIS-HEP Scalable Systems Laboratory operates an enhanced, Kubernetes-based BinderHub platform for HEP training and analysis, extending the upstream project with GPU support, guaranteed CPU and memory...

    Go to contribution page
  382. Bralyne Matoukam (University of the Witwatersrand)
    28/05/2026, 14:57
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    The ATLAS experiment at the CERN Large Hadron Collider (LHC) records and processes large amounts of data from proton-proton collisions. With the upcoming High-Luminosity LHC (HL-LHC), the data volume is expected to increase by more than an order of magnitude, posing new challenges for storage, data throughput, and analysis scalability.
    Currently, all major production output formats support...

    Go to contribution page
  383. Kylian Schmidt (KIT - Karlsruhe Institute of Technology (DE))
    28/05/2026, 14:57
    Track 9 - Analysis software and workflows
    Oral Presentation

    Neural Simulation Based Inference (NSBI) has emerged as a powerful statistical inference methodology for large datasets with high-dimensional representations. NSBI methods rely on neural networks to estimate the underlying, multi-dimensional likelihood distributions of the data at a per-event level. This approach significantly improves the inference performance over classical binned approaches...

    Go to contribution page
  384. Roope Oskari Niemi
    28/05/2026, 14:57
    Track 2 - Online and real-time computing
    Oral Presentation

    We present PQuantML, an open-source library for end-to-end hardware-aware model compression that enables the training and deployment of compact, high-performance neural networks on resource-constrained hardware in physics and beyond. PQuantML abstracts away the low-level details of compression by letting users compress models with a simple configuration file and an API call. It enables the use...

    Go to contribution page
  385. Marta Bertran Ferrer (CERN)
    28/05/2026, 14:57
    Track 4 - Distributed computing
    Oral Presentation

    Effective tools for monitoring Grid workflow executions are crucial for the prompt identification of issues, which in turn facilitates the design and deployment of appropriate solutions. The ALICE Grid middleware JAliEn utilizes the MonALISA framework to monitor all its Grid components, which collectively generate an enormous amount of data - about 200,000 monitored parameters per second...

    Go to contribution page
  386. Aleksandr Svetlichnyi (INR RAS, MIPT(NRU))
    28/05/2026, 14:57
    Track 5 - Event generation and simulation
    Oral Presentation

    The Geant4 toolkit is widely used for modelling light-nuclei beam fragmentation in human tissue and other radiological studies (see, for example, [1]). Precise and fast modelling of secondary fragments resulting from beam fragmentation in tissue is vital for studying the radiobiological effects of heavy ion therapy [1]. Short $^{16}$O–$^{16}$O and $^{20}$Ne–$^{20}$Ne runs have been conducted...

    Go to contribution page
  387. Jared Little (Indiana University (US))
    28/05/2026, 14:57
    Track 2 - Online and real-time computing
    Oral Presentation

    The ATLAS level-1 calorimeter trigger is a custom-built hardware system that identifies events containing calorimeter-based physics objects, including electrons, photons, taus, jets, and total and missing transverse energy. In Run 3, L1Calo has been upgraded to process higher granularity input data. The new trigger comprises several FPGA-based feature extractor modules, which...

    Go to contribution page
  388. Natalia Diana Szczepanek (CERN)
    28/05/2026, 14:57
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The Worldwide LHC Computing Grid (WLCG) provides the distributed infrastructure necessary to support both LHC and non-LHC experiments; however, the corresponding rise in energy usage presents new challenges, in particular with the upcoming HL-LHC era, where computing requirements will continue to expand significantly.
    Therefore, monitoring power consumption has become increasingly important,...

    Go to contribution page
  389. Adriano Di Florio (CC-IN2P3)
    28/05/2026, 14:57
    Track 3 - Offline data processing
    Oral Presentation

    The upcoming upgrades to the Large Hadron Collider for the HL-LHC era will progressively increase the nominal luminosity, aiming to reach a peak value of $5\times10^{34}$ cm$^{-2}$ s$^{-1}$ for the ATLAS and CMS experiments. Higher luminosity will naturally lead to a larger number of proton–proton interactions occurring in the same bunch crossing, with pileup levels that may reach up to 200,...

    Go to contribution page
  390. Raulian-Ionut Chiorescu, Ricardo Rocha (CERN)
    28/05/2026, 16:15
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    High Energy Physics (HEP) computing at CERN has long relied on interactive SSH environments, shared software stacks and large-scale batch systems. As workloads increasingly adopt containerized and accelerator-driven execution models, a key requirement is to provide a consistent user interface while enabling modern orchestration platforms.

    This contribution presents the computing platform...

    Go to contribution page
  391. Sanjiban Sengupta (CERN, University of Manchester)
    28/05/2026, 16:15
    Track 3 - Offline data processing
    Oral Presentation

    Deploying machine learning models in environments with high-throughput, low-latency, and strict memory constraints is challenging, especially when these environments evolve rapidly and require simplified user-control, dependency management, and long-term maintainability. In high-energy physics, and particularly within the Trigger Systems of major LHC experiments, similar requirements arise for...

    Go to contribution page
  392. Cristiano Fanelli (William & Mary)
    28/05/2026, 16:15
    Track 5 - Event generation and simulation
    Oral Presentation

    Artificial Intelligence (AI) is poised to play a central role in the design and optimization of complex, large-scale detectors, such as the future ePIC experiment at the Electron-Ion Collider (EIC), an international next-generation QCD facility in the United States.
    The ePIC experiment consists of an integrated detector comprising a central apparatus complemented by forward and backward...

    Go to contribution page
  393. Carlos Borrajo Gomez (CERN)
    28/05/2026, 16:15
    Track 4 - Distributed computing
    Oral Presentation

    As part of the Run 3 of the Large Hadron Collider (LHC), the CMS experiment generates large amounts of data that have to be processed and stored efficiently. The complex distributed computing infrastructure used for these purposes has to be highly available, and having a reliable and comprehensive monitoring setup is essential for it. The CMS monitoring team is responsible for providing the...

    Go to contribution page
  394. Jose Maria Benlloch Rodriguez (Donostia International Physics Center (DIPC) (ES))
    28/05/2026, 16:15
    Track 2 - Online and real-time computing
    Oral Presentation

    The Neutrino Experiment with a Xenon TPC (NEXT) investigates neutrinoless double-beta decay (0νββ) in xenon using high-pressure xenon time projection chambers. This approach enables excellent energy resolution and allows for the 3D reconstruction of the track, improving the sensitivity using the topological information.

    Previous prototypes of the NEXT experimental programme were using DATE...

    Go to contribution page
  395. Dr Eirik Gramstad (University of Oslo (NO))
    28/05/2026, 16:15
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    The ATLAS Open Data for Outreach and Education were transformed in 2025, with an entirely new release featuring new (public) ntuple-making infrastructure, and myriad new notebook examples demonstrating everything from fundamental HEP concepts to complex analyses. The focus of the overhaul has been on simplifying the user experience: with just a few clicks, anyone can make a plot from the Open...

    Go to contribution page
  396. Stephan Hageboeck (CERN)
    28/05/2026, 16:15
    Track 9 - Analysis software and workflows
    Oral Presentation

    Two years before the start of the High-Luminosity LHC, the ROOT project will evolve to its 7th release cycle. This contribution will explain ROOT's release schedule, and discuss new features being developed for ROOT 7 such as RFile or a high-performance histogram package to support concurrent filling. ROOT 7 is also planned to introduce a change in ROOT's object ownership model, allowing for...

    Go to contribution page
  397. Ruslan Mashinistov (Brookhaven National Laboratory (US))
    28/05/2026, 16:15
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    The HSF Conditions Database (CDB) is a community-driven solution for managing conditions data - non-event data required for event processing - which present common challenges across HENP and astro-particle experiments. In the three years of production operation for sPHENIX at BNL, where the HSF CDB supports over 70,000 concurrent jobs on a farm running 132,000 logical cores, it has evolved...

    Go to contribution page
  398. Marcos Vinicius Silva Oliveira (Brookhaven National Laboratory (US))
    28/05/2026, 16:15
    Track 2 - Online and real-time computing
    Oral Presentation

    The ATLAS experiment at CERN is constructing upgraded systems for the High-Luminosity LHC, with collisions due to start in 2030. In order to deliver an order of magnitude more data than previous LHC runs, 14 TeV protons will collide at an instantaneous luminosity of up to 7.5 x 10^34 cm^-2 s^-1, resulting in much higher pileup and data rates than the current experiment was designed to...

    Go to contribution page
  399. Davide Valsecchi (ETH Zurich (CH))
    28/05/2026, 16:33
    Track 5 - Event generation and simulation
    Oral Presentation

    Scale factors derived from Tag & Probe measurements are essential for correcting detector effects in CMS simulation. However, traditional binned methods fail to capture continuous kinematic evolution and require time-consuming manual tuning that becomes unmanageable as dimensionality increases. To address this, we present a novel unbinned, multivariate Tag & Probe strategy implemented in PyTorch. By...

    Go to contribution page
  400. Andrea Formica (Université Paris-Saclay (FR))
    28/05/2026, 16:33
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    The ATLAS experiment is redesigning its Conditions database infrastructure in preparation for Run 4. The new system (CREST - Conditions REST) adopts a multi-tier architecture in which interactions with all databases including the Trigger physics configuration database are mediated through a web-based server layer using a REST API. The data caching is provided via Varnish HTTP proxies. We...

    Go to contribution page
  401. Vakho Tsulaia (Lawrence Berkeley National Lab. (US))
    28/05/2026, 16:33
    Track 3 - Offline data processing
    Oral Presentation

    To address this challenge and prepare for the transition to large, resource-intensive ML models, we propose leveraging AthenaTriton for DAOD production, where these ML models are executed on dedicated computing resources. AthenaTriton is a tool for running ML inference as a service in Athena using the NVIDIA Triton server software. We discuss different deployment strategies for Triton servers...

    Go to contribution page
  402. Panos Paparrigopoulos (CERN)
    28/05/2026, 16:33
    Track 4 - Distributed computing
    Oral Presentation

    The WLCG infrastructure is evolving to support the HL-LHC, requiring greater capacity and increasingly diverse resource types, which challenges the existing accounting system to become more flexible in handling heterogeneous resources such as GPUs and in incorporating new metrics, including environmental and sustainability indicators. The current system relies on outdated and overly complex...

    Go to contribution page
  403. Suyog Shrestha (Washington College (US))
    28/05/2026, 16:33
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    This contribution presents a scalable and replicable model to engage high-school and undergraduate students with real-world high energy physics (HEP) computing and analysis. At Washington College, we have integrated hands-on analysis of LHC data into both curricular and co-curricular settings. With support from the NSF LEAPS-MPS program, we organize annual workshops for high school students...

    Go to contribution page
  404. Aaron Jomy (CERN)
    28/05/2026, 16:33
    Track 9 - Analysis software and workflows
    Oral Presentation

    The ROOT Python interfaces are a cornerstone of HENP analysis workflows, enabling rapid development while retaining access to high-performance C++ code. In this contribution, we present a major upcoming update to the backend powering the dynamic C++ bindings generation, based on the new CppInterOp library.
    For ROOT users, this migration translates directly into a better experience: faster...

    Go to contribution page
  405. Mr Robert-Mihai Amarinei (University of Toronto (CA))
    28/05/2026, 16:33
    Track 2 - Online and real-time computing
    Oral Presentation

    The Deep Underground Neutrino Experiment (DUNE) is an international next-generation project that will use a powerful neutrino beam produced at Fermilab and two detectors: a near detector at Fermilab and a far detector ~1300 kilometers away, at the Sanford Underground Research Facility in South Dakota. DUNE features a high-throughput, modular data acquisition system (DAQ) specifically designed...

    Go to contribution page
  406. Deniz Tuana Ergonul
    28/05/2026, 16:33
    Track 2 - Online and real-time computing
    Oral Presentation

    The Deep Underground Neutrino Experiment (DUNE) is a long-baseline neutrino physics experiment with detectors located 1.5 km underground at the Sanford Underground Research Facility. The Data Acquisition (DAQ) system interfaces with multiple front-end electronics, each producing data with distinct rates and formats, and handles the reception, transportation, and preparation of this data for...

    Go to contribution page
  407. Dr Vikas Singhal (Department of Atomic Energy (IN))
    28/05/2026, 16:33
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The GRID Computing Facility, VECC, has been operational for the last two decades. It comprises "Kolkata tier-2 for ALICE" and a "grid-peer tier-3 cluster" for the Indian collaborating Institutes. This is the only computing tier-2 in India for the ALICE CERN experiment under the WLCG umbrella. In this article we will describe how the GRID Computing Facility at VECC evolved and piece by piece...

    Go to contribution page
  408. Anuj Raghav (University of Delhi (IN))
    28/05/2026, 16:51
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    The discovery of the Higgs boson by the ATLAS and CMS collaborations at the Large Hadron Collider (LHC) stands as a monumental achievement in particle physics. While the theoretical underpinnings of the Higgs mechanism are widely taught at the university level and substantial data sets have been made publicly available, the practical complexities of experimental data analysis, ranging from...

    Go to contribution page
  409. Lia Lavezzi (INFN Torino (IT))
    28/05/2026, 16:51
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The Einstein Telescope, the third generation ground-based interferometer for gravitational wave detection, will observe a sky volume one thousand times larger than the second generation interferometers. This will be reflected in a higher observation rate. The physics information contained in the “strain” time series will increase, while on the machine side the size of the raw data from the...

    Go to contribution page
  410. Marta Bertran Ferrer (CERN)
    28/05/2026, 16:51
    Track 4 - Distributed computing
    Oral Presentation

    The ALICE Grid incorporates a large volume of heterogeneous resources, including systems with a diverse range of CPU and GPU resources, various operating system versions, and differing hardware architectures. The Central Grid Operation team lacks direct access to the individual clusters and nodes that compose the Grid, which presents numerous challenges to fully understanding and optimizing...

    Go to contribution page
  411. Jay Chan (Lawrence Berkeley National Lab. (US))
    28/05/2026, 16:51
    Track 3 - Offline data processing
    Oral Presentation

    The High-Luminosity LHC (HL-LHC) will impose unprecedented pile-up and throughput demands on the ATLAS offline tracking reconstruction, making computational efficiency an essential requirement alongside physics performance. We present a comprehensive study of the ATLAS GNN4ITk offline track-reconstruction pipeline, spanning graph construction, Graph Neural Network (GNN) inference, and track...

    Go to contribution page
  412. Yue Sun (The Institute of High Energy Physics of the Chinese Academy of Science)
    28/05/2026, 16:51
    Track 9 - Analysis software and workflows
    Oral Presentation

    In High Energy Physics (HEP), the demand for high-quality and efficient code is essential for data processing and analysis. However, Large Language Models (LLMs), while proficient in general programming, exhibit significant inaccuracies when generating specialized HEP code, reflected in a high failure rate. At the same time, a more complex offline software system will be necessary to adapt to...

    Go to contribution page
  413. Ilija Vukotic (University of Chicago (US))
    28/05/2026, 16:51
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    Efficient access to Conditions data is critical for data processing in the ATLAS experiment at the LHC. For more than a decade, Squid HTTP proxies deployed across distributed computing sites have provided low-latency access, reduced WAN bandwidth consumption, and protected origin servers from excessive load. Conditions data traffic is characterized by exceptionally high request rates - often...

    Go to contribution page
  414. Matthias Schott (CERN / University of Mainz)
    28/05/2026, 16:51
    Track 5 - Event generation and simulation
    Oral Presentation

    Accurate Monte Carlo (MC) modelling of high-energy physics (HEP) data remains a central challenge, especially when simulated distributions fail to reproduce observations. Traditional remedies rely on reweighting individual observables to data, an approach that is effective when only one or two dimensions exhibit discrepancies. However, for N correlated observables with N > 2, conventional...

    Go to contribution page
  415. Wenxing Fang
    28/05/2026, 16:51
    Track 2 - Online and real-time computing
    Oral Presentation

    The Jiangmen Underground Neutrino Observatory (JUNO) is a large-scale neutrino experiment with multiple physics goals. After many years of dedicated effort, the construction of the JUNO detector has been successfully completed, and physics data-taking officially commenced on August 26, 2025.
    The detector readout system produces waveform data at a rate of approximately 40 GB/s at a 1 kHz...

    Go to contribution page
  416. Dirk Hutter (Goethe University Frankfurt (DE))
    28/05/2026, 16:51
    Track 2 - Online and real-time computing
    Oral Presentation

    The CBM First-Level Event Selector (FLES) serves as the central data processing and event selection system for the upcoming CBM experiment at FAIR. Designed as a scalable high-performance computing cluster, it facilitates online event reconstruction and selection of unfiltered physics data at rates surpassing 1 TByte/s. The FLES input data originates from approximately 5000 detector links,...

    Go to contribution page
  417. Lukasz Graczykowski (Warsaw University of Technology (PL))
    28/05/2026, 17:09
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    Title: ALICE Event Display - lessons learned and future enhancements
    Authors: Julian Myrcha on behalf of the ALICE collaboration
    Affiliations: Warsaw University of Technology

    After two years of continuous development and operation, several lessons have been learned that have led to substantial improvements in the ALICE event visualization system. The current solution allows...

    Go to contribution page
  418. Maksim Melnik Storetvedt (Western Norway University of Applied Sciences (NO))
    28/05/2026, 17:09
    Track 4 - Distributed computing
    Oral Presentation

    The ALICE Collaboration actively relies on accelerators, such as GPUs, to handle increasingly complex workflows and data rates. Such resources have rapidly risen in importance across a number of use cases, and their emergence is reflected in their availability in the WLCG. Through broader vendor support, as well as improved matching techniques, the ALICE Grid middleware may allocate and use...

    Go to contribution page
  419. Martin Øines Eide (Western Norway University of Applied Sciences (NO))
    28/05/2026, 17:09
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    Authors:
    - Martin Øines Eide, Western Norway University of Applied Sciences,
    University of Bergen, Bergen, Norway and European Organization for
    Nuclear Research (CERN), Geneva, Switzerland
    - Costin Grigoras, European Organization for Nuclear Research (CERN), Geneva,
    Switzerland
    on behalf of the ALICE collaboration


    The ALICE experiment at CERN relies on a...

    Go to contribution page
  420. Gordon Watts (University of Washington (US))
    28/05/2026, 17:09
    Track 9 - Analysis software and workflows
    Oral Presentation

    Large Language Models (LLMs) can serve as connective elements within ATLAS analysis workflows, linking data-discovery utilities, columnar data-delivery systems, and analysis-level plotting frameworks. Building on earlier exploratory studies of LLM-generated plotting code, we now focus on an implementable architecture suitable for real use. The system is decomposed into reusable Model Context...

    Go to contribution page
  421. Emidio Maria Giorgio (INFN LNS)
    28/05/2026, 17:09
    Track 2 - Online and real-time computing
    Oral Presentation

    The KM3NeT neutrino detectors, currently under construction in the Mediterranean Sea, are designed to measure high-energy cosmic neutrinos and their properties. To exploit the Cherenkov effect as the detection technique, the ARCA and ORCA detectors are deployed at two abyssal sites, off the coasts of southern Italy and France, respectively. Operating in such an extreme deep-sea environment,...

    Go to contribution page
  422. Nathan Jihoon Kang (Argonne National Laboratory (US))
    28/05/2026, 17:09
    Track 3 - Offline data processing
    Oral Presentation

    Efficient and maintainable in-file metadata is crucial for large-scale event processing. The ATLAS experiment's Athena event-processing framework relies on complex navigational and metadata infrastructure to manage event processing across diverse workflows. As experimental demands grow, inefficiencies and redundancies in the current metadata infrastructure have constrained storage efficiency,...

    Go to contribution page
  423. Dr Arsenii Gavrikov
    28/05/2026, 17:09
    Track 5 - Event generation and simulation
    Oral Presentation

    The Jiangmen Underground Neutrino Observatory (JUNO) is a next-generation neutrino experiment located in China. To achieve its main objectives, the experiment demands highly accurate Monte Carlo (MC) simulations. These simulations must describe the complex response of the 20-kton liquid scintillator target within a 35.4 m diameter acrylic sphere, which is monitored by thousands of...

    Go to contribution page
  424. Tobias Winchen (Max Planck Institute for Radio Astronomy)
    28/05/2026, 17:09
    Track 2 - Online and real-time computing
    Oral Presentation

    The Effelsberg Direct Digitization (EDD) backend is a multi-science computing system for real-time processing of data from radio telescopes on commercial-off-the-shelf computing hardware. While originally developed for the Effelsberg 100-m telescope, it has been generalized into an open-source framework that currently drives data recording at four independent telescopes, including single...

    Go to contribution page
  425. Sergiu Weisz (National University of Science and Technology POLITEHNICA Bucharest (RO))
    28/05/2026, 17:09
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    Hosted by the National University of Science and Technology POLITEHNICA Bucharest, the RO-03-UPB site has been an active member of the WLCG computing Grid since 2017 and a member of the ALICE Grid since 2005. Over the course of this collaboration, the site has evolved significantly: originally deployed as a Tier-2 facility, it has grown into a major contributor to the ALICE Grid, currently...

    Go to contribution page
  426. Gerhard Immanuel Brandt (Bergische Universitaet Wuppertal (DE))
    28/05/2026, 17:27
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    In its high luminosity phase, the Large Hadron Collider (LHC) will achieve unprecedented levels of instantaneous luminosity of up to $7.5\times10^{34}$ cm$^{-2}$s$^{-1}$, which exposes the ITk (Inner Tracker) Pixel detector of the ATLAS experiment to extraordinary levels of radiation. A maximum fluence of $9.2\times10^{15}$ cm$^{-2}$ 1 MeV $n_{eq}$ in the harshest radiation region at the innermost...

    Go to contribution page
  427. Yuxiao Wang (Tsinghua University (CN))
    28/05/2026, 17:27
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    FireworksWeb is a web-based event display utilizing a C++ ROOT/EVE backend with SAPUI5 frontend for interactive 3D visualization of particle physics events directly in the browser. Building upon ROOT/EVE7 and RenderCore, it eliminates local software installation while maintaining professional-grade event display capabilities. FireworksWeb is currently deployed for live event monitoring in the...

    Go to contribution page
  428. Pierfrancesco Cifra (CERN)
    28/05/2026, 17:27
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    During Long Shutdown 3 (LS3), the LHCb experiment will undergo a major upgrade, requiring a new data centre to cope with the 32 Tb/s of data produced by the detector. Part of the data-acquisition infrastructure, mostly composed of Commercial Off-The-Shelf (COTS) Data Center hardware, must be installed close to the detector, which introduces several challenges, including limited underground...

    Go to contribution page
  429. Jeffrey Krupa (SLAC)
    28/05/2026, 17:27
    Track 5 - Event generation and simulation
    Oral Presentation

    Applying automatic differentiation (AD) to particle simulations such as Geant4 opens the possibility of gradient-based optimization for detector design and parameter tuning in high-energy physics. In this talk, we extend our previous work on differentiable Geant4 simulations by incorporating multiple Coulomb scattering into the physics model, moving closer to realistic detector modeling. The...

    Go to contribution page
  430. Borja Garrido Bear (CERN)
    28/05/2026, 17:27
    Track 4 - Distributed computing
    Oral Presentation

    We present the evolution of the CERN IT Monitoring (MONIT) architecture for the CERN Data Centres and WLCG Infrastructure monitoring use cases, and how it has been updated to improve scalability, interoperability, and observability. Prometheus has been introduced as the core metrics collection and aggregation system and the previous Collectd-based framework is being replaced by Prometheus...

    Go to contribution page
  431. Deniz Tuana Ergonul, Shyam Bhuller (University of Oxford (GB))
    28/05/2026, 17:27
    Track 2 - Online and real-time computing
    Oral Presentation

    The Data Acquisition (DAQ) system of the Deep Underground Neutrino Experiment (DUNE) at the Sanford Underground Research Facility must receive detector data aggregated over multiple 100 Gbps Ethernet streams from the Far Detector modules' front-end electronics. This contribution outlines the performance tuning and evaluation of high-performance COTS (Commercial Off-The-Shelf) readout servers,...

    Go to contribution page
  432. Dr Alexey Boldyrev
    28/05/2026, 17:27
    Track 3 - Offline data processing
    Oral Presentation

    The reliability and reproducibility of machine learning models are critically important for their use in automated systems. In the field of HEP, this may include detector optimization, use in blind analysis, and situations where estimates of model uncertainties are required. Building upon our previous research on developing robust model selection algorithms, we propose and comprehensively test...

    Go to contribution page
  433. Dr Danila Oleynik (Joint Institute for Nuclear Research (RU))
    28/05/2026, 17:27
    Track 2 - Online and real-time computing
    Oral Presentation

    The Spin Physics Detector (SPD) is currently under construction at the second interaction point of the NICA collider at JINR. Its primary physics goal is to test fundamental aspects of Quantum Chromodynamics by studying the polarized structure of the nucleon and investigating spin-dependent phenomena in collisions of longitudinally and transversely polarized protons and deuterons. These...

    Go to contribution page
  434. Jonas Wurzinger (Technische Universitat Munchen (DE))
    28/05/2026, 17:27
    Track 9 - Analysis software and workflows
    Oral Presentation

    Despite decades of searching for the true nature of dark matter, no compelling evidence of its particle nature has been found. Without this evidence, the targets of searches for new physics must be carefully re-evaluated in terms of their theoretical completeness and experimental relevance. Exploring high-dimensional parameter spaces, such as the 19-dimensional phenomenological Minimal...

    Go to contribution page
  435. Ragansu Chakkappai (IJCLab-Orsay)
    28/05/2026, 17:45
    Track 9 - Analysis software and workflows
    Oral Presentation

    In collider-based particle physics experiments, independent events are commonly represented as tabular datasets of high-level variables, an approach widely used in multivariate and machine learning analyses. Inspired by the success of foundation models in language and vision, recent developments have introduced tabular foundation models such as TabNet (Google), TabTransformer (Amazon), TABERT...

    Go to contribution page
  436. Maksym Naumchyk
    28/05/2026, 17:45
    Track 3 - Offline data processing
    Oral Presentation

    Awkward Array is a widely used library in high-energy physics (HEP) for representing and manipulating nested, variable-length data in Python. Previous CHEP contributions have explored GPU acceleration for Awkward Array, demonstrating the feasibility and performance benefits of CUDA-based backend while also identifying limitations related to irregular data access, fine-grained kernel launches,...

    Go to contribution page
  437. Tomonori Takahashi (RCNP, University of Osaka)
    28/05/2026, 17:45
    Track 2 - Online and real-time computing
    Oral Presentation

    The SPADI Alliance in Japan is developing a common, trigger-less streaming data acquisition (DAQ) platform to address the increasing demands of modern nuclear and particle physics experiments. The Alliance integrates R&D efforts from front-end electronics to computing and networking, promoting open collaboration across laboratories.
    At the hardware level, the platform is developing a family...

    Go to contribution page
  438. Julius Hrivnac (Université Paris-Saclay (FR))
    28/05/2026, 17:45
    Track 1 - Data and metadata organization, management and access
    Oral Presentation

    This contribution presents the architecture and implementation of an intelligent database system for astronomical alerts produced by the Zwicky Transient Facility (ZTF) and the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST). The system is designed to support efficient exploration of large-scale alert streams through both traditional query mechanisms and advanced...

    Go to contribution page
  439. Elisabetta Ronchieri
    28/05/2026, 17:45
    Track 5 - Event generation and simulation
    Oral Presentation

    Validation testing of the physics content of Monte Carlo particle transport systems—used extensively in high-energy, astroparticle, and nuclear physics—requires extensive retrieval of pertinent experimental measurements from the scientific literature. This process often entails examining thousands of papers published over several decades. The rapidly growing volume of literature poses a...

    Go to contribution page
  440. Diego Ciangottini (INFN, Perugia (IT))
    28/05/2026, 17:45
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    The acceleration of machine learning and domain algorithm inference is increasing in importance as the LHC and other domains seek to improve reconstruction and analysis performance in extreme environments. At the same time, the geographically distributed computing infrastructure model is increasing in complexity, with the introduction of heterogeneous resources (HPC, HTC, cloud). There is...

    Go to contribution page
  441. Pawel Maciej Plesniak (Imperial College (GB))
    28/05/2026, 17:45
    Track 2 - Online and real-time computing
    Oral Presentation

    DUNE is a long-baseline neutrino oscillation experiment utilizing several detectors at both the Near Detector (ND) and Far Detector (FD) facilities. The design and architecture of the FD control and data acquisition (DAQ) system have progressed with the successful operation of the ProtoDUNE-II FD prototypes at CERN. The control system architecture has evolved from a single monolithic structure...

    Go to contribution page
  442. Raghuvar Vijayakumar (University of Freiburg (DE))
    28/05/2026, 17:45
    Track 4 - Distributed computing
    Oral Presentation

    Distributed computing infrastructures are shared by multiple research communities, particularly within High Energy Physics (HEP), where precise and transparent resource accounting is critical. To meet these demands, we developed AUDITOR (AccoUnting DatahandlIng Toolbox for Opportunistic Resources), a flexible, modular, and extensible accounting ecosystem designed for heterogeneous computing...

    Go to contribution page
  443. 隗立畅 weilc (IHEP)
    28/05/2026, 17:45
    Track 8 - Analysis infrastructure, outreach and education
    Oral Presentation

    Analyzing ROOT files stored in remote Data Lakes (S3) presents a significant bottleneck: traditional workflows requiring full file downloads incur high latency, while pure client-side solutions (e.g., JSROOT) frequently cause browser memory exhaustion (OOM) when parsing gigabyte-scale binaries.
    To resolve this, we developed a lightweight, hybrid visualization microservice that decouples data...

    Go to contribution page
  444. Thomas Owen James (CERN)
    Track 2 - Online and real-time computing
    Poster Presentation

    The Compact Muon Solenoid (CMS) experiment at the CERN LHC has traditionally relied on a highly selective Level-1 trigger to reduce the 40 MHz LHC collision rate to a rate more manageable for data-reading and recording. This selection inherently limits access to event types with large irreducible backgrounds or with unconventional signatures. During LHC Run 3, CMS deployed a novel 40 MHz data...

    Go to contribution page
  445. Kai Yi (Nanjing Normal University (CN))
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    For many university-based HEP groups, the path to becoming a production-ready WLCG Tier-3 center can seem daunting, often constrained by limited budgets, small teams, and a steep learning curve for grid middleware. This poster presents the evolution of the NNU HEP Farm not just as a site report, but as a practical case study and blueprint for other groups embarking on a similar journey.
    We...

    Go to contribution page
  446. Sambit Sarkar (Tata Institute of Fundamental Research Mumbai)
    Track 3 - Offline data processing
    Poster Presentation

    The GRAPES-3 experiment aims to study high-energy cosmic rays through their production mechanisms, propagation, and sources. Located in Ooty at an altitude of 2200 m, it spans an area of 25,000 m$^2$ and comprises about 400 plastic scintillator detectors (SDs) arranged with 8 m spacing to measure the charged component of extensive air showers, along with a dedicated muon detector consisting of...

    Go to contribution page
  447. Sandro Christian Wenzel (CERN)
    Track 4 - Distributed computing
    Poster Presentation

    ALICE has undergone a substantial software transformation from Run 2 to Run 3, embracing a message-passing, distributed-computing paradigm that unifies online and offline processing. Building on this shift, we present the Monte Carlo (MC) production framework developed within the O2DPG environment, which orchestrates full Run 3 and Run 4 simulation workflows across the heterogeneous computing...

    Go to contribution page
  448. Dr Brij Kishor Jashal (Rutherford Appleton Laboratory)
    Track 4 - Distributed computing
    Poster Presentation

    From Probes to Policy: Harmonising ATLAS Resource Health Signals

    The operational status of WLCG resources in ATLAS is determined through several parallel mechanisms: probe results and declared downtimes (Switcher), persistent failures in functional or performance tests (HammerCloud), and data transfer or storage exclusion conditions managed by distributed data management (DDM). ATLAS...

    Go to contribution page
  449. Javier Prado Pico (Universidad de Oviedo (ES))
    Track 2 - Online and real-time computing
    Poster Presentation

    In preparation for the High-Luminosity LHC, the CMS experiment is upgrading its Level-1 Trigger system to handle increased luminosity and pile-up. The new trigger system opens up a plethora of possibilities to detect non-conventional signatures such as those arising from long-lived particles (LLPs). In particular, such LLPs may decay far from the interaction point into hadrons on...

    Go to contribution page
  450. Maria Mateea Popescu (National University of Science and Technology POLITEHNICA Bucharest (RO))
    Track 4 - Distributed computing
    Poster Presentation

    Authors: Maria-Mateea Popescu (CERN, maria.mateea.popescu@cern.ch),
    Costin Grigoraș (CERN, costin.grigoras@cern.ch),
    Cristian Mărgineanu (National University of Science and Technology Politehnica Bucharest, cristian.margineanu@stud.acs.upb.ro)
    on behalf of the ALICE collaboration

    MonALISA serves as the monitoring backbone for the distributed computing infrastructure of the ALICE...

    Go to contribution page
  451. Dr Hao-Kai Sun (IHEP, CAS)
    Track 9 - Analysis software and workflows
    Poster Presentation

    With the advent of 4th-generation photon sources, the diversity and volume of data from multi-disciplinary beamlines present practical challenges for efficient analysis. This presentation introduces a modular workflow management system designed to streamline data processing pipelines. Our work focuses on: (1) a hierarchical encapsulation mechanism to help beamline scientists and users share...

    Go to contribution page
  452. Ben Jones (CERN)
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    CERN manages over 10,000 Windows devices – from simulation-heavy workstations to security-hardened desktops and servers critical for accelerator controls. For two decades, this was done with CMF, the CERN-built device management solution. Today, we are gradually moving to mainstream solutions such as Microsoft Intune and Configuration Manager, aiming to leverage industry-standard off-the-shelf...

    Go to contribution page
  453. Savva Savenkov (INR RAS, MIPT(NRU))
    Track 6 - Software environment and maintainability
    Poster Presentation

    The integration of diverse high-energy collision Monte Carlo models into a unified simulation workflow is usually time-consuming. This is primarily because these models are conventionally developed as monolithic applications with heterogeneous data input and output formats. As a result, a need for multiple converters and auxiliary scripts arises, which not only impedes the modeling process but...

    Go to contribution page
  454. Daniele Spiga, Diego Ciangottini (INFN, Perugia (IT)), Francesco Brivio (Universita & INFN, Milano-Bicocca (IT)), Giulio Bianchini (Universita e INFN, Perugia (IT)), Massimo Sgaravatto (Universita e INFN, Padova (IT)), Mirko Mariotti (Universita e INFN, Perugia (IT)), Paolo Dini, Simone Gennai (Universita & INFN, Milano-Bicocca (IT))
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    In recent years, INFN has consolidated and expanded its distributed computing infrastructure toward heterogeneous hardware systems, also thanks to the ICSC and TeraBIT projects, funded in the context of the Italian National Recovery and Resilience Plan. Among the most innovative components of the national federation are the specialized hardware clusters known as HPC Bubbles, in particular...

    Go to contribution page
  455. Lauren Meryl Hay (SUNY Buffalo), Rishabh Jain (Brown University (US))
    Track 5 - Event generation and simulation
    Poster Presentation

    Validating that a full phase-space reweighting of a Monte Carlo prediction preserves the physical fidelity of the underlying model can be challenging, and often relies on comparisons to marginalized 1D histograms of kinematic variables that can mask subtle biases of the original high-dimensional unbinned prediction. In this poster, we present a novel, unbinned approach to comparing the...

    Go to contribution page
  456. Oxana Smirnova (Lund University)
    Track 9 - Analysis software and workflows
    Poster Presentation

    We present a prototype implementation of a particle physics analysis workflow using Snakemake for an ATLAS anomaly detection search. Snakemake provides a flexible and scalable workflow for managing thousands of jobs with complex dependencies, supporting execution both locally and across different HPC environments. The workflow cleanly separates small-scale tasks, such as plotting, histogram...

    Go to contribution page
  457. Dr Wenshuai Wang (Institute of High Energy Physics)
    Track 4 - Distributed computing
    Poster Presentation

    High-energy physics experiments typically involve a large number of computing jobs and generate massive volumes of data. When users submit numerous jobs and produce substantial datasets, they often face challenges such as monitoring the status of multiple jobs and conducting statistical analysis on the data. To address these issues, we have developed a web-based job and data management...

    Go to contribution page
  458. Minh-Tuan Pham (University of Wisconsin Madison (US))
    Track 3 - Offline data processing
    Poster Presentation

    Charged-particle track reconstruction is an important part of modern collider experiments such as ATLAS and CMS that will face challenging conditions in the future High Luminosity phase of the LHC due to high pile-up. The increasing time and compute costs associated with the current tracking algorithm have spurred the development of machine learning (ML) alternatives to high degrees of...

    Go to contribution page
  459. Anwar Ibrahim
    Track 5 - Event generation and simulation
    Poster Presentation

    Detailed simulation of particle interactions in calorimeters represents a major computational bottleneck for high-energy physics experiments, particularly in the upcoming High-Luminosity LHC (HL-LHC) era. While Generative Adversarial Networks (e.g., CaloGAN) have demonstrated the potential of ML-based fast simulation, they often suffer from mode collapse and limited precision in modeling...

    Go to contribution page
  460. Ricardo Rocha (CERN)
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    The increasing use of GPUs and accelerator-based computing for simulation, reconstruction and machine learning has significantly expanded scientific capabilities in HEP. However, these workloads also introduce new challenges in terms of energy consumption, operational cost and overall carbon footprint, especially as computing demand grows with future experiments.

    This contribution presents...

    Go to contribution page
  461. David Schultz (University of Wisconsin-Madison)
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    The IceCube Neutrino Observatory has accumulated over 15 years of science data, with more years to come. These data have previously been archived in a distributed setup according to accessibility needs and processing level. Trigger-level data is stored in NERSC’s tape system for “online” storage and on physical hard drives kept on shelves in a climate-controlled room for “offline” storage at...

    Go to contribution page
  462. Stefan Krischer (RWTH Aachen University)
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    As global greenhouse gas emissions continue to rise, a significant share originates from growing resource consumption in research environments. Beyond energy use, this includes human resources, infrastructure, equipment, and material life cycles. In research on Universe and Matter, the increasing reliance on large-scale infrastructures and complex digital workflows further amplifies this...

    Go to contribution page
  463. Pawel Kopciewicz (CERN)
    Track 6 - Software environment and maintainability
    Poster Presentation

    We present a suite of applications for an agentic chatbot to enhance workflows in the LHCb Real-Time Analysis (RTA). The first presented use case allows experiment operators to request, via natural language on the Mattermost platform, the automated generation of monitoring plots—such as trigger rate or detector temperature versus time—from live or historical subsystem data. This functionality...

    Go to contribution page
  464. Dr Peng Hu (Institute of High Energy Physics, Chinese Academy of Sciences)
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    In large-scale scientific research, experimental data faces high acquisition costs and a shortage of high-quality data, while a significant amount of critical data is scattered in unstructured forms across various scientific literature. To address this issue, this study proposes an artificial intelligence framework for constructing high-quality knowledge bases from literature corpora and its...

    Go to contribution page
  465. Alexey Rybalchenko (G)
    Track 6 - Software environment and maintainability
    Poster Presentation

    Large Language Models (LLMs) are transforming software development and data analysis workflows in many fields, including nuclear and particle physics experiments.
    However, deploying LLMs in production research environments requires careful attention to scalability, security, and resource efficiency.
    This work presents a versatile production-grade LLM inference and document intelligence...

    Go to contribution page
  466. Pierfrancesco Cifra (CERN)
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    In the exabyte era, physical science research infrastructures will have to deal with massive quantities of raw data by relying on large heterogeneous computing facilities. In the LHCb context, the ODISSEE project aims to maximize the computational performance and reliability of those systems while reducing the required energy and the total cost of ownership by using AI tools and techniques. By...

    Go to contribution page
  467. Rosa Petrini (Universita e INFN, Firenze (IT))
    Track 8 - Analysis infrastructure, outreach and education
    Poster Presentation

    Over the past decade, Machine Learning and Artificial Intelligence technologies have evolved at an extraordinary pace, making collaboration among geographically distributed experts and students more critical than ever.
    The AI_INFN Platform is designed to play a key role in providing access to hardware accelerators for research communities in both fundamental and applied physics.
    The platform...

    Go to contribution page
  468. Xuantong Zhang (Institute of High Energy Physics, Chinese Academy of Sciences (CN)), Dr Yujiang BI (Institute of High Energy Physics, Chinese Academy of Sciences)
    Track 8 - Analysis infrastructure, outreach and education
    Poster Presentation

    With the emergence and continuous evolution of various new development and analysis tools like Jupyter and VSCode, the demand for interactive data analysis has been steadily increasing, leading to significant changes in the traditional high-energy physics data analysis workflow.
    To meet the growing and evolving needs of high-energy physics users in data analysis and processing, an all-in-one...

    Go to contribution page
  469. Mr Dian Liu (Institute of High Energy Physics)
    Track 9 - Analysis software and workflows
    Poster Presentation

    Large scientific facilities such as the High Energy Photon Source (HEPS) generate massive volumes of heterogeneous experimental data during operation. These data exhibit remarkable diversity in terms of scale, structure, and distribution characteristics, imposing extremely high requirements on the real-time response capability and long-term archive storage efficiency of data processing...

    Go to contribution page
  470. Hong Wang
    Track 9 - Analysis software and workflows
    Poster Presentation

    High-energy physics experiments such as BESIII produce large volumes of event-level data stored in ROOT-based formats and represented by collections of particle tracks and associated information. While these data are fundamental to physics analyses, their highly structured representations are not directly compatible with modern large language models (LLMs) and AI-driven reasoning systems....

    Go to contribution page
  471. Shiyuan Li (Nanyang Normal University)
    Track 9 - Analysis software and workflows
    Poster Presentation

    Space astronomy satellites serve as critical infrastructure in the field of astrophysics, and data processing is one of the most essential processes for conducting scientific research. The Institute of High Energy Physics (IHEP) of the Chinese Academy of Sciences has undertaken the development and construction of multiple space astronomy satellites, including HXMT, GECAM, SVOM, eXTP and CATCH....

    Go to contribution page
  472. Christopher Barnes (IT-CD-CC)
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    CERN operates a large and distributed computing environment in which provisioning, configuration, and operational state are handled by different systems. Since 2012, the IT department has invested heavily in bridging these areas under the Agile Infrastructure project. Open-source projects such as OpenStack, Puppet, and Foreman have been integrated with in-house services to offer a cohesive...

    Go to contribution page
  473. Torri Jeske (Jefferson Lab)
    Track 2 - Online and real-time computing
    Poster Presentation

    At Jefferson Lab, the CEBAF Online Data Acquisition (CODA) kit and the commonly used front-end electronics modules have recently been upgraded to support streaming readout data acquisition (DAQ). Depending on the use case, the streaming DAQ data may consist primarily of empty time frames during cosmic runs, or it may be dominated by background signals. A toolkit that applies user-defined online...

    Go to contribution page
  474. Christian Voss, Marina Sahakyan, Mr Tigran Mkrtchyan (DESY)
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    The dCache project provides an open-source, highly scalable distributed storage system deployed at numerous laboratories worldwide. Its modular architecture supports high-rate data ingestion, WAN data distribution, efficient HPC access, and long-term archival storage. Although initially developed for high-energy physics, dCache now serves a broad range of scientific communities with diverse...

    Go to contribution page
  475. Minghua Liao (Sun Yat-Sen University (CN))
    Track 8 - Analysis infrastructure, outreach and education
    Poster Presentation

    Visualization tools are used to display the detector geometry and event hit information. They play an important role in physics analysis, data quality monitoring, algorithm optimization, physics education, and public outreach. Unity, a powerful game engine, offers advantages such as high-performance rendering, multi-platform support, and a rich set of tools and features, making it suitable for...

    Go to contribution page
  476. Mr George Raduta (CERN)
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    The Bookkeeping application is the central logbook and state-tracking system of the ALICE experiment at CERN’s Large Hadron Collider, serving detector operations, data taking, and analysis workflows across Run 3, and the forthcoming Long Shutdown 3 (LS3) and Run 4. While its initial design addressed requirements anticipated before Run 3, operational experience and extended use by both...

    Go to contribution page
  477. Ujval Madhu (Research Engineer)
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    High-performance data management systems are foundational to modern scientific facilities, particularly in high-energy physics (HEP) and nuclear physics (NP) where experiments generate massive datasets. The Large Hadron Collider produces 5 petabytes daily, while the High-Luminosity LHC upgrade will require 10× greater capacity by 2030. Individual experiments document their solutions, yet...

    Go to contribution page
  478. Lorenzo Rinaldi (Universita e INFN, Bologna (IT))
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    The ICSC national research center was established as part of the Italian National Recovery and Resilience Plan (PNRR) with the aim of strengthening scientific research and technological innovation in the fields of supercomputing, big data, and quantum computing. This contribution presents the main activities conducted by the Italian community of the ATLAS experiment within the ICSC project,...

    Go to contribution page
  479. Tarik Ourida
    Track 2 - Online and real-time computing
    Poster Presentation

    Standard Level-1 trigger algorithms treat collision events as statistically independent, a design choice that simplifies implementation but prevents models from leveraging short-term variations in detector performance. These fluctuations can transiently distort reconstructed features and weaken the stability of fast classification algorithms. To address this limitation, we introduce Context...

    Go to contribution page
  480. Theodoros Chatzistavrou (National Technical Univ. of Athens (GR))
    Track 2 - Online and real-time computing
    Poster Presentation

    The LHC experiments have so far calibrated and re-reconstructed data typically years after the end of data-taking to make them usable for precision physics analyses, costing millions of CPU hours. This approach becomes untenable at the HL-LHC, with 10 times larger datasets. Jet energy corrections (JEC) are among the dominant sources of systematic uncertainty in many physics analyses and...

    Go to contribution page
  481. Sebastian Wozniewski (Georg August Universitaet Goettingen (DE))
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    On batch systems with many jobs sharing a worker node, the draining of a node in order to terminate it for operational purposes without job abortions leads to idle CPU cores and a loss of compute time. This is becoming a prominent issue at German university-based Tier-2 centres, in particular. Towards the High-Luminosity LHC, they are undergoing a transformation and CPU will be provided via...

    Go to contribution page
  482. Gianmaria Del Monte (CERN)
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    The continuous growth in data volumes and the diversification of access patterns in high-energy physics (HEP) are driving interest in storage systems that offer both extreme performance and ease of use. To explore the potential of modern flash technologies for scientific workloads, we conducted a comprehensive benchmarking campaign on a PureStorage all-flash appliance, focusing on its...

    Go to contribution page
  483. Anna Kravchenko (CERN), Felice Pantaleo (CERN)
    Track 8 - Analysis infrastructure, outreach and education
    Poster Presentation

    The High-Luminosity LHC era is pushing experiments toward more complex software stacks, high-throughput data processing, heterogeneous computing architectures, DAQ, and real-time AI decision making. To strengthen community capacity for next-generation trigger and data-processing systems, we present the CERN STEAM Academy: a 10-week, hands-on programme hosted at CERN and developed within the Next...

    Go to contribution page
  484. Valentin Volkl (CERN)
    Track 4 - Distributed computing
    Poster Presentation

    The CernVM-Filesystem (CVMFS) is a global, read-only, on-demand filesystem optimized for software distribution. CVMFS is also a very efficient way of distributing container images and can be used with container runtimes such as Apptainer or Containerd to lazy-load images. The unpacked.cern.ch repository at CERN, a service that allows users to publish container images to CVMFS, has become one of...

    Go to contribution page
  485. Yao Zhang
    Track 3 - Offline data processing
    Poster Presentation

    Charged particle tracking is a critical task for physics analysis. In this work, we propose applying reinforcement learning (RL) for reconstructing particle trajectories in drift chambers. Our designed workflow uses the output of a graph neural network (GNN) as the observation for RL. Agent training employs a reward metric derived from Monte Carlo truth information, with the objective of...

    Go to contribution page
  486. Laurence Field (CERN)
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    For over two decades, the LHC@home volunteer computing project has provided additional opportunistic computing capacity to support the scientific research conducted at CERN. With the retirement of the SixTrack application, the only natively executable one, there has been a significant reduction in job throughput. This paper highlights the difference in job throughput between the native and...

    Go to contribution page
  487. CMS Collaboration
    Track 9 - Analysis software and workflows
    Poster Presentation

    The CMS Collaboration has, for several years, relied on correctionlib as the central framework for producing, validating, and distributing analysis corrections in a unified and structured JSON-based format. Recent developments have significantly enhanced this framework. The deployment of correction files for all major physics objects has been fully automated through GitLab CI/CD workflows,...

    Go to contribution page
  488. CMS Collaboration
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    Bandwidth and storage limitations are a key bottleneck for many CMS physics measurements and searches. To mitigate these constraints, CMS has developed a set of techniques that increase the number of events written to disk while maintaining physics performance. These strategies remain an active area of development and are being further optimized for Phase-2.

    One such technique, RawPrime,...

    Go to contribution page
  489. Chan-anun Rungphitakchai (Chulalongkorn University (TH))
    Track 4 - Distributed computing
    Poster Presentation

    The CMS collaboration operates a large distributed computing infrastructure to meet the computing requirements of the experiment. About half a million CPU cores and an exabyte of storage are utilized to reconstruct the recorded data, simulate signals of physics processes, and analyze data. Computing resources are located at about one hundred sites around the world.

    Monitoring the...

    Go to contribution page
  490. CMS Collaboration
    Track 3 - Offline data processing
    Poster Presentation

    CMS is transitioning to use ROOT’s new RNTuple data storage format for the files CMS will write in the HL-LHC era. Based on initial tests, CMS expects faster I/O and smaller files compared to the present TTree storage format. This contribution will show a comprehensive performance comparison between RNTuple and TTree I/O using CMS AOD and MiniAOD data formats as test cases for both simulation...

    Go to contribution page
  491. Maksym Naumchyk (Princeton University (US))
    Track 6 - Software environment and maintainability
    Poster Presentation

    This presentation covers my recent project as an IRIS-HEP fellow, in which I worked on improving the Coffea 'schemas' by simplifying how they work internally. The work eventually transitioned into creating a new package containing all the simplified schemas, separated from Coffea; eventually Coffea will use them instead of its old schemas. This new package was given the name Zipper and...

    Go to contribution page
  492. Dr Santiago Gonzalez De La Hoz (Univ. of Valencia and CSIC (ES))
    Track 7 - Computing infrastructure and sustainability
    Oral Presentation

    This work presents the consolidated contributions of the Spanish Tier-1 and Tier-2 centers to the computing infrastructure of the ATLAS experiment at the LHC. As of September 2025, our focus spans the final phase of Run 3, the ongoing preparations for the Long Shutdown 3 (LS3), and the strategic planning for the High-Luminosity LHC (HL-LHC) era. Our GRID infrastructure is continuously being...

    Go to contribution page
  493. Jogi Suda Neto (University of Alabama (US))
    Track 3 - Offline data processing
    Poster Presentation

    The underlying likelihood of a given event originating from a partonic-level process is known to be approximately invariant under the Lorentz group. We find that quantum neural networks equivariant under such continuous symmetries exhibit improved generalization, sample and training time complexity. We show that this property is induced by the number of distinct group orbits in the data, with...

    Go to contribution page
  494. Eric Lancon (Brookhaven National Laboratory (US))
    Track 8 - Analysis infrastructure, outreach and education
    Poster Presentation

    The Collaborative Research Information Sharing Platform (CRISP) provides an integrated system for managing scientific collaboration workflows, documentation, and institutional knowledge for the future Electron Ion Collider (EIC). CRISP is designed to address practical challenges in coordinating activities across a distributed international collaboration of several thousand users, using a...

    Go to contribution page
  495. Jize Yang (Sun Yat-Sen University)
    Track 2 - Online and real-time computing
    Poster Presentation

    The Jiangmen Underground Neutrino Observatory (JUNO) is a large neutrino experiment located in southern China, aiming to determine the neutrino mass ordering, as well as to address other neutrino physics topics. JUNO completed detector commissioning and started data taking on Aug. 22, 2025. A data quality monitoring (DQM) system is critical for data taking, data quality control, and data analysis in any high...

    Go to contribution page
  496. Robin Hofsaess
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    With this contribution, a data-driven method for the performance comparison of Grid sites is presented.
    While WLCG sites with an MoU typically report their performance in HS23, opportunistic sites, such as HPC or Tier-3 centers, usually do not.
    For the comparison of opportunistically used HPC clusters in Germany, a method was developed to assess the performance of these sites based on CMS...

    Go to contribution page
  497. Mr Andrea Paccagnella
    Track 3 - Offline data processing
    Poster Presentation

    The LHCf experiment measures forward neutral particle production at the LHC, providing key inputs for the tuning of hadronic interaction models used in ultra-high-energy cosmic ray physics. The reconstruction of multi-photon final states in forward experiments represents a challenging offline computing problem, due to overlapping showers, non-uniform detector response, and strong correlations...

    Go to contribution page
  498. Dr Mateusz Zarucki (CERN)
    Track 2 - Online and real-time computing
    Poster Presentation

    The Next Generation Triggers (NGT) R3 (Real-time Reconstruction Revolution) project in CMS aims to rethink the experiment’s data acquisition system, allowing its physics programme to process all collisions accepted by the Level-1 hardware-based trigger system (L1T), in view of the Phase-2 upgrade for the HL-LHC. Its main objective is to expand the High-Level Trigger (HLT) data scouting...

    Go to contribution page
  499. Caterina Marcon (Università degli Studi e INFN Milano (IT)), David Rebatto (Università degli Studi e INFN Milano (IT))
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    INFN manages the DATAcloud infrastructure, a federated and scalable network of cloud computing sites. Within the framework of the ICSC project (National Research Center in High-Performance Computing, Big Data, and Quantum Computing), funded by the Italian National Recovery and Resilience Plan (PNRR), research and development activities are carried out to foster innovation in high-performance...

    Go to contribution page
  500. Wojciech Krupa (CERN)
    Track 5 - Event generation and simulation
    Poster Presentation

    The Gauss software is the main simulation framework in LHCb and handles both the event generation step and the tracking of particles through the detector material. Gauss has recently been restructured as a thin LHCb-specific software layer above an experiment-independent HEP simulation framework (Gaussino). In this talk we report on the steps that were taken toward the deployment and...

    Go to contribution page
  501. Giovanni Zago (Universita e INFN, Padova (IT))
    Track 2 - Online and real-time computing
    Poster Presentation

    The Level-1 Data Scouting (L1DS) system introduces a new real-time data acquisition and processing path in CMS that captures information reconstructed by the Level-1 Trigger at the full 40 MHz collision rate, without any preselection. For the HL-LHC era, the Level-1 Trigger will undergo a major architectural evolution, delivering significantly richer and higher-quality reconstructed physics...

    Go to contribution page
  502. Anurag Sritharan (Deutsches Elektronen-Synchrotron (DE))
    Track 6 - Software environment and maintainability
    Poster Presentation

    The CMS experiment will upgrade its detectors to cope with the higher luminosities and collision rates of the High-Luminosity era of the LHC. One key upgrade is the High Granularity Calorimeter (HGCAL), which will completely replace the current end-cap calorimeter. The hadronic calorimeter is split into two sections using different technologies, depending on the expected amount of...

    Go to contribution page
  503. Yuning Su (Sun Yat-Sen University (CN))
    Track 5 - Event generation and simulation
    Poster Presentation

    The detector identifier and geometry management system plays an important role in the offline software of every nuclear and particle physics experiment. The Jiangmen Underground Neutrino Observatory (JUNO), a large neutrino experiment whose design started in 2013, has completed detector construction and began data taking in 2025. We will describe the design and implementation of the JUNO detector identifier...

    Go to contribution page
  504. Pedro Glaser De Senna (Federal University of Rio de Janeiro (BR))
    Track 6 - Software environment and maintainability
    Poster Presentation

    Developing systems with reusability in mind is often a challenge. Even when a common context for system deployment is identified, some groundwork is required before it can be adopted by different teams. The Glance project at CERN addresses this challenge by implementing modular development and reuse across over 20 systems spanning four experiments: ALICE, ATLAS, CMS and LHCb. Originally...

    Go to contribution page
  505. Xuesen Wang
    Track 8 - Analysis infrastructure, outreach and education
    Poster Presentation

    The Taishan Antineutrino Observatory (TAO) is a satellite experiment of the Jiangmen Underground Neutrino Observatory (JUNO). It is located near the Taishan nuclear power plant (NPP) to monitor the neutrinos emitted from the NPP.
    An event display is a critical tool in High Energy Physics (HEP) experiments. It supports monitoring of data taking, data quality control, event simulation, reconstruction, and...

    Go to contribution page
  506. Mike Clymer (Colorado State University (US))
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    DUNE is a next-generation neutrino oscillation experiment. During its decades-long operational lifetime, it is expected that many exabytes of data will be collected. It is critical that this data be correctly characterized with respect to its associated conditions metadata – the non-event data used to process event data during reconstruction and analysis. To meet the operational scale and...

    Go to contribution page
  507. Mohamed Aly (Princeton University (US))
    Track 9 - Analysis software and workflows
    Poster Presentation

    The JAX framework provides automatic differentiation, JIT compilation, vectorization, and multi-hardware acceleration well-suited for statistical inference in HEP. In this contribution, we present an ecosystem of interoperable tools that leverage the power of JAX, with a focus on everwillow, an inference tool agnostic to the underlying statistical model. At the modelling layer of this...

    Go to contribution page
  508. Anwar Ibrahim
    Track 5 - Event generation and simulation
    Poster Presentation

    In this work, we investigate diffusion-based generative models as a fast simulation alternative for modeling detector response on the example of the electromagnetic calorimeter response for the LHCb experiment. We consider both classical denoising diffusion probabilistic models with Gaussian noise and their extension based on Gamma-distributed noise, which is expected to be better suited for...

    Go to contribution page
  509. Mr Andrey Shevel (Petersburg Nuclear Physics Institute named by B.P. Konstantinov of National Research Centre «Kurchatov Institute» (NRC «Kurchatov Institute» - PNPI))
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    Traditional server network monitoring relies on specialized tools and complex queries, demanding significant domain expertise and being time-consuming. We propose a Digital Twin (DT) framework that provides a real-time, unified model of network behavior, enabling intuitive natural-language interactions powered by large language models (LLMs).
    The DT fuses live telemetry from monitoring...

    Go to contribution page
  510. CMS Collaboration
    Track 4 - Distributed computing
    Poster Presentation

    The CMS Submission Infrastructure (SI) provisions and orchestrates the compute resources used for CMS data processing, simulation, and analysis. While the SI has reliably supported Run-3 operations at scales of several hundred thousand concurrent jobs across Grid, HPC, and cloud sites, the computational demands of the HL-LHC era require a substantially more scalable and robust system. To...

    Go to contribution page
  511. Jacob Calcutt (Brookhaven National Laboratory (US))
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    The DUNE collaboration has an ongoing production effort to simulate the full detectors and to analyze the various prototypes that are currently running. Rucio is used to manage the 40 PB of files produced to date. When 500 or more jobs were sending output to Rucio simultaneously via Rucio upload, we observed timeouts, unhandled exceptions, and Rucio server restarts due to slow performance. In...

    Go to contribution page
  512. Jan de Cuveland (Goethe University Frankfurt (DE))
    Track 2 - Online and real-time computing
    Poster Presentation

    The CBM experiment at GSI/FAIR will investigate QCD matter at high baryon densities with a free-streaming, self-triggered detector readout delivering time-stamped data on approximately 5000 input links. Designed for aggregate data rates exceeding 1 TB/s, the First-level Event Selector (FLES) system performs timeslice building, aggregating these streams into overlapping processing intervals for...

    Go to contribution page
  513. Dr Giordon Holtsberg Stark (University of California,Santa Cruz (US))
    Track 9 - Analysis software and workflows
    Poster Presentation

    Statistical modeling is central to discovery in particle physics, yet the tools commonly used to define, share, and evaluate these models are often complex, fragmented, or tightly coupled to legacy systems. In parallel, the scientific Python community has developed a variety of statistical modeling tools that have been widely adopted for their performance and ease of use, but remain...

    Go to contribution page
  514. Pablo Saiz (CERN)
    Track 8 - Analysis infrastructure, outreach and education
    Poster Presentation

    Diversity awareness requires that we provide all CERN-made multimedia content with subtitles and make it fully searchable, addressing in particular the needs of persons with impairments and speakers of foreign languages. The goal of the “Transcription and Translation as a Service” (TTaaS) software [1] is to deliver a performant, privacy-preserving and cost-efficient Automated Speech...

    Go to contribution page
  515. Nikita Chalyi (Tomsk State University (TSU))
    Track 5 - Event generation and simulation
    Poster Presentation

    In this work, we describe enhancements to the hadronic de-excitation models implemented in the Geant4 toolkit. We extend the comprehensive and independent validation system for these models, covering a wide range of tests in the moderate energy region, from the reaction threshold to 3 GeV. The underlying processes have a defining impact on the formation of hadronic showers and the resulting...

    Go to contribution page
  516. Dr Maximilian Horzela (Georg August Universitaet Goettingen (DE))
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    The future Inner Tracker (ITk) of the ATLAS experiment will replace the current Inner Detector to maintain excellent tracking and vertexing performance under the challenging conditions of the High-Luminosity LHC (HL-LHC). It must withstand significantly increased radiation levels and occupancy while handling higher data rates and extending forward coverage. At the same time, with more than 150...

    Go to contribution page
  517. Mario Rey Regulez (CERN)
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    For several years, CERN has provided access to Windows remote desktops through its Windows Terminal Infrastructure service. As the need for stronger security measures grew, particularly around connections using Microsoft’s Remote Desktop Protocol, we began exploring ways to integrate Two-Factor Authentication (2FA) into this critical service. This presented unique challenges in CERN’s academic...

    Go to contribution page
  518. Woohyeon Heo (University of Seoul, Department of Physics (KR))
    Track 2 - Online and real-time computing
    Poster Presentation

    The ME0 Gas Electron Multiplier (GEM) detector systems will be installed for the phase-2 upgrade of the Compact Muon Solenoid (CMS) experiment in the Large Hadron Collider (LHC). The ME0 detectors, located in each endcap of the muon system, are the only muon detectors that cover the range 2.4 < |eta| < 2.8. Due to the high background environment, keeping the trigger rate low while maintaining...

    Go to contribution page
  519. Leonardo Mira Marins (Federal University of Rio de Janeiro (BR))
    Track 6 - Software environment and maintainability
    Poster Presentation

    The European Organization for Nuclear Research (CERN), home to the Large Hadron Collider, hosts one of the world’s largest particle physics experiments, the ATLAS experiment. To effectively support administration, workflow management, and scientific communication within ATLAS, the Glance project was established in 2003 to provide web-based automated solutions for membership, analysis tracking,...

    Go to contribution page
  520. Tyler Anderson (LBNL)
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    Long-running high energy physics experiments often depend on legacy architectures for orchestrating their data. While these custom tools can be effective, the expertise to maintain them is often concentrated in limited personnel, which raises concerns of software sustainability and long-term maintenance. Transitioning to a community-supported standard like Rucio, created at CERN, offers a...

    Go to contribution page
  521. Ting-Hsiang Hsu (National Taiwan University (TW))
    Track 9 - Analysis software and workflows
    Poster Presentation

    Foundation models are large neural networks pretrained on vast datasets and adapted to many downstream tasks with minimal task-specific training. In high-energy physics, precise Monte Carlo event generators allow the simulation of billions of events, but the enormous space of beyond-Standard-Model scenarios makes training specialized large models for each analysis computationally impractical....

    Go to contribution page
  522. Jack Charlie Munday
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    The Kubernetes platform operated by CERN IT has supported scientific computing, online services and accelerator controls since 2016. It enables fully automated deployment and management of clusters with native integration to CERN storage systems (CVMFS, EOS, AFS, CEPH), authentication (SSO, Kerberos) and networking. Today the service spans more than 600 clusters across CERN’s two main...

    Go to contribution page
  523. Mario Gonzalez (CERN)
    Track 6 - Software environment and maintainability
    Poster Presentation

    The CMS experiment relies on a complex software ecosystem for detector simulation, event reconstruction, and physics analysis. As data rates and detector complexity continue to rise, scaling this software efficiently across distributed resources has become essential. We present the extension of the CMS Software (CMSSW) into a fully distributed application, enabling a single logical workflow to...

    Go to contribution page
  524. Giovanni Zago (Universita e INFN, Padova (IT))
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    CloudVeneto is a private cloud for scientific communities, based on OpenStack and designed in 2013 to support INFN projects, initially mostly in Nuclear Physics and HEP. Over the last 12 years it has evolved by integrating resources and use cases from several Departments of the University of Padova. It currently supports several scientific disciplines across different domains, but it...

    Go to contribution page
  525. Jade Chismar (UC San Diego)
    Track 2 - Online and real-time computing
    Poster Presentation

    The upgrade of the Large Hadron Collider (LHC) to the High-Luminosity LHC (HL-LHC) will increase the number of proton-proton collisions severalfold, and thus place a large demand on computing resources for charged particle tracking. The Line Segment Tracking (LST) algorithm is a novel, highly parallelizable algorithm that can run efficiently on GPUs and has been integrated into the CMS...

    Go to contribution page
  526. Florian Uhlig (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
    Track 9 - Analysis software and workflows
    Poster Presentation

    FairRoot is a framework for simulation, reconstruction, and analysis of nuclear and high energy physics experiments. It provides the necessary building blocks that allow users to easily implement their specific experimental setup. Originally started as a project at GSI focused on a specific experiment, FairRoot has evolved into a widely used platform by various experiments worldwide,...

    Go to contribution page
  527. Ruben Lopez Ruiz (Universidad de Cantabria and CSIC (ES)), Celia Fernandez Madrazo (Boston University (US)), Sergio Sanchez Cruz (Universidad de Oviedo (ES)), Lara Lloret Iglesias (Universidad de Cantabria and CSIC (ES)), Pablo Martinez Ruiz Del Arbol (Universidad de Cantabria and CSIC (ES))
    Track 5 - Event generation and simulation
    Poster Presentation

    Muography is an emerging non-destructive testing technique that uses cosmic muons to probe the interior of objects and structures. It can be employed in industry for the preventive maintenance of critical equipment, in order to test the structural integrity of a facility. Several muography imaging algorithms based on machine learning methods are being developed in the...

    Go to contribution page
  528. Dr Alexey Boldyrev
    Track 5 - Event generation and simulation
    Poster Presentation

    The Focusing Aerogel Ring Imaging CHerenkov (FARICH) detector is a promising particle identification technology for the SPD experiment. The free-running (triggerless) data acquisition pipeline to be employed in the SPD results in a high data rate, necessitating new approaches to event generation and simulation of detector responses. In this work, we propose several machine-learning-based approaches...

    Go to contribution page
  529. Mr Andrey Kirianov (A.Alikhanyan National Science Laboratory (AM))
    Track 4 - Distributed computing
    Poster Presentation

    The Spin Physics Detector (SPD), currently under construction at the NICA complex at JINR, is expected to generate large volumes of data. It is therefore assumed that at least some members of the SPD Collaboration will contribute significant computing and storage resources. Unlike in large-scale grids, the number of participating sites is not so large and most of them will be located in Russia...

    Go to contribution page
  530. Berk Balci (CERN), Francesco Giacomini (INFN CNAF)
    Track 4 - Distributed computing
    Poster Presentation

    INDIGO IAM is a central Identity and Access Management service for distributed research infrastructures, supporting authentication and authorization at scale. As the number of relying services and users continues to grow, improving the performance and efficiency of IAM operations has become a key objective. One of the most significant performance bottlenecks identified in the current...

    Go to contribution page
  531. Yipu Liao (Institute of High Energy Physics, CAS, Beijing)
    Track 3 - Offline data processing
    Poster Presentation

    Denoising and track reconstruction in drift chambers are fundamental to particle identification and momentum measurement at electron-positron colliders. While Transformer architectures have revolutionized many sequence-processing domains, their potential for track reconstruction in high-energy physics has not been fully explored. In this work, we introduce Transformer-based methods at two stages...

    Go to contribution page
  532. Ben Jones (CERN)
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    CERN's computing infrastructure manages thousands of services across a complex distributed environment, requiring robust secret management for application credentials, root accounts, certificates, and service tokens. This paper explores CERN's transition from Puppet-oriented, in-house secrets management solutions to HashiCorp Vault as a centralized, enterprise-level secret management...

    Go to contribution page
  533. Dr Naomi Jarvis (Carnegie Mellon University)
    Track 3 - Offline data processing
    Poster Presentation

    GlueX is a hadronic physics photoproduction experiment based at Jefferson Lab. The GlueX spectrometer and beamline detectors include over a dozen individual detectors whose performance and calibrations are mostly independent. During data collection, the data are divided into a series of runs, lasting up to 2 hours each, with the run boundaries acting as calibration boundaries. Data quality...

    Go to contribution page
  534. Tadej Novak (Jozef Stefan Institute (SI))
    Track 5 - Event generation and simulation
    Poster Presentation

    Simulating physics processes and detector responses is essential in high energy physics and represents significant computing costs. Generative machine learning has been demonstrated to be potentially powerful in accelerating simulations, outperforming traditional fast simulation methods. The efforts have focused primarily on calorimeters.

    This contribution presents the very first studies on...

    Go to contribution page
  535. Rosa Petrini (Universita e INFN, Firenze (IT))
    Track 5 - Event generation and simulation
    Poster Presentation

    Diamond detectors with laser-graphitized electrodes orthogonal to the surface are emerging as fast, full-carbon sensors for applications ranging from High Energy Physics to Nuclear Medicine. Recent advances in low-resistance electrode fabrication have enabled sub-100 ps timing performance. However, accurately modeling signal formation remains challenging due to the intertwined effects of...

    Go to contribution page
  536. Minh-Tuan Pham (University of Wisconsin Madison (US))
    Track 3 - Offline data processing
    Poster Presentation

    Reconstructing particle trajectories is a significant challenge in most particle physics experiments and a major consumer of CPU resources. It can typically be divided into three steps: seeding, track finding, and track fitting. Seeding involves identifying potential trajectory candidates, while track finding entails associating detected hits with the corresponding particle. Finally, track...

    Go to contribution page
  537. Aleksandr Svetlichnyi (INR RAS, MIPT(NRU))
    Track 5 - Event generation and simulation
    Poster Presentation

    Relativistic heavy-ion collisions serve as a primary tool for investigating the fundamental properties of matter under extreme conditions. The theoretical modeling of these interactions relies on various computational models whose predictive power often fluctuates across different kinematic ranges and physical observables. Furthermore, the underlying complex phenomenological chains are...

    Go to contribution page
  538. Yaosong Cheng (Institute of High Energy Physics Chinese Academy of Sciences, IHEP)
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    China's High Energy Photon Source (HEPS) will complete facility construction and commissioning by the end of 2025. Data acquisition and analysis have already begun. The 14 beamlines of the first phase of HEPS will generate approximately 300 PB of raw data annually, with further expansion expected in the future. This not only poses significant challenges for the reliability and read-write...

    Go to contribution page
  539. CMS Collaboration
    Track 4 - Distributed computing
    Poster Presentation

    Communities in sciences such as High Energy Physics and Computational Biology use distributed computing facilities to solve problems that require the execution of computationally intensive algorithms. The Open Science Grid (OSG) enables access to over 100 individual compute clusters spanning the globe for scientists from these disciplines. These sites, primarily at...

    Go to contribution page
  540. Manfred Peter Fackeldey (Princeton University (US))
    Track 9 - Analysis software and workflows
    Poster Presentation

    Modern analyses in high-energy physics (HEP) have high memory requirements due to the sheer volume of data collected in experiments at the Large Hadron Collider (LHC) at CERN.
    Awkward Array recently released a new version of lazy arrays (“virtual arrays”) that mitigates this problem by loading only the columns required for HEP analysis. Nevertheless, these columns can still add up in size,...

    Go to contribution page
  541. CMS Collaboration
    Track 2 - Online and real-time computing
    Poster Presentation

    Our world has witnessed a massive explosion of data and a surge of machine learning (ML) and AI applications. The result is an ever-increasing need for higher throughput and real-time computing capabilities. The Large Hadron Collider (LHC) and its experiments provide the perfect benchmark to bring the recent industry developments and explore beyond-the-state-of-the-art technologies to process...

    Go to contribution page
  542. Rafaella Lenzi Romano (Federal University of Rio de Janeiro (BR))
    Track 6 - Software environment and maintainability
    Poster Presentation

    The ATLAS experiment involves over 6,000 members, including students, physicists, engineers, and researchers. This dynamic CERN environment poses challenges such as information centralisation, communication, and the continuity of workflows. To overcome these challenges, the ATLAS Glance Team has developed and maintained several automated systems that rely on CERN’s Group Management...

    Go to contribution page
  543. David Schultz (University of Wisconsin-Madison)
    Track 4 - Distributed computing
    Poster Presentation

    As part of the IceCube Neutrino Observatory's move to the Pelican Platform for data transfer, our production workflow management tools also needed to be updated. There were two major changes happening at the same time: moving from X.509 certificates to tokens, and gathering the tokens at the initial dataset submission rather than during the job processing. Some significant problems had to be...

    Go to contribution page
  544. Jeremy Wilkinson (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
    Track 8 - Analysis infrastructure, outreach and education
    Poster Presentation

    Jupyter is a powerful tool for data visualisation and interactive analysis with Python, and in particular JupyterHub offers a simplified way for users to run their workflows on dedicated HPC hardware. The use of JupyterHub is already widespread among many research centres and computing clusters. However, many of the existing deployments rely on specialised network setups such as a dedicated...

    Go to contribution page
  545. Pawel Kopciewicz (CERN)
    Track 2 - Online and real-time computing
    Poster Presentation

    The LHCb experiment in Run 3 features a full software trigger: the GPU-based HLT1 with O(100) trigger lines and the CPU-based HLT2 with O(4000). Human control of every aspect of data quality in a complex system of this scale is extremely difficult and requires a high degree of automation. IntelliRTA is a monitoring dashboard that provides a holistic view of the trigger lines and the data...

    Go to contribution page
  546. Caley Luce Yardley (University of Sussex (GB))
    Track 8 - Analysis infrastructure, outreach and education
    Poster Presentation

    In fields of research such as high-energy physics at the Large Hadron Collider (LHC), making the “big data” accessible to the public comes with its own set of challenges; traditional methods of public release put the onus on individuals to first acquire specific coding skills and may assume certain requirements on computing resources are met. This motivates the development of interactive and...

    Go to contribution page
  547. Ryunosuke O'Neil (CERN)
    Track 8 - Analysis infrastructure, outreach and education
    Poster Presentation

    High Energy Physics (HEP) experiments increasingly rely on large volumes of Monte Carlo (MC) simulation data to estimate radiation levels and activation scenarios. Within the LHCb collaboration, we present a new system developed to simplify the management and exploration of such MC simulation outputs as obtained with the FLUKA code: the Analysis platform for Radiation Environment Simulations...

    Go to contribution page
  548. Dr Geonmo Ryu (Korea Institute of Science & Technology Information (KR))
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    In small-scale scientific infrastructures typically consisting of 3–7 nodes, industry-standard orchestrators like Kubernetes often introduce an "operational gap" due to their resource-heavy control planes. Furthermore, traditional overlay networks such as VXLAN introduce significant latency and CPU overhead, which hinders the performance of data-intensive distributed scientific computing....

    Go to contribution page
  549. Mr Zhenyuan Wang (Computing center, Institute of High Energy Physics, CAS, China)
    Track 4 - Distributed computing
    Poster Presentation

    With the escalating processing demands of modern high-energy physics experiments, traditional monitoring tools are faltering under the dual pressures of cumbersome deployment and coarse-grained observability in high-throughput production environments. JobLens is a lightweight, one-click-deployable data collector designed to deliver fine-grained, job-level observability for HEP workloads. Its...

    Go to contribution page
  550. Mr Jiaheng Zou (IHEP, Beijing)
    Track 3 - Offline data processing
    Poster Presentation

    The Jiangmen Underground Neutrino Observatory (JUNO) is a large-scale neutrino experiment with multiple physics goals. After its completion at the end of 2024, commissioning for data taking began, followed by the start of official data taking on August 26, 2025. The raw data acquired by the JUNO DAQ system is stored in a custom binary format. After transmission to the data center, this...

    Go to contribution page
  551. Saptaparna Bhattacharya (Southern Methodist University (US))
    Track 5 - Event generation and simulation
    Poster Presentation

    Fast and reliable event generation can be achieved with GPU-compatible matrix element generators such as Madgraph and Pepper. In this talk, we present the first benchmarking exercise of running these event generators in ATLAS-specific production workflows. The gains are reported as improvements in production times for gridpacks (which contain precomputed matrix elements) as well as in event generation...

    Go to contribution page
  552. Zhijun Li (Sun Yat-Sen University (CN))
    Track 8 - Analysis infrastructure, outreach and education
    Poster Presentation

    In high energy physics experiments, visualization plays a crucial role in detector design, data quality monitoring, offline data processing, and has great potential for improving physics analysis. In addition to traditional physics data analysis based on statistical methods, visualization offers unique intuitive advantages in the search for rare signal events and in reducing background noise....

    Go to contribution page
  553. Huey-Wen Lin
    Track 8 - Analysis infrastructure, outreach and education
    Poster Presentation

    LGT4HEP (High-Energy Physics Computing Traineeship for Lattice Gauge Theory) is a multidisciplinary training initiative designed to prepare the next generation of researchers in computational lattice field theory and high-performance computing. The program emphasizes rigorous coursework, including lattice QCD and advanced computational methods, paired with hands-on experience on...

    Go to contribution page
  554. Daniele Martello (Università del Salento & INFN Lecce)
    Track 3 - Offline data processing
    Poster Presentation

    The Pierre Auger Observatory collects vast amounts of complex spatial-temporal data from extensive air showers induced by ultra-high-energy cosmic rays (UHECRs), i.e., those with energies above 10^18 eV. Determining the mass composition of the primary particle is a key challenge, as direct measurements are impossible and traditional analytical methods struggle with the complexity of shower...

    Go to contribution page
  555. Federico Andrea Corchia (Universita e INFN, Bologna (IT))
    Track 3 - Offline data processing
    Poster Presentation

    Identification (“tagging”) of hadronic jets associated with charm and bottom quarks is crucial for many experimental signatures explored with the ATLAS detector at the LHC. Soft Muon Tagging (SMT) is a tagging technique based on the identification of muons from b/c -> mu + X within hadronic jets, complementary to other jet-based algorithms. With the SMT algorithm, muons can be used as a proxy...

    Go to contribution page
  556. Yana Holoborodko (Princeton University (US))
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    We present a modular alarm and visualization framework designed to detect and interpret network anomalies that lead to performance degradation in WLCG infrastructures. The system consists of two interoperable components: Alarms And Alerts System, a Kubernetes-based backend that ingests perfSONAR measurements and automatically identifies routing changes, performance degradations, and related...

    Go to contribution page
  557. Peidong Yu (IHEP)
    Track 5 - Event generation and simulation
    Poster Presentation

    The Jiangmen Underground Neutrino Observatory (JUNO) is a multi-purpose experiment featuring a 20,000-ton liquid scintillator central detector, a water Cherenkov detector, and a top tracker, primarily designed to determine the neutrino mass ordering. Following the completion of detector construction in late 2024, the detector was successively filled with ultrapure water and then liquid scintillator (LS). As LS...

    Go to contribution page
  558. Ioannis Tsanaktsidis (CERN)
    Track 6 - Software environment and maintainability
    Poster Presentation

    The continuous ingestion of scientific documents from external sources into INSPIREHEP created challenges in scalability, transparency, and long-term maintenance. This contribution describes the migration of our document harvesting and curation pipeline to the open-source workflow orchestrator Apache Airflow. The work involved re-engineering legacy scripts and cron-based tasks into modular...

    Go to contribution page
  559. Leonardo Giannini (Univ. of California San Diego (US))
    Track 3 - Offline data processing
    Poster Presentation

    The mkFit algorithm offers an implementation of the Kalman filter-based track reconstruction algorithm that exploits both thread- and data-level parallelism. mkFit has been adopted by the CMS collaboration as the main track building algorithm for both the Run-3 offline and online track reconstruction, and it has been shown to speed up track building by 3.5x on average, while retaining or improving...

    Go to contribution page
  560. Gaia Grosso (IAIFI, MIT)
    Track 9 - Analysis software and workflows
    Poster Presentation

    Machine-learning-based anomaly detection (AD) offers a promising, model-agnostic alternative to traditional LHC analyses, allowing us to search for many signals at once. Recent AI advances in representation learning motivate the use of neural embeddings to map detector data into low-dimensional latent spaces, preserving critical features (Metzger et al., Phys. Rev. D 112, 072011 (2025))....

    Go to contribution page
  561. Catalin Codreanu (Technical University of Cluj-Napoca (RO)), Cristian Schuszter (CERN)
    Track 6 - Software environment and maintainability
    Poster Presentation

    Modern financial operations in large scientific organizations increasingly rely on sustainable, modular, and well-integrated software ecosystems. Over the past years, the FAP-BC group of CERN has focused on modernizing key financial processes by adopting service-oriented approaches, strengthening system integrations, and reducing long-term maintenance costs.

    This paper presents recent work...

    Go to contribution page
  562. Dr Alex Owen (NetDRIVE Champion, Queen Mary University of London), Dr Sudha Ahuja (Queen Mary University of London)
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    NetDRIVE (NetZero Digital Research Infrastructure Vision and Expertise) [1] is the UK Research and Innovation (UKRI) project developing plans and expertise to tackle NetZero issues around the UK’s government funded research computing or digital research infrastructure (DRI). Following on from the UKRI DRI NetZero Scoping project [2], NetDRIVE is a £4M project spread over the course of 2.5...

    Go to contribution page
  563. James Connaughton (University of Warwick (GB))
    Track 2 - Online and real-time computing
    Poster Presentation

    The LHCb experiment at the LHC employs a fully-software trigger to reconstruct and select events in real time. Key to this approach is the topological beauty (b) trigger, a set of algorithms which select decays of hadrons containing b quarks based on their distinct topology, i.e., highly displaced candidates with a large momentum. For Run 3 of the LHC, these algorithms were reimplemented...

    Go to contribution page
  564. Andrei Berngardt (Tomsk State University)
    Track 5 - Event generation and simulation
    Poster Presentation

    We present a new method for generating neutron cross-section (XS) data sets from evaluated nuclear data (HP) that improves the accuracy of XS approximation in resonance regions while maintaining computational efficiency for HEP applications in Geant4. Our approach supplements the standard XS datasets with additional dedicated resonance (R) files for the low-energy region, which is defined...

    Go to contribution page
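    The piecewise scheme sketched in entry 564 above (a dense, dedicated grid for the resonance region below a cutoff energy, falling back to the standard coarse grid elsewhere) can be illustrated generically. The function and grid names below are hypothetical and do not reflect the actual Geant4 data interface; this is a minimal sketch of the lookup idea only.

```python
import bisect

def make_xs_lookup(coarse_e, coarse_xs, res_e, res_xs, e_cut):
    """Piecewise cross-section lookup: a dense 'resonance' grid below
    e_cut, the standard coarse grid at or above it. Linear interpolation
    on each grid; all grids are assumed sorted in energy."""
    def interp(grid_e, grid_xs, e):
        i = bisect.bisect_right(grid_e, e)
        if i == 0:
            return grid_xs[0]
        if i == len(grid_e):
            return grid_xs[-1]
        e0, e1 = grid_e[i - 1], grid_e[i]
        t = (e - e0) / (e1 - e0)
        return grid_xs[i - 1] * (1 - t) + grid_xs[i] * t

    def xs(e):
        if e < e_cut:
            return interp(res_e, res_xs, e)
        return interp(coarse_e, coarse_xs, e)

    return xs
```

    Here the dense grid resolves a narrow resonance peak that the coarse grid would smear out, while the cutoff energy plays the role of the low-energy region boundary mentioned in the abstract.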
  565. Yuri Smirnov (Northern Illinois University (US))
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    Calibration Operations Manager Bot for ATLAS Tile Calorimeter (COMBAT) is the next generation of the calibration management system, developed for the High-Luminosity LHC era. It combines modern AI techniques with a fully asynchronous, scalable architecture to meet the evolving operational demands of the ATLAS experiment, including the database transition from COOL to CREST for the...

    Go to contribution page
  566. Jessica Prendi (ETH Zurich (CH))
    Track 2 - Online and real-time computing
    Poster Presentation

    The Next-Generation Trigger (NGT) program for the CMS High Level Trigger (HLT) aims at enabling full-rate recording of all the events accepted by the Level-1 Trigger at 750 kHz via a dedicated NGT Scouting stream, performing complete physics event reconstruction with no additional filtering. Reconstructed objects are stored directly in a lightweight NanoAOD format, delivering analysis-ready...

    Go to contribution page
  567. Carla Sophie Rieger (Technische Universitat Munchen (DE))
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    Efficient database operations are crucial for processing inherently structured data. We investigate the transfer of classical database operations to their counterparts on uniform quantum superposition states of quantum data. Such data may originate from future experiments that incorporate quantum sensors and quantum memories, or by using quantum encoded classical data. Since quantum states...

    Go to contribution page
  568. Victor Leopoldo Munoz Flores (Fermi National Accelerator Lab. (US))
    Track 4 - Distributed computing
    Poster Presentation

    The File Transfer Service (FTS3) is a distributed data movement service developed at CERN and widely used to transfer data across the Worldwide LHC Computing Grid (WLCG). At Fermilab, FTS3 supports data transfers for multiple experiments, including Intensity Frontier experiments such as DUNE, enabling reliable data movement between WebDAV endpoints in Europe and the Americas.
    At CHEP 2021, we...

    Go to contribution page
  569. Dr Simon Blyth (IHEP, CAS)
    Track 5 - Event generation and simulation
    Oral Presentation

    Opticks is an open source framework that accelerates Geant4 toolkit based
    detector simulations by offloading the optical photon simulation to the GPU
    using NVIDIA OptiX ray tracing and NVIDIA CUDA computation. Geant4 detector
    geometries are auto-translated into mostly analytic Constructive Solid Geometry
    forms, with only computationally demanding shapes like tori converted...

    Go to contribution page
  570. Daniel Magdalinski (Nikhef)
    Track 2 - Online and real-time computing
    Poster Presentation

    The LHCb experiment operates a full-software trigger comprising two stages, labelled HLT1 and HLT2. The two stages are separated by a disk buffer, which not only allows HLT2 processing to run asynchronously with respect to data taking, but also allows real-time alignment and calibration to be performed prior to HLT2 processing. HLT2 then performs full offline-level reconstruction and...

    Go to contribution page
  571. Alexander Rogovskiy (Rutherford Appleton Laboratory), Jyothish Thomas (STFC)
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    XRootD-Ceph is a storage plugin that allows one to access a Ceph object store via the xroot protocol. At RAL, we use this plugin for our disk storage element (SE). Although it has proven suitable for production-quality high-throughput storage, a few optimizations were needed to ensure optimal performance. In this talk we discuss the evolution of the plugin at RAL.

    Some changes to the plugin were dictated by...

    Go to contribution page
  572. Shawn Gregory Zaleski (Rheinisch Westfaelische Tech. Hoch. (DE))
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    As the Large Hadron Collider (LHC) finishes collecting data in Run 3, future data collection and analysis will require even more data storage and more powerful, efficient dedicated computing resources, since much more data will be collected in future runs.
    From the beginning of LHC operation 15 years ago, the German ATLAS and CMS groups have provided massive dedicated grid...

    Go to contribution page
  573. JIANLI LIU (Computing Center, Institute of High Energy Physics, Chinese Academy of Sciences)
    Track 9 - Analysis software and workflows
    Poster Presentation

    X-ray photon correlation spectroscopy (XPCS) retrieves the nanoscale dynamic behavior of materials by analyzing photon intensity fluctuations in synchrotron X-ray scattering signals. The multitau algorithm calculates delay times across different temporal scales through a hierarchical binning approach, which not only covers a wide temporal range but also controls computational...

    Go to contribution page
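    The hierarchical binning at the heart of the multitau algorithm described in entry 573 can be sketched in plain Python: correlate at short lags on the raw series, then halve the time resolution by averaging neighbouring samples and repeat at longer lags. This toy version uses a simplified normalization and illustrative parameter names; it is not the authors' implementation.

```python
def multitau_autocorr(signal, m=8, levels=3):
    """Multi-tau autocorrelation: at each level, correlate at lags up
    to m on the current (binned) series, then average neighbouring
    samples to halve the time resolution. Returns (delay, g) pairs
    with a simple <I(t)I(t+tau)>/<I>^2 normalization."""
    out = []
    data = list(signal)
    dt = 1  # time step of the current level, in units of the raw sampling
    for level in range(levels):
        n = len(data)
        mean = sum(data) / n
        # the first level covers lags 1..m; deeper levels only add
        # the lags not already covered at finer resolution
        start = 1 if level == 0 else m // 2 + 1
        for k in range(start, m + 1):
            if k >= n:
                break
            c = sum(data[i] * data[i + k] for i in range(n - k)) / (n - k)
            out.append((k * dt, c / (mean * mean)))
        # hierarchical binning: average neighbouring samples
        data = [(data[2 * i] + data[2 * i + 1]) / 2
                for i in range(len(data) // 2)]
        dt *= 2
    return out
```

    Because each level reuses the binned series of the previous one, the lag axis grows roughly logarithmically while the cost per level stays bounded, which is the computational control the abstract refers to.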
  574. Wei Sun
    Track 6 - Software environment and maintainability
    Poster Presentation

    We present a performance‑portable lattice gauge theory simulation library implemented using the Kokkos parallel programming model. The library supports efficient Monte Carlo simulations of SU(N) gauge theories across diverse hardware architectures—including CPUs (via OpenMP and Serial backends), NVIDIA GPUs (CUDA), AMD GPUs (HIP), and Intel GPUs (SYCL)—all from a single source code base. It...

    Go to contribution page
  575. Mohammad Nasir Jan Momed (Deutsches Elektronen-Synchrotron (DE))
    Track 2 - Online and real-time computing
    Poster Presentation

    When the HL-LHC starts a few years from now, the CMS experiment will be challenged with far more complex proton-proton collision events as well as an increased data logging rate. Present projections suggest that the CPU demands for reconstruction and processing will grow beyond the capacity expected from usual technology progress. Therefore, an effort has been started to optimize software to...

    Go to contribution page
  576. Michael Johnson (University of Manchester)
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    Data volumes and rates of research infrastructures will continue to increase in the coming years and will affect how we interact with their final data products. Little of the processed data can be investigated directly, and most of it will be processed automatically with as little user interaction as possible. Capturing all necessary information about such processing ensures reproducibility of the...

    Go to contribution page
  577. Rocky Bala Garg (Stanford University (US))
    Track 3 - Offline data processing
    Poster Presentation

    The optimization of tracking parameters in particle track reconstruction is a high-dimensional, non-convex problem with significant impact on tracking efficiency, resolution, and computational performance. As detector complexity and pileup increase, conventional heuristic and local optimization methods face scalability limitations. In this work, we will investigate quantum optimization...

    Go to contribution page
  578. Parichehr Kangazian Kangazi (The Iranian Ministry of Science, Research and Technology (IR))
    Track 3 - Offline data processing
    Poster Presentation

    Identifying jets originating from the decay of highly boosted heavy particles in colliders plays a crucial
    role in uncovering potential signs of physics beyond the Standard Model. Despite significant progress in
    jet-origin classification algorithms—particularly graph neural networks—the rapidly increasing volume
    of collider data and the demand for faster and more efficient processing...

    Go to contribution page
  579. Supanut Thanasilp (Chulalongkorn University)
    Track 3 - Offline data processing
    Poster Presentation

    Quantum systems are well known to create non-classical patterns. The prospect that they could also be used to recognize highly complex patterns hidden in data is tantalizing, and has given rise to the young interdisciplinary field of quantum machine learning (QML). Nevertheless, while a quantum advantage in data analysis can in principle be achieved thanks to the exponentially large Hilbert space,...

    Go to contribution page
  580. Xuantong Zhang (Institute of High Energy Physics, Chinese Academy of Sciences (CN))
    Track 8 - Analysis infrastructure, outreach and education
    Poster Presentation

    The Interactive Analysis Workbench (INK) is a web-based, open-source interactive computing platform developed at the Institute of High Energy Physics, Chinese Academy of Sciences (IHEP, CAS), to address the growing demands of high-energy physics users for interactive data processing, efficient data access, and collaborative analysis workflows. INK enables users to interactively access IHEP...

    Go to contribution page
  581. Valentina Camagni (Università degli Studi e INFN Milano (IT))
    Track 2 - Online and real-time computing
    Poster Presentation

    The CMS Phase-2 Level-1 Trigger (L1T) Scouting program introduces real-time software reconstruction at the full 40 MHz rate, enabling physics analyses directly at trigger level. One of the most promising applications is the reconstruction of low-transverse-momentum (soft) hadronic tau leptons, which are essential for searches for low-mass scalars ϕ → ττ but are poorly reconstructed by existing...

    Go to contribution page
  582. Bostjan Macek (Jozef Stefan Institute (SI))
    Track 2 - Online and real-time computing
    Poster Presentation

    Future high-energy physics (HEP) experiments will operate under extreme real-time constraints, where online filtering and trigger decisions increasingly define the ultimate physics reach. Although machine learning is now widely used in online systems, current deployments are almost exclusively limited to inference with offline-trained models. In this contribution, we investigate a complementary and...

    Go to contribution page
  583. Daniel Nieto (IPARCOS-UCM)
    Track 2 - Online and real-time computing
    Poster Presentation

    The Cherenkov Telescope Array Observatory (CTAO) represents the next generation of ground-based gamma-ray telescopes, designed to probe the very-high-energy (VHE) sky above 20 GeV with unprecedented sensitivity. The northern array (CTAO-North) will be composed of an ensemble of Medium-Sized Telescopes (MSTs) and four Large-Sized Telescopes (LSTs), the latter designed to detect the...

    Go to contribution page
  584. Aashay Arora (Univ. of California San Diego (US))
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    The increasing adoption of columnar data formats and lightweight event representations, such as CMS NanoAOD, has made remote data access a significant factor in the performance of physics analysis workflows. In this context, understanding the performance characteristics of different data serving technologies under realistic network conditions is critical.

    This work presents a comparative...

    Go to contribution page
  585. Ben Jones (CERN)
    Track 4 - Distributed computing
    Poster Presentation

    The WLCG Tier-0 Accounting service provides accounting information for scientific computing resources at CERN (Batch, HPC, and BOINC). The service delivers essential information on CPU and walltime usage, which plays a key role in decision-making and planning processes for CERN resource managers across experiments and departments. It also supplies monthly usage data to the WLCG Accounting...

    Go to contribution page
  586. Tibor Simko (CERN)
    Track 9 - Analysis software and workflows
    Poster Presentation

    Dask is a Python library for scaling Python analysis code from local computers to large data centre clusters. Dask is becoming increasingly popular in the astronomy and particle physics communities for carrying out data analyses. We describe how we extended the REANA reproducible analysis platform to support Dask workloads. Special attention was paid to respecting the Dask version requested by the analyst,...

    Go to contribution page
  587. 闫明宇 myyan
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    To address the labour-intensive nature of traditional penetration testing, its fragmented tooling, and its cumbersome processes, this paper aims to design an automated penetration testing system that reduces the manpower and time expended during testing. We propose an overall design framework for such a system, which comprises two components: an...

    Go to contribution page
  588. Kasidit Srimahajariyapong (Chulalongkorn University)
    Track 3 - Offline data processing
    Poster Presentation

    The rapid proliferation of quantum machine learning (QML) has highlighted critical bottlenecks in conventional Variational Quantum Algorithms (VQAs), particularly regarding trainability, scalability, and the absence of rigorous optimal solution guarantees. These challenges motivate us to search for alternative optimization paradigms. In this work, we introduce the Double-Bracket Quantum...

    Go to contribution page
  589. Andrea Rendina
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    INFN-CNAF is the national computing center of the INFN (National Institute for Nuclear Physics), dedicated to research and development in information technologies for subnuclear, nuclear, and astroparticle physics. CNAF hosts the largest INFN data center and operates a WLCG Tier-1 site.

    For more than 15 years, tape data management at CNAF has been handled using the Grid Enabled Mass Storage...

    Go to contribution page
  590. AVIK DE (universiti malaya)
    Track 9 - Analysis software and workflows
    Poster Presentation

    Scalar-Tensor Extension of Non-Metricity Gravity

    We present a computation-first study of scalar-tensor extensions of symmetric teleparallel...

    Go to contribution page
  591. Mateusz Jakub Fila (CERN)
    Track 2 - Online and real-time computing
    Poster Presentation

    The Next Generation Trigger (NGT) project at CERN aims to extract more physics information from the High Luminosity LHC data. To achieve this, GPUs and other accelerators are being increasingly adopted in LHC experiments, running both procedural code and AI/ML inferences.

    As a result, formerly CPU-only modules in the event reconstruction frameworks now interleave their computations with...

    Go to contribution page
  592. Frank Ellinghaus (Bergische Universitaet Wuppertal (DE))
    Track 6 - Software environment and maintainability
    Poster Presentation

    Monte-Carlo (MC) simulations play a key role in high energy physics. MC generators and their interfaces to the experiment-specific software framework evolve continuously. Thus, a periodic validation is indispensable for obtaining reliable and reproducible physics simulations. For that purpose, ATLAS has developed a central semi-automated validation system: PMG Architecture for Validating Evgen...

    Go to contribution page
  593. John Winnicki
    Track 9 - Analysis software and workflows
    Poster Presentation

    LUX-ZEPLIN (LZ) is a dark matter direct-detection experiment using a dual-phase xenon time projection chamber. The LZ experiment has set world-leading limits on WIMP-nucleon interactions. At low energies, backgrounds built from the spurious pairing of unrelated charge and light signals, also known as accidentals, pose a significant analysis challenge. In this work, we study modern unsupervised...

    Go to contribution page
  594. Nora Bluhme (Goethe University Frankfurt (DE))
    Track 3 - Offline data processing
    Poster Presentation

    The Compressed Baryonic Matter (CBM) experiment at the upcoming Facility for Antiproton and Ion Research (FAIR) will investigate heavy-ion collisions at interaction rates of up to $10^7\, \text{s}^{-1}$.

    To fully exploit the intrinsic precision of the tracking detectors, an accurate alignment of all sensor elements is essential. Track-based software alignment determines small but critical...

    Go to contribution page
  595. Gaia Grosso (IAIFI, MIT)
    Track 9 - Analysis software and workflows
    Poster Presentation

    Modern machine learning has revolutionized our ability to extract rich and versatile data representations across scientific domains. However, the statistical properties of these representations are often poorly controlled, challenging the design of robust downstream anomaly detection (AD) methods.
    We identify three principled desiderata for anomaly detection in latent spaces under minimal...

    Go to contribution page
  596. Paul James Laycock (Universite de Geneve (CH))
    Track 8 - Analysis infrastructure, outreach and education
    Poster Presentation

    The first direct observation of Gravitational Waves (GWs) in 2015, produced by the collision of black holes, instigated a demand for open access to GW data. The Einstein Telescope will increase the detection rate of GWs by a factor of a thousand compared to current detectors, producing information-rich data containing a wealth of astrophysical signals. This surge in information density,...

    Go to contribution page
  597. Matthias Schott (CERN / University of Mainz)
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    Training state-of-the-art neural networks for high-energy physics (HEP) tasks typically requires massive, fully simulated datasets—whose generation is both computationally expensive and experiment-specific. In this work, we demonstrate that this dependence on large-scale full simulations can be drastically reduced by leveraging pretrained models trained on fast-simulation data. These...

    Go to contribution page
  598. Ilias Tsaklidis (University of Bonn)
    Track 9 - Analysis software and workflows
    Poster Presentation

    SysVar is a Python package that provides an end-to-end solution for the treatment and propagation of systematic uncertainties in analyses relying on templates generated from simulated data.

    Propagating systematic uncertainties from correction weights into templates while preserving correlations in the signal extraction variables becomes increasingly challenging as analyses scale in size....

    Go to contribution page
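    The template-reweighting pattern described in entry 598 (deriving systematically varied templates from the same simulated events by scaling the nominal per-event weights, so that correlations across bins are preserved) can be sketched generically. The function names below are illustrative and do not reflect SysVar's actual API.

```python
def fill_template(values, weights, edges):
    """Weighted histogram: one template from one set of event weights."""
    hist = [0.0] * (len(edges) - 1)
    for v, w in zip(values, weights):
        for b in range(len(edges) - 1):
            if edges[b] <= v < edges[b + 1]:
                hist[b] += w
                break
    return hist

def varied_templates(values, nominal_w, variations, edges):
    """Build nominal and varied templates from the SAME events: each
    systematic variation multiplies the nominal event weight, so the
    bin-to-bin correlations induced by shared events are preserved."""
    templates = {"nominal": fill_template(values, nominal_w, edges)}
    for name, factors in variations.items():
        w = [wn * f for wn, f in zip(nominal_w, factors)]
        templates[name] = fill_template(values, w, edges)
    return templates
```

    Because the nominal and varied templates are filled from identical events, any downstream fit sees fully correlated statistical fluctuations between them, which is the property that becomes hard to maintain by hand as analyses scale.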
  599. Hao Hu (Institute of High Energy of Physics)
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    China’s High Energy Photon Source (HEPS) is the country’s first national high-energy synchrotron radiation light source and one of the world’s brightest fourth-generation synchrotron radiation facilities. It began operation and user experiments at the end of 2025.
    The 14 beamlines of HEPS phase I are projected to produce more than 300 PB of raw data annually. Efficiently storing,...

    Go to contribution page
  600. Stefano Dal Pra (INFN)
    Track 8 - Analysis infrastructure, outreach and education
    Poster Presentation

    The Open Access Repository, active since 2020, is the official INFN archive to host its research outputs according to FAIR principles. We describe its architectural and functional evolution, marked by the migration from Invenio v3 to the high-availability deployment based on Invenio RDM. Several technical issues due to the large "version jump" between the source and target platforms have been...

    Go to contribution page
  601. Dr Alexandre Camsonne
    Track 2 - Online and real-time computing
    Poster Presentation

    The Solenoidal Large Intensity Device (SoLID) at Jefferson Laboratory (JLab) is a large acceptance detector designed to handle the high luminosity available at JLab. I will present the plans for the baseline triggered data acquisition system for the two main configurations and also discuss a streaming readout option.

    Go to contribution page
  602. Dr Danila Oleynik (Joint Institute for Nuclear Research (RU))
    Track 4 - Distributed computing
    Poster Presentation

    The Spin Physics Detector (SPD) collaboration is building a versatile detector at the second interaction point of the NICA (Nuclotron-based Ion Collider fAcility) complex. As the detector's development progresses and the physics research program evolves, the demands for advanced data processing capabilities increase.
    A defining feature of the facility is its triggerless (free-run) Data...

    Go to contribution page
  603. Yisheng Fu (Chinese Academy of Sciences (CN))
    Track 5 - Event generation and simulation
    Poster Presentation

    The LHCb experiment is planning a second major upgrade (Upgrade II) in the 2030s, with the goal of increasing the instantaneous luminosity to $1.0\times 10^{34}\,\text{cm}^{-2}\text{s}^{-1}$. This upgrade aims to enhance the study of heavy flavor physics and to search for potential signals of new physics in the beauty and charm quark sectors. To operate under the demanding conditions of Upgrade II—characterized by higher...

    Go to contribution page
  604. Yuri Smirnov (Northern Illinois University (US))
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    The ATLAS TileCalibWeb Robot application is the core tool of the Tile Calorimeter and the main interface for preparing and recording conditions and calibration data in the Online and Offline ORACLE databases used daily by on-duty data quality control specialists and experts.
    During LHC Run 3, TileCalibWeb Robot was significantly improved with numerous changes. These enhancements...

    Go to contribution page
  605. saurav mittal
    Track 9 - Analysis software and workflows
    Poster Presentation

    We present a topology-informed approach for classifying particle jets using persistent homology, a framework that captures the structural properties of point clouds. Particle jets produced in proton-proton collisions consist of cascades of particles originating from a common hard interaction. Each jet constituent is represented as a point in a three-dimensional feature space defined by the...

    Go to contribution page
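    As a minimal illustration of the persistent homology mentioned in entry 605, the 0-dimensional barcode of a point cloud's distance filtration can be computed with a union-find over edges sorted by length: every constituent is born at scale zero, and a connected component dies at the length of the edge that merges it. This stdlib sketch covers only H0 and is a simplification, not the contribution's actual pipeline.

```python
from itertools import combinations
from math import dist

def h0_barcode(points):
    """0-dimensional persistence of the Vietoris-Rips filtration.
    Returns the finite death scales in increasing order; there are
    len(points) - 1 of them, since one component lives forever."""
    parent = list(range(len(points)))

    def find(x):
        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # all pairwise edges, sorted by Euclidean length
    edges = sorted(
        (dist(points[i], points[j]), i, j)
        for i, j in combinations(range(len(points)), 2)
    )
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # this edge merges two components
            parent[ri] = rj
            deaths.append(d)  # one bar dies at this scale
    return deaths
```

    For jet constituents embedded in a three-dimensional feature space as described above, the resulting bar lengths summarize how clustered the constituents are across scales, independent of their ordering.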
  606. Gianluca Sabella (University Federico II and INFN, Naples (IT))
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    The ICSC initiative (Italian National Centre for High-Performance Computing, Big Data, and Quantum Computing) is creating a flexible cloud platform to manage the escalating computational requirements of the High-Luminosity Large Hadron Collider (HL-LHC) and future collider projects. This approach leverages Kubernetes for orchestration and containerized deployments to streamline access to...

    Go to contribution page
  607. Dr Ani Fox Bochenkov (CIQ)
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    Increasingly intensive AI and simulation workloads are driving thermal stress across large-scale HPC environments. As compute centres prepare for the next performance phase, conventional optimisation practices no longer align with ESG targets or hardware lifecycle requirements. This contribution presents a proven infrastructure-level methodology for energy-aware runtime orchestration that...

    Go to contribution page
  608. Oliver Lantwin (Universitaet Siegen (DE))
    Track 5 - Event generation and simulation
    Oral Presentation

    The SHiP experiment will search for new physics at the intensity frontier, particularly for feebly interacting particles. Full simulation of the signal and background is crucial to reach the planned sensitivity and to refine the subsystem designs for their TDRs. Besides standard event generators and Geant4, custom approaches are used for the efficient simulation of the thick target and...

    Go to contribution page
  609. Diego Ciangottini (INFN, Perugia (IT))
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    For more than 20 years, the Italian National Institute for Nuclear Physics (INFN) has operated the largest Italian distributed scientific computing infrastructure: the Tier-1 at Bologna-CNAF and the 9 Tier-2 centres provide computing and storage resources to support more than 100 scientific collaborations.
    In recent years this computing infrastructure has been expanded and modernized, also...

    Go to contribution page
  610. Dr Alexey Boldyrev
    Track 5 - Event generation and simulation
    Poster Presentation

    Detector response simulation is a computationally expensive step in the Monte Carlo production chain for High Energy Physics experiments. For the MPD experiment at NICA (JINR), we developed a method to accelerate the simulation of the Time Projection Chamber (TPC) response using a Generative Adversarial Network (GAN). Trained on data from standard GEANT4-based simulations, the GAN replaces...

    Go to contribution page
  611. Mr Jiajv Wang, Prof. Linghui Wu
    Track 3 - Offline data processing
    Poster Presentation

    An upgrade of the inner tracker for the BESIII experiment was completed in 2024. A three-layer Cylindrical GEM (CGEM) detector was installed in the BESIII detector, replacing the original inner drift chamber. For detector commissioning and alignment, cosmic-ray data were taken both with and without a magnetic field. A track reconstruction algorithm combining the CGEM inner tracker (CGEM-IT)...

    Go to contribution page
  612. Angela Maria Burger (Centre National de la Recherche Scientifique (FR))
    Track 3 - Offline data processing
    Poster Presentation

    Transformer architectures have rapidly become the state-of-the-art approach for machine-learning models across many domains in science, offering unprecedented performance on complex, high-dimensional tasks. Their adoption within the ATLAS experiment, starting with their usage for flavour tagging, has opened new opportunities, but also introduced substantial challenges regarding large-scale...

    Go to contribution page
  613. Woojin Jang (University of Seoul, Department of Physics (KR))
    Track 9 - Analysis software and workflows
    Poster Presentation

    This study explores the feasibility of directly determining the CKM matrix element $|V_{ts}|$ through the rare top quark decay $t \to sW$ in the semileptonic final state of $t\bar{t}$ production. To overcome the significant background challenges inherent in this channel, we introduce a Transformer-based multi-domain $t\bar{t} \to sWbW$ signal event classifier that integrates both jet...

    Go to contribution page
  614. CMS Collaboration
    Track 2 - Online and real-time computing
    Poster Presentation

    The High-Level Trigger (HLT) of the Compact Muon Solenoid (CMS) selects event data in real time, reducing the data rate from hundreds of kHz to a few kHz for offline storage. With the upcoming Phase-2 upgrade of the CMS experiment, data volumes are expected to increase substantially, making efficient, lossless compression essential for sustainable storage and processing.

    Recent work has shown...

    Go to contribution page
  615. Mr Suwannachad Suwannajitt (Chulalongkorn University)
    Track 3 - Offline data processing
    Poster Presentation

    Quantum Imaginary Time Evolution (QITE) has recently received increasing attention as a pathway for ground state preparation on quantum hardware. However, the efficiency of this approach is frequently compromised by energy plateau, dynamical regimes characterized by vanishing energy reduction where the system stagnates near some metastable states. In this work, we dissect the anatomy of these...

    Go to contribution page
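    Plain (non-variational) imaginary-time evolution on a toy two-level Hamiltonian illustrates the energy relaxation that QITE, discussed in entry 615 above, approximates on quantum hardware. This is a generic textbook construction with illustrative names and says nothing about the contribution's plateau analysis itself.

```python
def imaginary_time_evolution(h, psi, dt=0.05, steps=200):
    """First-order imaginary-time step psi <- (1 - dt*H) psi followed
    by renormalization, for a real symmetric 2x2 Hamiltonian h.
    Returns the energy history <psi|H|psi>, which relaxes toward the
    ground-state energy."""
    def matvec(m, v):
        return [m[0][0] * v[0] + m[0][1] * v[1],
                m[1][0] * v[0] + m[1][1] * v[1]]

    def energy(v):
        hv = matvec(h, v)
        return v[0] * hv[0] + v[1] * hv[1]

    history = []
    for _ in range(steps):
        history.append(energy(psi))
        hp = matvec(h, psi)
        # Euler discretization of d(psi)/dtau = -H psi
        psi = [psi[0] - dt * hp[0], psi[1] - dt * hp[1]]
        norm = (psi[0] ** 2 + psi[1] ** 2) ** 0.5
        psi = [psi[0] / norm, psi[1] / norm]
    return history
```

    For $H = \begin{pmatrix} 1 & 0.5 \\ 0.5 & -1 \end{pmatrix}$ the energy decays from 1 toward the ground-state value $-\sqrt{1.25}$; when the initial state has little overlap with low-lying states, the same iteration exhibits the near-flat stretches that the abstract calls energy plateaus.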
  616. Andy MORRIS
    Track 6 - Software environment and maintainability
    Poster Presentation

    Since 2015, LHCb’s central onboarding resource for new collaborators has been the Starterkit, a set of self-study lessons that also form the basis of an annual in-person workshop in Geneva. Ahead of Run 3 (2022–2026), a new version of the Starterkit was
    developed to accompany the Upgrade I software stack, with improved testing and updated exercises now used in the workshop.
    However,...

    Go to contribution page
  617. Tobias Fitschen (The University of Manchester (GB))
    Track 2 - Online and real-time computing
    Poster Presentation

    Trigger bandwidth limitations constrain physics analyses that target low-mass resonances, where high-rate data collection is essential. To circumvent this limitation Trigger-Level Analysis (TLA) can be applied. A recent publication by the ATLAS experiment demonstrated this approach during LHC Run 2 by processing a massive dataset of over 60 billion events, more than twice the number of fully...

    Go to contribution page
  618. Matthias Schott (CERN / University of Mainz)
    Track 9 - Analysis software and workflows
    Poster Presentation

    Neural networks (NNs) are inherently multidimensional classifiers that learn complex, non-linear relationships among input observables. While their flexibility enables unprecedented performance in high-energy physics (HEP) analyses, it also makes them sensitive to small variations in their inputs. Consequently, the propagation and estimation of systematic uncertainties in NN-based models...

    Go to contribution page
  619. Mwai Karimi
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    Modern data ecosystems are increasingly heterogeneous, with data and metadata distributed across multiple databases, file systems, and external services. This fragmentation creates challenges for organising data, managing systems, and enabling efficient access. This poster presents an approach for unifying access to distributed data sources using PostgreSQL Foreign Data Wrappers (FDWs)....

    Go to contribution page
  620. Dr Marcus Ebert (University of Victoria)
    Track 1 - Data and metadata organization, management and access
    Poster Presentation

    We present an update on the usage of the Canadian Belle II raw data storage and computing infrastructure. The raw data storage system is ZFS-based, with data access managed by XRootD and without a WLCG-accessible tape system. The system has now been in production for two years, and we will present our experience with it and how it was extended beyond its use as a raw data...

    Go to contribution page
  621. Prof. Qingmin Zhang
    Track 5 - Event generation and simulation
    Poster Presentation

    Geant4 is an object-oriented C++ toolkit widely used for simulating the passage of particles through matter, especially in nuclear physics research. However, its application requires a high level of programming proficiency, which often hinders broader adoption in scientific work. To lower the technical barriers associated with Geant4, we previously introduced a wizard-style GUI and modular...

    Go to contribution page
  622. Dmitriy Maximov
    Track 2 - Online and real-time computing
    Poster Presentation

    The KEDR experiment is ongoing at the VEPP-4M $e^{+}e^{-}$ collider at Budker INP in Novosibirsk. The collider’s center-of-mass energy range covers a wide spectrum from 2 to 11 GeV. Most of the statistics to date were taken at the lower end of the energy range, around the charmonia region. Activities at higher energies, up to the bottomonia, lead to a significant increase in the event recording...

    Go to contribution page
  623. Mingrun Li
    Track 9 - Analysis software and workflows
    Poster Presentation

    Uproot-custom is an extension of the popular Python ROOT-IO library Uproot that offers a mechanism to enhance TTree data reading capabilities without relying on ROOT. It provides native support for reading more complex TTree data formats (such as deeply nested containers and memberwise-stored data), and a registration mechanism that allows users to customize reading logic to meet their...

    Go to contribution page
  624. Michal Svatos (Czech Academy of Sciences (CZ))
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    The distributed computing system of the ATLAS experiment at the Large Hadron Collider (LHC) uses resources from several EuroHPC facilities through both allocated and opportunistic access. HyperQueue, a meta-scheduler developed at IT4Innovations, the Czech National Supercomputing Center, enables the experiment's workload to be adapted to the many-core architecture typical of modern HPC systems....

    Go to contribution page
  625. Francisco Borges Aurindo Barros (CERN)
    Track 7 - Computing infrastructure and sustainability
    Poster Presentation

    For over a decade, content management systems at CERN have been served by the on-premise Drupal service. In response to the high maintenance requirements of Drupal, the growing adoption of WordPress and the need to improve user experience, site management and governance, the WordPress service was established. The WordPress service provides a managed platform designed to empower and support the...

    Go to contribution page