-
Thomas Owen James (CERN) | Track 2 - Online and real-time computing | Poster Presentation
The Compact Muon Solenoid (CMS) experiment at the CERN LHC has traditionally relied on a highly selective Level-1 trigger to reduce the 40 MHz LHC collision rate to a rate more manageable for data-reading and recording. This selection inherently limits access to event types with large irreducible backgrounds or with unconventional signatures. During LHC Run 3, CMS deployed a novel 40 MHz data...
Go to contribution page -
Kai Yi (Nanjing Normal University (CN)) | Track 7 - Computing infrastructure and sustainability | Poster Presentation
For many university-based HEP groups, the path to becoming a production-ready WLCG Tier-3 center can seem daunting, often constrained by limited budgets, small teams, and a steep learning curve for grid middleware. This poster presents the evolution of the NNU HEP Farm not just as a site report, but as a practical case study and blueprint for other groups embarking on a similar journey.
Go to contribution page
We... -
Sambit Sarkar (Tata Institute of Fundamental Research Mumbai) | Track 3 - Offline data processing | Poster Presentation
The GRAPES-3 experiment aims to study high-energy cosmic rays through their production mechanisms, propagation, and sources. Located in Ooty at an altitude of 2200 m, it spans an area of 25,000 m$^2$ and comprises about 400 plastic scintillator detectors (SDs) arranged with 8 m spacing to measure the charged component of extensive air showers, along with a dedicated muon detector consisting of...
Go to contribution page -
Sandro Christian Wenzel (CERN) | Track 4 - Distributed computing | Poster Presentation
ALICE has undergone a substantial software transformation from Run 2 to Run 3, embracing a message-passing, distributed-computing paradigm that unifies online and offline processing. Building on this shift, we present the Monte Carlo (MC) production framework developed within the O2DPG environment, which orchestrates full Run 3 and Run 4 simulation workflows across the heterogeneous computing...
Go to contribution page -
Dr Brij Kishor Jashal (Rutherford Appleton Laboratory) | Track 4 - Distributed computing | Poster Presentation
From Probes to Policy: Harmonising ATLAS Resource Health Signals
The operational status of WLCG resources in ATLAS is determined through several parallel mechanisms: probe results and declared downtimes (Switcher), persistent failures in functional or performance tests (HammerCloud), and data transfer or storage exclusion conditions managed by distributed data management (DDM). ATLAS...
Go to contribution page -
Javier Prado Pico (Universidad de Oviedo (ES)) | Track 2 - Online and real-time computing | Poster Presentation
In preparation for the High-Luminosity LHC, the CMS experiment is upgrading its Level-1 Trigger system to handle increased luminosity and pile-up. The new trigger system opens up a plethora of possibilities to detect non-conventional signatures such as those arising from long-lived particles (LLPs). In particular, such LLPs may decay far away from the interaction point into hadrons on...
Go to contribution page -
Maria Mateea Popescu (National University of Science and Technology POLITEHNICA Bucharest (RO)) | Track 4 - Distributed computing | Poster Presentation
Authors: Maria-Mateea Popescu (CERN, maria.mateea.popescu@cern.ch),
Costin Grigoraș (CERN, costin.grigoras@cern.ch),
Cristian Mărgineanu (National University of Science and Technology Politehnica Bucharest, cristian.margineanu@stud.acs.upb.ro)
on behalf of the ALICE collaboration
MonALISA serves as the monitoring backbone for the distributed computing infrastructure of the ALICE...
Go to contribution page -
Dr Hao-Kai Sun (IHEP, CAS) | Track 9 - Analysis software and workflows | Poster Presentation
With the advent of 4th-generation photon sources, the diversity and volume of data from multi-disciplinary beamlines present practical challenges for efficient analysis. This presentation introduces a modular workflow management system designed to streamline data processing pipelines. Our work focuses on: (1) a hierarchical encapsulation mechanism to help beamline scientists and users share...
Go to contribution page -
Ben Jones (CERN) | Track 7 - Computing infrastructure and sustainability | Poster Presentation
CERN manages over 10,000 Windows devices – from simulation-heavy workstations to security-hardened desktops and servers critical for accelerator controls. For two decades, this was done with CMF, the CERN-built device management solution. Today, we are gradually moving to mainstream solutions such as Microsoft Intune and Configuration Manager, aiming to leverage industry standard off-the-shelf...
Go to contribution page -
Savva Savenkov (INR RAS, MIPT(NRU)) | Track 6 - Software environment and maintainability | Poster Presentation
The integration of diverse high-energy collision Monte Carlo models into a unified simulation workflow is usually time-consuming. This is primarily because these models are conventionally developed as monolithic applications with heterogeneous data input and output formats. As a result, a need for multiple converters and auxiliary scripts arises, which not only impedes the modeling process but...
Go to contribution page -
Daniele Spiga, Diego Ciangottini (INFN, Perugia (IT)), Francesco Brivio (Universita & INFN, Milano-Bicocca (IT)), Giulio Bianchini (Universita e INFN, Perugia (IT)), Massimo Sgaravatto (Universita e INFN, Padova (IT)), Mirko Mariotti (Universita e INFN, Perugia (IT)), Paolo Dini, Simone Gennai (Universita & INFN, Milano-Bicocca (IT)) | Track 7 - Computing infrastructure and sustainability | Poster Presentation
In the last years, INFN has consolidated and expanded its distributed computing infrastructure toward heterogeneous hardware systems, also thanks to the ICSC and TeraBIT projects, funded in the context of the Italian National Recovery and Resilience Plan. Among the most innovative components of the national federation are the specialized hardware clusters known as HPC Bubbles, in particular...
Go to contribution page -
Lauren Meryl Hay (SUNY Buffalo), Rishabh Jain (Brown University (US)) | Track 5 - Event generation and simulation | Poster Presentation
Validating that a full phase-space reweighting of a Monte Carlo prediction preserves the physical fidelity of the underlying model can be challenging, and often relies on comparisons to marginalized 1D histograms of kinematic variables that can mask subtle biases of the original high-dimensional unbinned prediction. In this poster, we present a novel, unbinned approach to comparing the...
Go to contribution page -
Oxana Smirnova (Lund University) | Track 9 - Analysis software and workflows | Poster Presentation
We present a prototype implementation of a particle physics analysis workflow using Snakemake for an ATLAS anomaly detection search. Snakemake provides a flexible and scalable workflow for managing thousands of jobs with complex dependencies, supporting execution both locally and across different HPC environments. The workflow cleanly separates small-scale tasks, such as plotting, histogram...
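To make the pattern concrete, a minimal Snakefile of the kind described might look as follows. This is a hedged sketch, not the actual ATLAS workflow: Snakemake's rule syntax is a Python superset, and all sample, file, and script names below are hypothetical.

```python
# Hypothetical Snakefile sketch; Snakemake derives the job graph and the
# available parallelism from the declared inputs and outputs of each rule.
SAMPLES = ["data", "background", "signal"]

rule all:
    input:
        "plots/anomaly_score.pdf"

rule fill_histograms:
    input:
        "ntuples/{sample}.root"
    output:
        "hists/{sample}.pkl"
    shell:
        "python fill_histograms.py {input} {output}"

rule plot:
    input:
        expand("hists/{sample}.pkl", sample=SAMPLES)
    output:
        "plots/anomaly_score.pdf"
    shell:
        "python make_plot.py {input} {output}"
```

Running `snakemake --cores 8` (or pointing Snakemake at an HPC executor) then schedules the per-sample jobs independently, which is what lets the same workflow scale from a laptop to thousands of jobs.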
Go to contribution page -
Dr Wenshuai Wang (Institute of High Energy Physics) | Track 4 - Distributed computing | Poster Presentation
High-energy physics experiments typically involve a large number of computing jobs and generate massive volumes of data. When users submit numerous jobs and produce substantial datasets, they often face challenges such as monitoring the status of multiple jobs and conducting statistical analysis on the data. To address these issues, we have developed a web-based job and data management...
Go to contribution page -
Minh-Tuan Pham (University of Wisconsin Madison (US)) | Track 3 - Offline data processing | Poster Presentation
Charged-particle track reconstruction is an important part of modern collider experiments such as ATLAS and CMS that will face challenging conditions in the future High Luminosity phase of the LHC due to high pile-up. The increasing time and compute costs associated with the current tracking algorithm have spurred the development of machine learning (ML) alternatives to high degrees of...
Go to contribution page -
Anwar Ibrahim | Track 5 - Event generation and simulation | Poster Presentation
Detailed simulation of particle interactions in calorimeters represents a major computational bottleneck for high-energy physics experiments, particularly in the upcoming High-Luminosity LHC (HL-LHC) era. While Generative Adversarial Networks (e.g., CaloGAN) have demonstrated the potential of ML-based fast simulation, they often suffer from mode collapse and limited precision in modeling...
Go to contribution page -
Ricardo Rocha (CERN) | Track 7 - Computing infrastructure and sustainability | Poster Presentation
The increasing use of GPUs and accelerator-based computing for simulation, reconstruction and machine learning has significantly expanded scientific capabilities in HEP. However, these workloads also introduce new challenges in terms of energy consumption, operational cost and overall carbon footprint, especially as computing demand grows with future experiments.
This contribution presents...
Go to contribution page -
David Schultz (University of Wisconsin-Madison) | Track 1 - Data and metadata organization, management and access | Poster Presentation
The IceCube Neutrino Observatory has accumulated over 15 years of science data, with more years to come. This data has previously been archived in a distributed setup according to accessibility needs and processing level. Trigger-level data is stored at NERSC’s tape system for “online” storage and on physical hard drives kept on shelves in a climate-controlled room for “offline” storage at...
Go to contribution page -
Stefan Krischer (RWTH Aachen University) | Track 7 - Computing infrastructure and sustainability | Poster Presentation
As global greenhouse gas emissions continue to rise, a significant share originates from growing resource consumption in research environments. Beyond energy use, this includes human resources, infrastructure, equipment, and material life cycles. In research on Universe and Matter, the increasing reliance on large-scale infrastructures and complex digital workflows further amplifies this...
Go to contribution page -
Pawel Kopciewicz (CERN) | Track 6 - Software environment and maintainability | Poster Presentation
We present a suite of applications for an agentic chatbot to enhance workflows in the LHCb Real-Time Analysis (RTA). The first presented use case allows experiment operators to request, via natural language on the Mattermost platform, the automated generation of monitoring plots—such as trigger rate or detector temperature versus time—from live or historical subsystem data. This functionality...
Go to contribution page -
Dr Peng Hu (Institute of High Energy Physics, Chinese Academy of Sciences) | Track 1 - Data and metadata organization, management and access | Poster Presentation
In large-scale scientific research, experimental data faces high acquisition costs and a shortage of high-quality data, while a significant amount of critical data is scattered in unstructured forms across various scientific literature. To address this issue, this study proposes an artificial intelligence framework for constructing high-quality knowledge bases from literature corpora and its...
Go to contribution page -
Alexey Rybalchenko (G) | Track 6 - Software environment and maintainability | Poster Presentation
Large Language Models (LLMs) are transforming software development and data analysis workflows in many fields, including nuclear and particle physics experiments.
However, deploying LLMs in production research environments requires careful attention to scalability, security, and resource efficiency.
This work presents a versatile production-grade LLM inference and document intelligence...
Go to contribution page -
Pierfrancesco Cifra (CERN) | Track 7 - Computing infrastructure and sustainability | Poster Presentation
In the exabyte era, physical science research infrastructures will have to deal with massive quantities of raw data by relying on large heterogeneous computing facilities. In the LHCb context, the ODISSEE project aims to maximize the computational performance and reliability of those systems while reducing the required energy and the total cost of ownership by using AI tools and techniques. By...
Go to contribution page -
Rosa Petrini (Universita e INFN, Firenze (IT)) | Track 8 - Analysis infrastructure, outreach and education | Poster Presentation
Over the past decade, Machine Learning and Artificial Intelligence technologies have evolved at an extraordinary pace, making collaboration among geographically distributed experts and students more critical than ever.
The AI_INFN Platform is designed to play a key role in providing access to hardware accelerators for research communities in both fundamental and applied physics.
The platform...
Go to contribution page -
Xuantong Zhang (Institute of High Energy Physics, Chinese Academy of Sciences (CN)), Dr Yujiang Bi (Institute of High Energy Physics, Chinese Academy of Sciences) | Track 8 - Analysis infrastructure, outreach and education | Poster Presentation
With the emergence and continuous evolution of various new development and analysis tools like Jupyter and VSCode, the demand for interactive data analysis has been steadily increasing, leading to significant changes in the traditional high-energy physics data analysis workflow.
To meet the growing and evolving needs of high-energy physics users in data analysis and processing, an all-in-one...
Go to contribution page -
Mr Dian Liu (Institute of High Energy Physics) | Track 9 - Analysis software and workflows | Poster Presentation
Large scientific facilities such as the High Energy Photon Source (HEPS) generate massive volumes of heterogeneous experimental data during operation. These data exhibit remarkable diversity in terms of scale, structure, and distribution characteristics, imposing extremely high requirements on the real-time response capability and long-term archive storage efficiency of data processing...
Go to contribution page -
Hong Wang | Track 9 - Analysis software and workflows | Poster Presentation
High-energy physics experiments such as BESIII produce large volumes of event-level data stored in ROOT-based formats and represented by collections of particle tracks and associated information. While these data are fundamental to physics analyses, their highly structured representations are not directly compatible with modern large language models (LLMs) and AI-driven reasoning systems....
Go to contribution page -
Shiyuan Li (Nanyang Normal University) | Track 9 - Analysis software and workflows | Poster Presentation
Space astronomy satellites serve as critical infrastructure in the field of astrophysics, and data processing is one of the most essential processes for conducting scientific research. The Institute of High Energy Physics (IHEP) of the Chinese Academy of Sciences has undertaken the development and construction of multiple space astronomy satellites, including HXMT, GECAM, SVOM, eXTP and CATCH....
Go to contribution page -
Christopher Barnes (IT-CD-CC) | Track 7 - Computing infrastructure and sustainability | Poster Presentation
CERN operates a large and distributed computing environment in which provisioning, configuration, and operational state are handled by different systems. Since 2012, the IT department has invested heavily in bridging these areas under the Agile Infrastructure project. Open-source projects such as OpenStack, Puppet, and Foreman have been integrated with in-house services to offer a cohesive...
Go to contribution page -
Torri Jeske (Jefferson Lab) | Track 2 - Online and real-time computing | Poster Presentation
At Jefferson Lab, the CEBAF Online Data Acquisition (CODA) kit and the commonly used front-end electronics modules have recently been upgraded to support streaming readout data acquisition (DAQ). Depending on the use case, the streaming DAQ data may consist primarily of empty time frames during cosmic runs, or it may be dominated by background signals. A tool kit that applies user-defined online...
Go to contribution page -
Christian Voss, Marina Sahakyan, Mr Tigran Mkrtchyan (DESY) | Track 1 - Data and metadata organization, management and access | Poster Presentation
The dCache project provides an open-source, highly scalable distributed storage system deployed at numerous laboratories worldwide. Its modular architecture supports high-rate data ingestion, WAN data distribution, efficient HPC access, and long-term archival storage. Although initially developed for high-energy physics, dCache now serves a broad range of scientific communities with diverse...
Go to contribution page -
Minghua Liao (Sun Yat-Sen University (CN)) | Track 8 - Analysis infrastructure, outreach and education | Poster Presentation
Visualization tools are used to display the detector geometry and event hit information. They play an important role in physics analysis, data quality monitoring, algorithm optimization, physics education, and public outreach. Unity, as a powerful game engine, exhibits advantages such as high-performance rendering, multi-platform support, and rich tools and features, making it suitable for...
Go to contribution page -
Mr George Raduta (CERN) | Track 1 - Data and metadata organization, management and access | Poster Presentation
The Bookkeeping application is the central logbook and state-tracking system of the ALICE experiment at CERN’s Large Hadron Collider, serving detector operations, data taking, and analysis workflows across Run 3, and the forthcoming Long Shutdown 3 (LS3) and Run 4. While its initial design addressed requirements anticipated before Run 3, operational experience and extended use by both...
Go to contribution page -
Ujval Madhu (Research Engineer) | Track 1 - Data and metadata organization, management and access | Poster Presentation
High-performance data management systems are foundational to modern scientific facilities, particularly in high-energy physics (HEP) and nuclear physics (NP) where experiments generate massive datasets. The Large Hadron Collider produces 5 petabytes daily, while the High-Luminosity LHC upgrade will require 10× greater capacity by 2030. Individual experiments document their solutions, yet...
Go to contribution page -
Lorenzo Rinaldi (Universita e INFN, Bologna (IT)) | Track 7 - Computing infrastructure and sustainability | Poster Presentation
The ICSC national research center was established as part of the Italian National Recovery and Resilience Plan (PNRR) with the aim of strengthening scientific research and technological innovation in the fields of supercomputing, big data, and quantum computing. This contribution presents the main activities conducted by the Italian community of the ATLAS experiment within the ICSC project,...
Go to contribution page -
Tarik Ourida | Track 2 - Online and real-time computing | Poster Presentation
Standard Level-1 trigger algorithms treat collision events as statistically independent, a design choice that simplifies implementation but prevents models from leveraging short term variations in detector performance. These fluctuations can transiently distort reconstructed features and weaken the stability of fast classification algorithms. To address this limitation, we introduce Context...
Go to contribution page -
Theodoros Chatzistavrou (National Technical Univ. of Athens (GR)) | Track 2 - Online and real-time computing | Poster Presentation
The LHC experiments have so far calibrated and re-reconstructed data typically years after the end of data-taking to make them usable for precision physics analyses, costing millions of CPU hours. This approach becomes untenable at the HL-LHC, with 10 times larger datasets. Jet energy corrections (JEC) are among the dominant sources of systematic uncertainty in many physics analyses and...
Go to contribution page -
Sebastian Wozniewski (Georg August Universitaet Goettingen (DE)) | Track 7 - Computing infrastructure and sustainability | Poster Presentation
On batch systems with many jobs sharing a worker node, draining a node so that it can be terminated for operational purposes without aborting jobs leads to idle CPU cores and lost compute time. This is becoming a prominent issue at German university-based Tier-2 centres in particular. Towards the High-Luminosity LHC, they are undergoing a transformation and CPU will be provided via...
Go to contribution page -
Gianmaria Del Monte (CERN) | Track 1 - Data and metadata organization, management and access | Poster Presentation
The continuous growth in data volumes and diversification of access patterns in high-energy physics (HEP) are driving interest in storage systems that offer both extreme performance and ease of use. To explore the potential of modern flash technologies for scientific workloads, we conducted a comprehensive benchmarking campaign for a PureStorage all-flash appliance, focusing on its...
Go to contribution page -
Anna Kravchenko (CERN), Felice Pantaleo (CERN) | Track 8 - Analysis infrastructure, outreach and education | Poster Presentation
The High-Luminosity LHC era is pushing experiments toward more complex software stacks, high throughput data processing, heterogeneous computing architectures, DAQ, and AI real-time decision making. To strengthen community capacity for next-generation trigger and data-processing systems, we present the CERN STEAM Academy: a 10-week, hands-on programme hosted at CERN developed within the Next...
Go to contribution page -
Valentin Volkl (CERN) | Track 4 - Distributed computing | Poster Presentation
The CernVM-Filesystem (CVMFS) is a global, read-only, on-demand filesystem optimized for software distribution. CVMFS is also a very efficient way of distributing container images and can be used with container runtimes such as Apptainer or Containerd to lazy-load images. The unpacked.cern.ch repository at CERN, a service that allows users to publish container images to CVMFS, has become one of...
Go to contribution page -
Yao Zhang | Track 3 - Offline data processing | Poster Presentation
Charged particle tracking is a critical task for physics analysis. In this work, we propose applying reinforcement learning (RL) for reconstructing particle trajectories in drift chambers. Our designed workflow uses the output of a graph neural network (GNN) as the observation for RL. Agent training employs a reward metric derived from Monte Carlo truth information, with the objective of...
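As a hedged sketch of this setup (an illustration of the pattern, not the authors' implementation), the agent-environment loop might look like the following, with the GNN edge scores as observations and an MC-truth-derived reward; the environment, feature count, and reward definition are assumptions made for the example.

```python
# Toy environment: the agent walks through candidate edges scored by a GNN
# and decides to keep or drop each one; MC truth provides the reward signal.
import numpy as np

class DriftChamberEnv:
    def __init__(self, gnn_edge_scores, truth_labels):
        self.scores = gnn_edge_scores   # GNN output: feature vector per edge
        self.truth = truth_labels       # MC-truth label per edge (0 or 1)
        self.step_idx = 0

    def reset(self):
        self.step_idx = 0
        return self.scores[self.step_idx]           # first observation

    def step(self, action):                         # action: keep (1) / drop (0)
        reward = 1.0 if action == self.truth[self.step_idx] else -1.0
        self.step_idx += 1
        done = self.step_idx >= len(self.scores)
        obs = None if done else self.scores[self.step_idx]
        return obs, reward, done

# Random-policy rollout, just to show the interaction loop.
env = DriftChamberEnv(np.random.rand(10, 4), np.random.randint(0, 2, 10))
obs, done, total = env.reset(), False, 0.0
while not done:
    obs, reward, done = env.step(np.random.randint(0, 2))
    total += reward
```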
Go to contribution page -
Laurence Field (CERN) | Track 7 - Computing infrastructure and sustainability | Poster Presentation
For over two decades, the LHC@home volunteer computing project has provided additional opportunistic computing capacity to support the scientific research conducted at CERN. With the retirement of the SixTrack application, the only natively executable one, there has been a significant reduction in job throughput. This paper highlights the difference in job throughput between the native and...
Go to contribution page -
CMS Collaboration | Track 9 - Analysis software and workflows | Poster Presentation
The CMS Collaboration has, for several years, relied on correctionlib as the central framework for producing, validating, and distributing analysis corrections in a unified and structured JSON-based format. Recent developments have significantly enhanced this framework. The deployment of correction files for all major physics objects has been fully automated through GitLab CI/CD workflows,...
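For context, the consumer side of correctionlib is a small, stable API: a correction is looked up by name in a JSON correction set and evaluated with the inputs its schema declares. The file name, correction name, and input values below are hypothetical placeholders.

```python
# Typical analyst-side usage of a correctionlib JSON file.
import correctionlib

cset = correctionlib.CorrectionSet.from_file("jet_corrections.json")
corr = cset["jet_energy_scale"]            # look up one correction by name
# The inputs (here pt, eta, variation) must match the schema in the JSON.
scale = corr.evaluate(45.0, 1.2, "nominal")
print(f"jet energy scale factor: {scale}")
```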
Go to contribution page -
CMS Collaboration | Track 1 - Data and metadata organization, management and access | Poster Presentation
Bandwidth and storage limitations are a key bottleneck for many CMS physics measurements and searches. To mitigate these constraints, CMS has developed a set of techniques that increase the number of events written to disk while maintaining physics performance. These strategies remain an active area of development and are being further optimized for Phase-2.
One such technique, RawPrime,...
Go to contribution page -
Chan-anun Rungphitakchai (Chulalongkorn University (TH)) | Track 4 - Distributed computing | Poster Presentation
The CMS collaboration operates a large distributed computing infrastructure to meet the computing requirements of the experiment. About half a million CPU cores and an exabyte of storage are utilized to reconstruct the recorded data, simulate signals of physics processes, and analyze data. Computing resources are located at about one hundred sites around the world.
Monitoring the...
Go to contribution page -
CMS Collaboration | Track 3 - Offline data processing | Poster Presentation
CMS is transitioning to use ROOT’s new RNTuple data storage format for the files CMS will write in the HL-LHC era. Based on initial tests, CMS expects faster I/O and smaller files compared to the present TTree storage format. This contribution will show a comprehensive performance comparison between RNTuple and TTree I/O using CMS AOD and MiniAOD data formats as test cases for both simulation...
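A rough sketch of such an I/O comparison is shown below, assuming the same events written once as a TTree and once as an RNTuple, and assuming a recent uproot version in which both formats can be opened and read through the same interface; file, object, and branch names are hypothetical.

```python
# Time reading one branch from a TTree file and an RNTuple file.
import time
import uproot

def read_branch(path, obj_name, branch):
    t0 = time.perf_counter()
    with uproot.open(path) as f:
        data = f[obj_name].arrays([branch])[branch]
    return len(data), time.perf_counter() - t0

for path in ["aod_ttree.root", "aod_rntuple.root"]:
    n, dt = read_branch(path, "Events", "Muon_pt")
    print(f"{path}: {n} entries in {dt:.3f} s")
```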
Go to contribution page -
Maksym Naumchyk (Princeton University (US)) | Track 6 - Software environment and maintainability | Poster Presentation
This presentation covers my recent project as an IRIS-HEP fellow, in which I worked on improving the Coffea 'schemas' by simplifying how they work internally. The work eventually grew into a new package holding all the simplified schemas, separated from Coffea; Coffea will eventually use them in place of its old schemas. This new package was given the name Zipper and...
Go to contribution page -
Dr Santiago Gonzalez De La Hoz (Univ. of Valencia and CSIC (ES)) | Track 7 - Computing infrastructure and sustainability | Poster Presentation
This work presents the consolidated contributions of the Spanish Tier-1 and Tier-2 centers to the computing infrastructure of the ATLAS experiment at the LHC. As of September 2025, our focus spans the final phase of Run 3, the ongoing preparations for the Long Shutdown 3 (LS3), and the strategic planning for the High-Luminosity LHC (HL-LHC) era. Our GRID infrastructure is continuously being...
Go to contribution page -
Jogi Suda Neto (University of Alabama (US)) | Track 3 - Offline data processing | Poster Presentation
The underlying likelihood of a given event originating from a partonic-level process is known to be approximately invariant under the Lorentz group. We find that quantum neural networks equivariant under such continuous symmetries exhibit improved generalization as well as improved sample and training-time complexity. We show that this property is induced by the number of distinct group orbits in the data, with...
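As a reminder of the symmetry being exploited: any feature set built from pairwise Minkowski inner products of the final-state four-momenta is exactly Lorentz-invariant,

```latex
p_i \cdot p_j = p_i^{\mu}\,\eta_{\mu\nu}\,p_j^{\nu} = E_i E_j - \vec{p}_i \cdot \vec{p}_j,
\qquad
(\Lambda p_i) \cdot (\Lambda p_j) = p_i \cdot p_j \quad \text{for all } \Lambda \in SO(1,3),
```

so a model restricted to such inputs sees every Lorentz frame identically; the equivariant architectures above build this structure into the network rather than into the inputs.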
Go to contribution page -
Eric Lancon (Brookhaven National Laboratory (US)) | Track 8 - Analysis infrastructure, outreach and education | Poster Presentation
The Collaborative Research Information Sharing Platform (CRISP) provides an integrated system for managing scientific collaboration workflows, documentation, and institutional knowledge for the future Electron Ion Collider (EIC). CRISP is designed to address practical challenges in coordinating activities across a distributed international collaboration with thousands of users, using a...
Go to contribution page -
Jize Yang (Sun Yat-Sen University) | Track 2 - Online and real-time computing | Poster Presentation
The Jiangmen Underground Neutrino Observatory (JUNO) is a large neutrino experiment located in southern China, aiming at determining the neutrino mass ordering, as well as other neutrino physics topics. JUNO completed detector commissioning and started data taking on Aug. 22, 2025. A data quality monitoring (DQM) system is critical for data taking, data quality control, and data analysis in any high...
Go to contribution page -
Robin Hofsaess | Track 7 - Computing infrastructure and sustainability | Poster Presentation
With this contribution, a data-driven method for the performance comparison of Grid sites is presented.
While the WLCG sites with an MoU typically report their performance in HS23, opportunistic sites, such as HPC or Tier-3 centers, usually don't.
For the comparison of opportunistically used HPC clusters in Germany, a method was developed to assess the performance of these sites based on CMS...
Go to contribution page -
Mr Andrea Paccagnella | Track 3 - Offline data processing | Poster Presentation
The LHCf experiment measures forward neutral particle production at the LHC, providing key inputs for the tuning of hadronic interaction models used in ultra-high-energy cosmic ray physics. The reconstruction of multi-photon final states in forward experiments represents a challenging offline computing problem, due to overlapping showers, non-uniform detector response, and strong correlations...
Go to contribution page -
Dr Mateusz Zarucki (CERN) | Track 2 - Online and real-time computing | Poster Presentation
The Next Generation Triggers (NGT) R3 (Real-time Reconstruction Revolution) project in CMS aims to rethink the experiment’s data acquisition system, allowing its physics programme to process all collisions accepted by the Level-1 hardware-based trigger system (L1T), in view of the Phase-2 upgrade for the HL-LHC. Its main objective is to expand the High-Level Trigger (HLT) data scouting...
Go to contribution page -
Caterina Marcon (Università degli Studi e INFN Milano (IT)), David Rebatto (Università degli Studi e INFN Milano (IT)) | Track 7 - Computing infrastructure and sustainability | Poster Presentation
INFN manages the DATAcloud infrastructure, a federated and scalable network of cloud computing sites. Within the framework of the ICSC project (National Research Center in High-Performance Computing, Big Data, and Quantum Computing), funded by the Italian National Recovery and Resilience Plan (PNRR), research and development activities are carried out to foster innovation in high-performance...
Go to contribution page -
Wojciech Krupa (CERN) | Track 5 - Event generation and simulation | Poster Presentation
The Gauss software is the main simulation framework in LHCb and handles both the event generation step and the tracking of particles through the detector material. Gauss has recently been restructured as a thin LHCb-specific software layer above an experiment-independent HEP simulation framework (Gaussino). In this talk we report on the steps that were taken toward the deployment and...
Go to contribution page -
Giovanni Zago (Universita e INFN, Padova (IT)) | Track 2 - Online and real-time computing | Poster Presentation
The Level-1 Data Scouting (L1DS) system introduces a new real-time data acquisition and processing path in CMS that captures information reconstructed by the Level-1 Trigger at the full 40 MHz collision rate, without any preselection. For the HL-LHC era, the Level-1 Trigger will undergo a major architectural evolution, delivering significantly richer and higher-quality reconstructed physics...
Go to contribution page -
Anurag Sritharan (Deutsches Elektronen-Synchrotron (DE)) | Track 6 - Software environment and maintainability | Poster Presentation
The CMS experiment will upgrade its detectors to cope with higher luminosities and collision rates during the High-Luminosity era of the LHC. One key upgrade of CMS is the High Granularity Calorimeter (HGCAL), which will completely replace the current end-cap calorimeter. The hadronic calorimeter is split into two sections using different technologies, depending on the expected amount of...
Go to contribution page -
Yuning Su (Sun Yat-Sen University (CN)) | Track 5 - Event generation and simulation | Poster Presentation
The detector identifier and geometry management system plays an important role in the offline software of every nuclear and particle physics experiment. The Jiangmen Underground Neutrino Observatory (JUNO), a large neutrino experiment whose design started in 2013, has completed detector construction and began data taking in 2025. We will describe the design and implementation of the JUNO detector identifier...
Go to contribution page -
Carlos Brito (Federal University of Rio de Janeiro (BR)) | Track 6 - Software environment and maintainability | Poster Presentation
Developing systems with reusability in mind is often a challenge. Even when a common context for system deployment is identified, some groundwork is required before it can be adopted by different teams. The Glance project at CERN addresses this challenge by implementing modular development and reuse across over 20 systems spanning four experiments: ALICE, ATLAS, CMS and LHCb. Originally...
Go to contribution page -
Xuesen Wang | Track 8 - Analysis infrastructure, outreach and education | Poster Presentation
Taishan Anti-neutrino Observatory (TAO) is a satellite experiment of Jiangmen Underground Neutrino Observatory (JUNO). It is located near the Taishan nuclear power plant (NPP) to monitor the neutrinos emitted from the NPP.
Event display is a critical tool in High Energy Physics (HEP) experiments. It supports monitoring of data taking, data quality control, event simulation, reconstruction, and...
Go to contribution page -
Mike Clymer (for the DUNE Collaboration) | Track 1 - Data and metadata organization, management and access | Poster Presentation
DUNE is a next-generation neutrino oscillation experiment. During its decades-long operational lifetime, it is expected that many exabytes of data will be collected. It is critical that this data be correctly characterized with respect to its associated conditions metadata – the non-event data used to process event data during reconstruction and analysis. To meet the operational scale and...
Go to contribution page -
Mohamed Aly (Princeton University (US)) | Track 9 - Analysis software and workflows | Poster Presentation
The JAX framework provides automatic differentiation, JIT compilation, vectorization, and multi-hardware acceleration well-suited for statistical inference in HEP. In this contribution, we present an ecosystem of interoperable tools that leverage the power of JAX, with a focus on everwillow, an inference tool agnostic to the underlying statistical model. At the modelling layer of this...
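To illustrate the general pattern (a generic sketch, not the everwillow API): a negative log-likelihood written in JAX gets exact gradients and JIT compilation essentially for free, which is what makes gradient-based inference over large statistical models practical.

```python
# Gaussian negative log-likelihood with autodiff gradients in JAX.
import jax
import jax.numpy as jnp

def nll(params, data):
    mu, log_sigma = params
    sigma = jnp.exp(log_sigma)              # keep sigma positive
    z = (data - mu) / sigma
    return jnp.sum(0.5 * z**2 + log_sigma + 0.5 * jnp.log(2 * jnp.pi))

data = jnp.array([4.9, 5.1, 5.3, 4.7, 5.0])
params = jnp.array([0.0, 0.0])
grad_nll = jax.jit(jax.grad(nll))           # JIT-compiled exact gradient

for _ in range(200):                        # plain gradient descent on the NLL
    params = params - 0.01 * grad_nll(params, data)
print(params)                               # fitted mu approaches ~5.0
```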
Go to contribution page -
Anwar Ibrahim | Track 5 - Event generation and simulation | Poster Presentation
In this work, we investigate diffusion-based generative models as a fast simulation alternative for modeling detector response on the example of the electromagnetic calorimeter response for the LHCb experiment. We consider both classical denoising diffusion probabilistic models with Gaussian noise and their extension based on Gamma-distributed noise, which is expected to be better suited for...
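For context, the classical Gaussian DDPM referred to above corrupts a shower image $x_0$ through a fixed forward process and learns to reverse it; the Gamma-noise variant replaces the Gaussian kernel below with a Gamma-distributed one:

```latex
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\; \sqrt{1-\beta_t}\,x_{t-1},\; \beta_t I\right),
\qquad
q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\; \sqrt{\bar{\alpha}_t}\,x_0,\; (1-\bar{\alpha}_t)\, I\right),
\quad
\bar{\alpha}_t = \prod_{s=1}^{t}(1-\beta_s).
```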
Go to contribution page -
Mr Andrey Shevel (Petersburg Nuclear Physics Institute named by B.P. Konstantinov of National Research Centre «Kurchatov Institute» (NRC «Kurchatov Institute» - PNPI)) | Track 7 - Computing infrastructure and sustainability | Poster Presentation
Traditional server network monitoring relies on specialized tools and complex queries, demanding significant domain expertise and being time-consuming. We propose a Digital Twin (DT) framework that provides a real-time, unified model of network behavior, enabling intuitive natural-language interactions powered by large language models (LLMs).
The DT fuses live telemetry from monitoring...
Go to contribution page -
CMS Collaboration | Track 4 - Distributed computing | Poster Presentation
The CMS Submission Infrastructure (SI) provisions and orchestrates the compute resources used for CMS data processing, simulation, and analysis. While the SI has reliably supported Run-3 operations at scales of several hundred thousand concurrent jobs across Grid, HPC, and cloud sites, the computational demands of the HL-LHC era require a substantially more scalable and robust system. To...
Go to contribution page -
Jacob Calcutt (Brookhaven National Laboratory (US)) | Track 1 - Data and metadata organization, management and access | Poster Presentation
The DUNE collaboration has an ongoing production effort to simulate the full detectors and to analyze the various prototypes that are currently running. Rucio is used to manage the 40PB of files made to date. When 500 or more jobs were sending output to Rucio simultaneously via Rucio upload, we observed timeouts, unhandled exceptions, and Rucio server restarts due to slow performance. In...
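Whatever server-side fixes are adopted, the generic client-side pattern for surviving a momentarily overloaded service is retry with jittered exponential backoff. A minimal sketch follows, where `rucio_upload` is a hypothetical stand-in for the actual upload call rather than part of the Rucio API.

```python
# Retry a flaky upload with exponential backoff and full jitter.
import random
import time

def upload_with_backoff(rucio_upload, item, max_retries=5, base_delay=2.0):
    for attempt in range(max_retries):
        try:
            return rucio_upload(item)
        except Exception:                    # e.g. timeouts under server load
            if attempt == max_retries - 1:
                raise
            # Full jitter spreads retries out, so 500 concurrent jobs do not
            # hammer the server again in lockstep.
            time.sleep(random.uniform(0, base_delay * 2**attempt))
```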
Go to contribution page -
Jan de Cuveland (Goethe University Frankfurt (DE)) | Track 2 - Online and real-time computing | Poster Presentation
The CBM experiment at GSI/FAIR will investigate QCD matter at high baryon densities with a free-streaming, self-triggered detector readout delivering time-stamped data on approximately 5000 input links. Designed for aggregate data rates exceeding 1 TB/s, the First-level Event Selector (FLES) system performs timeslice building, aggregating these streams into overlapping processing intervals for...
Go to contribution page -
Dr Giordon Holtsberg Stark (University of California, Santa Cruz (US)) | Track 9 - Analysis software and workflows | Poster Presentation
Statistical modeling is central to discovery in particle physics, yet the tools commonly used to define, share, and evaluate these models are often complex, fragmented, or tightly coupled to legacy systems. In parallel, the scientific Python community has developed a variety of statistical modeling tools that have been widely adopted for their performance and ease of use, but remain...
Go to contribution page -
Pablo Saiz (CERN) | Track 8 - Analysis infrastructure, outreach and education | Poster Presentation
Diversity awareness requires that we provide all CERN-made multi-media content with subtitles and make them fully searchable, addressing in particular the needs of persons with impairments and speakers of foreign languages. The goal of the “Transcription and Translation as a Service” (TTaaS) software [1] is to deliver a performant, privacy-preserving and cost-efficient Automated Speech...
Go to contribution page -
Nikita Chalyi (Tomsk State University (TSU)) | Track 5 - Event generation and simulation | Poster Presentation
In this work, we describe enhancements to the hadronic de-excitation models implemented in the Geant4 toolkit. We extend the comprehensive and independent validation system for these models, covering a wide range of tests in the moderate energy region, from the reaction threshold to 3 GeV. The underlying processes have a defining impact on the formation of hadronic showers and the resulting...
Go to contribution page -
Dr Maximilian Horzela (Georg August Universitaet Goettingen (DE)) | Track 1 - Data and metadata organization, management and access | Poster Presentation
The future Inner Tracker (ITk) of the ATLAS experiment will replace the current Inner Detector to maintain excellent tracking and vertexing performance under the challenging conditions of the High-Luminosity LHC (HL-LHC). It must withstand significantly increased radiation levels and occupancy while handling higher data rates and extending forward coverage. At the same time, with more than 150...
Go to contribution page -
Mario Rey Regulez (CERN) | Track 7 - Computing infrastructure and sustainability | Poster Presentation
For several years, CERN has provided access to Windows remote desktops through its Windows Terminal Infrastructure service. As the need for stronger security measures grew, particularly around connections using Microsoft’s Remote Desktop Protocol, we began exploring ways to integrate Two-Factor Authentication (2FA) into this critical service. This presented unique challenges in CERN’s academic...
Go to contribution page -
Woohyeon Heo (University of Seoul, Department of Physics (KR)) | Track 2 - Online and real-time computing | Poster Presentation
The ME0 Gas Electron Multiplier (GEM) detector systems will be installed for the phase-2 upgrade of the Compact Muon Solenoid (CMS) experiment in the Large Hadron Collider (LHC). The ME0 detectors, located in each endcap of the muon system, are the only muon detectors that cover the range 2.4 < |eta| < 2.8. Due to the high background environment, keeping the trigger rate low while maintaining...
Go to contribution page -
Leonardo Mira Marins (Federal University of Rio de Janeiro (BR)) | Track 6 - Software environment and maintainability | Poster Presentation
The European Organization for Nuclear Research (CERN), home to the Large Hadron Collider, hosts one of the world’s largest particle physics experiments, the ATLAS experiment. To effectively support administration, workflow management, and scientific communication within ATLAS, the Glance project was established in 2003 to provide web-based automated solutions for membership, analysis tracking,...
Go to contribution page -
Tyler Anderson (LBNL) | Track 1 - Data and metadata organization, management and access | Poster Presentation
Long-running high energy physics experiments often depend on legacy architectures for orchestrating their data. While these custom tools can be effective, the expertise to maintain them is often concentrated in limited personnel, which raises concerns of software sustainability and long-term maintenance. Transitioning to a community-supported standard like Rucio, created at CERN, offers a...
Go to contribution page -
Ting-Hsiang Hsu (National Taiwan University (TW)) | Track 9 - Analysis software and workflows | Poster Presentation
Foundation models are large neural networks pretrained on vast datasets and adapted to many downstream tasks with minimal task-specific training. In high-energy physics, precise Monte Carlo event generators allow the simulation of billions of events, but the enormous space of beyond-Standard-Model scenarios makes training specialized large models for each analysis computationally impractical....
Go to contribution page -
Jack Charlie Munday | Track 7 - Computing infrastructure and sustainability | Poster Presentation
The Kubernetes platform operated by CERN IT has supported scientific computing, online services and accelerator controls since 2016. It enables fully automated deployment and management of clusters with native integration to CERN storage systems (CVMFS, EOS, AFS, CEPH), authentication (SSO, Kerberos) and networking. Today the service spans more than 600 clusters across CERN’s two main...
Go to contribution page -
Mario Gonzalez (CERN) | Track 6 - Software environment and maintainability | Poster Presentation
The CMS experiment relies on a complex software ecosystem for detector simulation, event reconstruction, and physics analysis. As data rates and detector complexity continue to rise, scaling this software efficiently across distributed resources has become essential. We present the extension of the CMS Software (CMSSW) into a fully distributed application, enabling a single logical workflow to...
Go to contribution page -
Marco Verlato (INFN, Padova (IT)) | Track 7 - Computing infrastructure and sustainability | Poster Presentation
CloudVeneto is a private cloud aimed at scientific communities, based on OpenStack software and designed in 2013 to support INFN projects, initially mostly Nuclear Physics and HEP ones. Over the last 12 years it has evolved by integrating resources and use cases of several Departments of the University of Padova. It currently supports several scientific disciplines of different domains, but it...
Go to contribution page -
Jade Chismar (UC San Diego) | Track 2 - Online and real-time computing | Poster Presentation
The upgrade of the Large Hadron Collider (LHC) to the High-Luminosity LHC (HL-LHC) will increase the number of proton-proton collisions by several-fold, and thus place a large demand on computing resources for charged particle tracking. The Line Segment Tracking (LST) algorithm is a novel, highly parallelizable algorithm that can run efficiently on GPUs and has been integrated into the CMS...
Go to contribution page -
Florian Uhlig (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE)) | Track 9 - Analysis software and workflows | Poster Presentation
FairRoot is a framework for simulation, reconstruction, and analysis of nuclear and high energy physics experiments. It provides the necessary building blocks that allow users to easily implement their specific experimental setup. Originally started as a project at GSI focused on a specific experiment, FairRoot has evolved into a platform widely used by various experiments worldwide,...
Go to contribution page -
Ruben Lopez Ruiz (Universidad de Cantabria and CSIC (ES)), Celia Fernandez Madrazo (Boston University (US)), Sergio Sanchez Cruz (Universidad de Oviedo (ES)), Lara Lloret Iglesias (Universidad de Cantabria and CSIC (ES)), Pablo Martinez Ruiz Del Arbol (Universidad de Cantabria and CSIC (ES)) | Track 5 - Event generation and simulation | Poster Presentation
Muography is an emergent non-destructive testing technique that uses cosmic muons to probe the interior of objects and structures. This technique can be employed to perform preventive maintenance of critical equipment in the industry in order to test the structural integrity of the facility. Several muography imaging algorithms based on machine learning methods are being developed in the...
Go to contribution page -
Dr Alexey Boldyrev | Track 5 - Event generation and simulation | Poster Presentation
The Focusing Aerogel Ring Imaging CHerenkov detector (FARICH) is a promising particle identification technology for the SPD experiment. A free-running (triggerless) data acquisition pipeline to be employed in the SPD results in a high data rate, necessitating new approaches to event generation and simulation of detector responses. In this work, we propose several machine learning based approaches...
Go to contribution page -
Mr Andrey Kirianov (A.Alikhanyan National Science Laboratory (AM)) | Track 4 - Distributed computing | Poster Presentation
The Spin Physics Detector (SPD), currently under construction at the NICA complex at JINR, is expected to generate large volumes of data. It is therefore assumed that at least some members of the SPD Collaboration will contribute significant computing and storage resources. Unlike in large-scale grids, the number of participating sites is not so large and most of them will be located in Russia...
Go to contribution page -
Berk Balci (CERN), Francesco Giacomini (INFN CNAF) | Track 4 - Distributed computing | Poster Presentation
INDIGO IAM is a central Identity and Access Management service for distributed research infrastructures, supporting authentication and authorization at scale. As the number of relying services and users continues to grow, improving the performance and efficiency of IAM operations has become a key objective. One of the most significant performance bottlenecks identified in the current...
Go to contribution page -
Yipu Liao (Institute of High Energy Physics, CAS, Beijing) | Track 3 - Offline data processing | Poster Presentation
Denoising and track reconstruction in drift chambers are fundamental to particle identification and momentum measurement at electron-positron colliders. While Transformer architectures have revolutionized many sequence-processing domains, their potential for track reconstruction in high-energy physics has not been fully explored. In this work, we introduce Transformer-based methods at two stages...
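A minimal sketch of the general idea (layer sizes and the three per-hit features are illustrative assumptions, not the authors' configuration): treat the hits of an event as a sequence and let a Transformer encoder emit a per-hit signal-versus-noise score.

```python
# Per-hit denoising with a small Transformer encoder (PyTorch).
import torch
import torch.nn as nn

class HitDenoiser(nn.Module):
    def __init__(self, n_features=3, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)     # per-hit signal/noise logit

    def forward(self, hits):                  # hits: (batch, n_hits, n_features)
        x = self.encoder(self.embed(hits))
        return self.head(x).squeeze(-1)       # logits: (batch, n_hits)

model = HitDenoiser()
logits = model(torch.randn(8, 200, 3))        # 8 events, 200 hits each
keep = torch.sigmoid(logits) > 0.5            # mask of hits retained as signal
```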
Go to contribution page -
Nivedita Prasad, Zhechka Toteva (CERN) | Track 7 - Computing infrastructure and sustainability | Poster Presentation
From In-House Tools to HashiCorp Vault: CERN's Transition to Modern Scalable Secrets Management
CERN's computing infrastructure manages thousands of services across a complex distributed environment, requiring robust secret management for application credentials, root accounts, certificates, and service tokens. This paper explores CERN's transition from puppet-oriented, in-house secrets management solutions to HashiCorp Vault as a centralized, enterprise-level secret management...
Go to contribution page -
Dr Naomi Jarvis (Carnegie Mellon University) | Track 3 - Offline data processing | Poster Presentation
GlueX is a hadronic physics photoproduction experiment based at Jefferson Lab. The GlueX spectrometer and beamline detectors include over a dozen individual detectors whose performance and calibrations are mostly independent. During data collection, the data are divided into a series of runs, lasting up to 2 hours each, with the run boundaries acting as calibration boundaries. Data quality...
Go to contribution page -
Tadej Novak (Jozef Stefan Institute (SI)) | Track 5 - Event generation and simulation | Poster Presentation
Simulating physics processes and detector responses is essential in high energy physics and represents significant computing costs. Generative machine learning has been demonstrated to be potentially powerful in accelerating simulations, outperforming traditional fast simulation methods. The efforts have focused primarily on calorimeters.
This contribution presents the very first studies on...
Go to contribution page -
Rosa Petrini (Universita e INFN, Firenze (IT)) | Track 5 - Event generation and simulation | Poster Presentation
Diamond detectors with laser-graphitized electrodes orthogonal to the surface are emerging as fast, full-carbon sensors for applications ranging from High Energy Physics to Nuclear Medicine. Recent advances in low-resistance electrode fabrication have enabled sub-100 ps timing performance. However, accurately modeling signal formation remains challenging due to the intertwined effects of...
Go to contribution page -
Minh-Tuan Pham (University of Wisconsin Madison (US)) | Track 3 - Offline data processing | Poster Presentation
Reconstructing particle trajectories is a significant challenge in most particle physics experiments and a major consumer of CPU resources. It can typically be divided into three steps: seeding, track finding, and track fitting. Seeding involves identifying potential trajectory candidates, while track finding entails associating detected hits with the corresponding particle. Finally, track...
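As a toy illustration of the final step only, here is a straight-line least-squares fit to the hits assigned to one candidate, assuming an idealized 2D detector with no magnetic field; production track fits use helical models and Kalman filters instead.

```python
# Least-squares straight-line fit x(z) = a + b*z to smeared hits.
import numpy as np

rng = np.random.default_rng(0)
z = np.linspace(0.0, 1.0, 10)                       # detector layer positions
x = 0.3 + 1.7 * z + rng.normal(0.0, 0.01, z.size)   # hits from a true track

A = np.stack([np.ones_like(z), z], axis=1)          # design matrix for (a, b)
p, *_ = np.linalg.lstsq(A, x, rcond=None)
print(f"fitted intercept={p[0]:.3f}, slope={p[1]:.3f}")  # close to 0.3, 1.7
```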
Go to contribution page -
Aleksandr Svetlichnyi (INR RAS, MIPT(NRU)) | Track 5 - Event generation and simulation | Poster Presentation
Relativistic heavy-ion collisions serve as a primary tool for investigating the fundamental properties of matter under extreme conditions. The theoretical modeling of these interactions relies on various computational models whose predictive power often fluctuates across different kinematic ranges and physical observables. Furthermore, the underlying complex phenomenological chains are...
Go to contribution page -
Yaosong Cheng (Institute of High Energy Physics Chinese Academy of Sciences, IHEP) | Track 1 - Data and metadata organization, management and access | Poster Presentation
China's High Energy Photon Source (HEPS) will complete facility construction and commissioning by the end of 2025. Data acquisition and analysis have already begun. The 14 beamlines of the first phase of HEPS will generate approximately 300PB of raw data annually, with further expansion expected in the future. This not only poses significant challenges for the reliability and read-write...
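A back-of-envelope number makes the scale concrete (assuming roughly uniform operation over the year):

```latex
300~\mathrm{PB/yr} \;\approx\; \frac{3 \times 10^{17}~\mathrm{B}}{3.15 \times 10^{7}~\mathrm{s}} \;\approx\; 9.5~\mathrm{GB/s}
```

of sustained ingest before any replication, reprocessing, or user reads, which is what drives the reliability and read-write requirements discussed in this contribution.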
Go to contribution page -
CMS Collaboration | Track 4 - Distributed computing | Poster Presentation
Sciences such as High Energy Physics, Computational Biology, and other communities use distributed computing facilities to find the solutions to problems that require the execution of computationally intensive algorithms. The Open Science Grid (OSG) enables access to over 100 individual compute clusters spanning the globe for scientists from these disciplines. These sites, primarily at...
Go to contribution page -
Manfred Peter Fackeldey (Princeton University (US)) | Track 9 - Analysis software and workflows | Poster Presentation
Modern analyses in high-energy physics (HEP) have high memory requirements due to the sheer volume of data collected in experiments at the Large Hadron Collider (LHC) at CERN.
Awkward Array recently released a new version of lazy arrays (“virtual arrays”) that mitigates this problem by loading only the columns required for HEP analysis. Nevertheless, these columns can still add up in size,...
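A hedged sketch of the column-pruning idea follows (file and branch names are hypothetical): only the fields the analysis actually touches are materialized from disk, and downstream awkward operations stay on the pruned record.

```python
# Read two columns instead of the full event record, then cut on them.
import awkward as ak
import uproot

with uproot.open("nanoaod.root") as f:
    events = f["Events"].arrays(["Muon_pt", "Muon_eta"])

mask = ak.any((events["Muon_pt"] > 30) & (abs(events["Muon_eta"]) < 2.4), axis=1)
selected = events[mask]
print(len(selected), "of", len(events), "events pass")
```

Go to contribution page -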
CMS Collaboration | Track 2 - Online and real-time computing | Poster Presentation
Our world has witnessed a massive explosion of data and a surge of machine learning (ML) and AI applications. The result is an ever-increasing need for higher throughput and real-time computing capabilities. The Large Hadron Collider (LHC) and its experiments provide the perfect benchmark to bring the recent industry developments and explore beyond-the-state-of-the-art technologies to process...
Go to contribution page -
Rafaella Lenzi Romano (Federal University of Rio de Janeiro (BR)) | Track 6 - Software environment and maintainability | Poster Presentation
The ATLAS experiment involves over 6,000 members, including students, physicists, engineers, and researchers. This dynamic CERN environment brings up some challenges, such as information centralisation, communication, and the continuity of workflows. To overcome these challenges, the ATLAS Glance Team has developed and maintained several automated systems that rely on CERN’s Group Management...
Go to contribution page -
David Schultz (University of Wisconsin-Madison) | Track 4 - Distributed computing | Poster Presentation
As part of the IceCube Neutrino Observatory's move to the Pelican Platform for data transfer, our production workflow management tools also needed to be updated. There were two major changes happening at the same time: moving from X.509 certificates to tokens, and gathering the tokens at the initial dataset submission rather than during the job processing. Some significant problems had to be...
Go to contribution page -
Jeremy Wilkinson (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE)) | Track 8 - Analysis infrastructure, outreach and education | Poster Presentation
Jupyter is a powerful tool for data visualisation and interactive analysis with Python, and in particular JupyterHub offers a simplified way for users to run their workflows on dedicated HPC hardware. The use of JupyterHub is already widespread among many research centres and computing clusters. However, many of the existing deployments rely on specialised network setups such as a dedicated...
Go to contribution page -
Beatriz Moraes Vivacqua (Federal University of Rio de Janeiro (BR)), Pawel Kopciewicz (CERN) | Track 2 - Online and real-time computing | Poster Presentation
The LHCb experiment in Run 3 features a full software trigger: the GPU-based HLT1 with O(100) and the CPU-based HLT2 with O(4000) trigger lines. Human control of every aspect of data quality in a complex system of this scale is extremely difficult and requires a high degree of automation. IntelliRTA is a monitoring dashboard that provides a holistic view of the trigger lines and the data...
Go to contribution page -
Caley Luce Yardley (University of Sussex (GB)) | Track 8 - Analysis infrastructure, outreach and education | Poster Presentation
In fields of research such as high-energy physics at the Large Hadron Collider (LHC), making the “big data” accessible to the public comes with its own set of challenges; traditional methods of public release put the onus on individuals to first acquire specific coding skills and may assume certain requirements on computing resources are met. This motivates the development of interactive and...
Go to contribution page -
Sacha Emile R Medaer (CERN) | Track 8 - Analysis infrastructure, outreach and education | Poster Presentation
High Energy Physics (HEP) experiments increasingly rely on large volumes of Monte Carlo (MC) simulation data to estimate radiation levels and activation scenarios. Within the LHCb collaboration, we present a new system developed to simplify the management and exploration of such MC simulation outputs as obtained with the FLUKA code: the Analysis platform for Radiation Environment Simulations...
Go to contribution page -
Dr Geonmo Ryu (Korea Institute of Science & Technology Information (KR)) | Track 7 - Computing infrastructure and sustainability | Poster Presentation
In small-scale scientific infrastructures typically consisting of 3–7 nodes, industry-standard orchestrators like Kubernetes often introduce an "operational gap" due to their resource-heavy control planes. Furthermore, traditional overlay networks such as VXLAN introduce significant latency and CPU overhead, which hinders the performance of data-intensive distributed scientific computing....
Go to contribution page -
Mr Zhenyuan Wang (Computing center, Institute of High Energy Physics, CAS, China) | Track 4 - Distributed computing | Poster Presentation
With the escalating processing demands of modern high-energy physics experiments, traditional monitoring tools are faltering under the dual pressures of cumbersome deployment and coarse-grained observability in high-throughput production environments. JobLens is a lightweight, one-click-deployable data collector designed to deliver fine-grained, job-level observability for HEP workloads. Its...
Go to contribution page -
Mr Jiaheng Zou (IHEP, Beijing)Track 3 - Offline data processingPoster Presentation
The Jiangmen Underground Neutrino Observatory (JUNO) is a large-scale neutrino experiment with multiple physics goals. After its completion at the end of 2024, commissioning for data taking began, followed by the commencement of official data-taking on August 26, 2025. The raw data acquired by the JUNO DAQ system is stored in a custom binary format. After transmission to the data center, this...
Go to contribution page -
Saptaparna Bhattacharya (Southern Methodist University (US))Track 5 - Event generation and simulationPoster Presentation
Fast and reliable event generation can be achieved with GPU-compatible matrix-element generators such as Madgraph and Pepper. In this talk, we present the first benchmarking exercise running these event generators in ATLAS-specific production workflows. The gains are reported as improvements in gridpack production times (gridpacks contain precomputed matrix elements) as well as in event generation...
Go to contribution page -
Zhijun Li (Sun Yat-Sen University (CN))Track 8 - Analysis infrastructure, outreach and educationPoster Presentation
In high energy physics experiments, visualization plays a crucial role in detector design, data quality monitoring, offline data processing, and has great potential for improving physics analysis. In addition to traditional physics data analysis based on statistical methods, visualization offers unique intuitive advantages in the search for rare signal events and in reducing background noise....
Go to contribution page -
Huey-Wen LinTrack 8 - Analysis infrastructure, outreach and educationPoster Presentation
LGT4HEP (High-Energy Physics Computing Traineeship for Lattice Gauge Theory) is a multidisciplinary training initiative designed to prepare the next generation of researchers in computational lattice field theory and high-performance computing. The program emphasizes rigorous coursework, including lattice QCD and advanced computational methods, paired with hands-on experience on...
Go to contribution page -
Pawel Kopciewicz (CERN)Track 6 - Software environment and maintainabilityPoster Presentation
This talk presents the development of an agentic chatbot for the LHCb experiment, a project realized in cooperation with ItGPT, the AI chatbot collaboration at CERN. The assistant is intended to support learning, operations, software development, and data analysis tasks.
The LHCb knowledge base is structured in three access tiers: public, CERN-shared, and internal. The internal knowledge...
Go to contribution page -
Daniele Martello (Università del Salento & INFN Lecce)Track 3 - Offline data processingPoster Presentation
The Pierre Auger Observatory collects vast amounts of complex spatio-temporal data from extensive air showers induced by ultra-high-energy cosmic rays (UHECRs), i.e., those with energies above $10^{18}$ eV. Determining the mass composition of the primary particle is a key challenge, as direct measurements are impossible and traditional analytical methods struggle with the complexity of shower...
Go to contribution page -
Federico Andrea Corchia (Universita e INFN, Bologna (IT))Track 3 - Offline data processingPoster Presentation
Identification (“tagging”) of hadronic jets associated with charm and bottom quarks is crucial for many experimental signatures explored with the ATLAS detector at the LHC. Soft Muon Tagging (SMT) is a tagging technique based on the identification of muons from $b/c \to \mu + X$ within hadronic jets, complementary to other jet-based algorithms. With the SMT algorithm, muons can be used as a proxy...
Go to contribution page -
Yana Holoborodko (Princeton University (US))Track 7 - Computing infrastructure and sustainabilityPoster Presentation
We present a modular alarm and visualization framework designed to detect and interpret network anomalies that lead to performance degradation in WLCG infrastructures. The system consists of two interoperable components: Alarms And Alerts System, a Kubernetes-based backend that ingests perfSONAR measurements and automatically identifies routing changes, performance degradations, and related...
Go to contribution page -
Peidong Yu (IHEP)Track 5 - Event generation and simulationPoster Presentation
The Jiangmen Underground Neutrino Observatory (JUNO) is a multi-purpose experiment featuring a 20,000-ton liquid scintillator central detector, a water Cherenkov detector, and a top tracker, primarily designed to determine the neutrino mass ordering. Following the completion of detector construction in late 2024, the detector was successively filled with ultrapure water and then liquid scintillator (LS). As LS...
Go to contribution page -
Ioannis Tsanaktsidis (CERN)Track 6 - Software environment and maintainabilityPoster Presentation
The continuous ingestion of scientific documents from external sources into INSPIREHEP created challenges in scalability, transparency, and long-term maintenance. This contribution describes the migration of our document harvesting and curation pipeline to the open-source workflow orchestrator Apache Airflow. The work involved re-engineering legacy scripts and cron-based tasks into modular...
Go to contribution page -
Leonardo Giannini (Univ. of California San Diego (US))Track 3 - Offline data processingPoster Presentation
The mkFit algorithm offers an implementation of Kalman-filter-based track reconstruction that exploits both thread- and data-level parallelism. mkFit has been adopted by the CMS collaboration as the main track-building algorithm for both the Run-3 offline and online track reconstruction, and it has been shown to speed up track building by 3.5x on average, while retaining or improving...
Go to contribution page -
Gaia Grosso (IAIFI, MIT), Shelley Tong (Massachusetts Inst. of Technology (US))Track 9 - Analysis software and workflowsPoster Presentation
Machine-learning-based anomaly detection (AD) offers a promising, model-agnostic alternative to traditional LHC analyses, allowing us to search for many signals at once. Recent AI advances in representation learning motivate the use of neural embeddings to map detector data into low-dimensional latent spaces, preserving critical features (Metzger et al., Phys. Rev. D 112, 072011 (2025))...
Go to contribution page -
Catalin Codreanu (Technical University of Cluj-Napoca (RO)), Cristian Schuszter (CERN)Track 6 - Software environment and maintainabilityPoster Presentation
Modern financial operations in large scientific organizations increasingly rely on sustainable, modular, and well-integrated software ecosystems. Over the past years, the FAP-BC group of CERN has focused on modernizing key financial processes by adopting service-oriented approaches, strengthening system integrations, and reducing long-term maintenance costs.
This paper presents recent work...
Go to contribution page -
Dr Alex Owen (NetDRIVE Champion, Queen Mary University of London), Dr Sudha Ahuja (Queen Mary University of London)Track 7 - Computing infrastructure and sustainabilityPoster Presentation
NetDRIVE (NetZero Digital Research Infrastructure Vision and Expertise) [1] is the UK Research and Innovation (UKRI) project developing plans and expertise to tackle NetZero issues around the UK’s government funded research computing or digital research infrastructure (DRI). Following on from the UKRI DRI NetZero Scoping project [2], NetDRIVE is a £4M project spread over the course of 2.5...
Go to contribution page -
James Connaughton (University of Warwick (GB))Track 2 - Online and real-time computingPoster Presentation
The LHCb experiment at the LHC employs a fully-software trigger to reconstruct and select events in real time. Key to this approach is the topological beauty (b) trigger, a set of algorithms which select decays of hadrons containing b quarks based on their distinct topology, i.e., highly displaced candidates with a large momentum. For Run 3 of the LHC, these algorithms were reimplemented...
Go to contribution page -
Andrei Berngardt (Tomsk State University)Track 5 - Event generation and simulationPoster Presentation
We present a new method for generating neutron cross-section (XS) data sets from evaluated nuclear data (HP) that improves the accuracy of XS approximation in resonance regions while maintaining computational efficiency for HEP applications in Geant4. Our approach supplements the standard XS datasets with additional dedicated resonance (R) files for the low-energy region, which is defined... (A toy interpolation sketch follows this entry.)
Go to contribution page -
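To make the supplementation idea above concrete, here is a toy sketch assuming entirely synthetic cross-section grids and simple linear interpolation; the actual Geant4 data formats and lookup logic are not reproduced.

```python
# Toy illustration (not the Geant4 implementation): a coarse base
# cross-section grid supplemented by a dense "resonance" grid that
# takes precedence inside its low-energy window.
import numpy as np

# Hypothetical base grid: log-spaced energies (eV) with a smooth XS trend.
base_e = np.logspace(0, 7, 200)            # 1 eV .. 10 MeV
base_xs = 10.0 / np.sqrt(base_e)           # schematic 1/v behaviour

# Hypothetical dedicated resonance (R) file: dense points below 10 keV
# capturing a narrow peak the coarse grid would smear out.
res_e = np.linspace(1.0, 1e4, 50_000)
res_xs = 10.0 / np.sqrt(res_e) + 50.0 * np.exp(-((res_e - 350.0) / 2.0) ** 2)

def lookup_xs(e):
    """Return XS at energy e, preferring the dense resonance grid
    inside its window and falling back to the base grid elsewhere."""
    if res_e[0] <= e <= res_e[-1]:
        return np.interp(e, res_e, res_xs)
    return np.interp(e, base_e, base_xs)

print(lookup_xs(350.0))   # on-resonance: the dense grid resolves the peak
print(lookup_xs(1e6))     # far above the window: the coarse grid suffices
```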
Yuri Smirnov (Northern Illinois University (US))Track 1 - Data and metadata organization, management and accessPoster Presentation
Calibration Operations Manager Bot for ATLAS Tile Calorimeter (COMBAT) is the next-generation of the calibration management system developed for the High-Luminosity LHC era. It combines modern AI techniques with a fully asynchronous, scalable architecture to meet the evolving operational demands of the ATLAS experiment, including the transition in the database from COOL to CREST for the...
Go to contribution page -
Bruno Alves (CERN)Track 2 - Online and real-time computingPoster Presentation
The Next-Generation Trigger (NGT) program for the CMS High Level Trigger (HLT) aims at enabling full-rate recording of all the events accepted by the Level-1 Trigger at 750 kHz via a dedicated NGT Scouting stream, performing complete physics event reconstruction with no additional filtering. Reconstructed objects are stored directly in a lightweight NanoAOD format, delivering analysis-ready...
Go to contribution page -
Carla Sophie Rieger (Technische Universitat Munchen (DE))Track 1 - Data and metadata organization, management and accessPoster Presentation
Efficient database operations are crucial for processing inherently structured data. We investigate the transfer of classical database operations to their counterparts on uniform quantum superposition states of quantum data. Such data may originate from future experiments that incorporate quantum sensors and quantum memories, or by using quantum encoded classical data. Since quantum states...
Go to contribution page -
Victor Leopoldo Munoz Flores (Fermi National Accelerator Lab. (US))Track 4 - Distributed computingPoster Presentation
The File Transfer Service (FTS3) is a distributed data movement service developed at CERN and widely used to transfer data across the Worldwide LHC Computing Grid (WLCG). At Fermilab, FTS3 supports data transfers for multiple experiments, including Intensity Frontier experiments such as DUNE, enabling reliable data movement between WebDAV endpoints in Europe and the Americas.
At CHEP 2021, we...
Go to contribution page -
Dr Simon Blyth (IHEP, CAS)Track 5 - Event generation and simulationPoster Presentation
Opticks is an open-source framework that accelerates Geant4-based detector simulations by offloading the optical photon simulation to the GPU using NVIDIA OptiX ray tracing and NVIDIA CUDA computation. Geant4 detector geometries are auto-translated into mostly analytic Constructive Solid Geometry forms, with only computationally demanding shapes like tori converted...
Go to contribution page -
Daniel Magdalinski (Nikhef)Track 2 - Online and real-time computingPoster Presentation
The LHCb experiment operates a full-software trigger comprising two stages, labelled HLT1 and HLT2. The two stages are separated by a disk buffer, which not only allows the HLT2 processing to be asynchronous with respect to data taking, but also allows real-time alignment and calibration to be performed prior to HLT2 processing. HLT2 then performs full offline-level reconstruction and...
Go to contribution page -
Alexander Rogovskiy (Rutherford Appleton Laboratory), Jyothish Thomas (STFC)Track 1 - Data and metadata organization, management and accessPoster Presentation
XRootD-Ceph is a storage plugin that provides access to a Ceph object store via the xroot protocol. At RAL, we use this plugin for our disk storage element. Although it has proven suitable for production-quality high-throughput storage, a few optimizations were needed to ensure optimal performance. In this talk we discuss the evolution of the plugin at RAL.
Some changes to the plugin were dictated by...
Go to contribution page -
Shawn Gregory Zaleski (Rheinisch Westfaelische Tech. Hoch. (DE))Track 7 - Computing infrastructure and sustainabilityPoster Presentation
As the Large Hadron Collider (LHC) finishes collecting data in Run 3, future data collection and analysis will require even more data storage and more powerful, efficient dedicated computing resources, since much more data will be collected in future runs.
From the beginning of LHC operation 15 years ago, the German ATLAS and CMS groups provided massive dedicated grid...
Go to contribution page -
JIANLI LIU (Computing Center, Institute of High Energy Physics, Chinese Academy of Sciences)Track 9 - Analysis software and workflowsPoster Presentation
X-ray photon correlation spectroscopy (XPCS) retrieves the nanoscale dynamics of materials by analyzing photon intensity fluctuations in synchrotron X-ray scattering signals. The multitau algorithm computes correlations at delay times across different temporal scales through a hierarchical binning approach, which both covers a wide temporal range and controls computational... (A toy multitau sketch follows this entry.)
Go to contribution page -
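The following is a minimal sketch of the multitau idea under simplifying assumptions (symmetric normalization, overlapping lags recomputed across levels); the production XPCS implementation will differ.

```python
# Toy multitau autocorrelation: each level computes a handful of linear
# lags, then the signal is re-binned by pairwise averaging so the next
# level probes lags twice as coarse -- logarithmic lag coverage at
# roughly linear cost.
import numpy as np

def multitau(signal, lags_per_level=8, levels=6):
    sig = np.asarray(signal, dtype=float)
    taus, g2 = [], []
    dt = 1
    for _ in range(levels):
        for k in range(1, lags_per_level + 1):
            if k >= len(sig):
                return np.array(taus), np.array(g2)
            num = np.mean(sig[:-k] * sig[k:])
            den = np.mean(sig[:-k]) * np.mean(sig[k:])
            taus.append(k * dt)
            g2.append(num / den)
        # Pairwise re-binning halves the sampling rate for the next level.
        n = len(sig) // 2
        sig = 0.5 * (sig[: 2 * n : 2] + sig[1 : 2 * n : 2])
        dt *= 2
    return np.array(taus), np.array(g2)

rng = np.random.default_rng(0)
intensity = rng.poisson(100, size=100_000).astype(float)
taus, g2 = multitau(intensity)
print(taus[:5], g2[:5])  # g2 ~ 1 for uncorrelated counts
```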
Wei SunTrack 6 - Software environment and maintainabilityPoster Presentation
We present a performance‑portable lattice gauge theory simulation library implemented using the Kokkos parallel programming model. The library supports efficient Monte Carlo simulations of SU(N) gauge theories across diverse hardware architectures—including CPUs (via OpenMP and Serial backends), NVIDIA GPUs (CUDA), AMD GPUs (HIP), and Intel GPUs (SYCL)—all from a single source code base. It...
Go to contribution page -
Mohammad Nasir Jan Momed (Deutsches Elektronen-Synchrotron (DE))Track 2 - Online and real-time computingPoster Presentation
When the HL-LHC starts in a few years, the CMS experiment will be confronted with far more complex proton-proton collision events as well as an increased data-logging rate. Present projections suggest that the CPU demands for reconstruction and processing will grow beyond the capacity expected from usual technology progress. Therefore, an effort has been started to optimize software to...
Go to contribution page -
Michael Johnson (University of Manchester)Track 1 - Data and metadata organization, management and accessPoster Presentation
Data volumes and rates of research infrastructures will continue to increase in the coming years and impact how we interact with their final data products. Little of the processed data can be directly investigated, and most of it will be processed automatically with as little user interaction as possible. Capturing all necessary information about such processing ensures reproducibility of the...
Go to contribution page -
Rocky Bala Garg (Stanford University (US))Track 3 - Offline data processingPoster Presentation
The optimization of tracking parameters in particle track reconstruction is a high-dimensional, non-convex problem with significant impact on tracking efficiency, resolution, and computational performance. As detector complexity and pileup increase, conventional heuristic and local optimization methods face scalability limitations. In this work, we will investigate quantum optimization...
Go to contribution page -
Parichehr Kangazian Kangazi (The Iranian Ministry of Science, Research and Technology (IR))Track 3 - Offline data processingPoster Presentation
Identifying jets originating from the decay of highly boosted heavy particles in colliders plays a crucial role in uncovering potential signs of physics beyond the Standard Model. Despite significant progress in jet-origin classification algorithms, particularly graph neural networks, the rapidly increasing volume of collider data and the demand for faster and more efficient processing...
Go to contribution page -
Supanut Thanasilp (Chulalongkorn University)Track 3 - Offline data processingPoster Presentation
Quantum systems are well known to create non-classical patterns. The prospect that they could also be used to recognize highly complex patterns hidden in data is highly exciting, and has led to the young interdisciplinary field of quantum machine learning (QML). Nevertheless, while a quantum advantage in data analysis can in principle be achieved thanks to the exponentially large Hilbert space,...
Go to contribution page -
Xuantong Zhang (Institute of High Enegry Physics, Chinese Academy of Sciences (CN))Track 8 - Analysis infrastructure, outreach and educationPoster Presentation
The Interactive Analysis Workbench (INK) is a web-based, open-source interactive computing platform developed at the Institute of High Energy Physics, Chinese Academy of Sciences (IHEP, CAS), to address the growing demands of high-energy physics users for interactive data processing, efficient data access, and collaborative analysis workflows. INK enables users to interactively access IHEP...
Go to contribution page -
Valentina Camagni (Università degli Studi e INFN Milano (IT))Track 2 - Online and real-time computingPoster Presentation
The CMS Phase-2 Level-1 Trigger (L1T) Scouting program introduces real-time software reconstruction at the full 40 MHz rate, enabling physics analyses directly at trigger level. One of the most promising applications is the reconstruction of low-transverse-momentum (soft) hadronic tau leptons, which are essential for searches for low-mass scalars ϕ → ττ but are poorly reconstructed by existing...
Go to contribution page -
Bostjan Macek (Jozef Stefan Institute (SI))Track 2 - Online and real-time computingPoster Presentation
Future high-energy physics (HEP) experiments will operate under extreme real-time constraints, where online filtering and trigger decisions increasingly define the ultimate physics reach. Although machine learning is now widely used in online systems, current deployments are almost exclusively limited to inference with offline-trained models. In this contribution, we investigate a complementary and...
Go to contribution page -
Daniel Nieto (IPARCOS-UCM)Track 2 - Online and real-time computingPoster Presentation
The Cherenkov Telescope Array Observatory (CTAO) represents the next generation of ground-based gamma-ray telescopes, designed to probe the very-high-energy (VHE) sky above 20 GeV with unprecedented sensitivity. The northern array (CTAO-North) will be composed of an ensemble of Medium-Sized Telescopes (MSTs) and four Large-Sized Telescopes (LSTs), the latter designed to detect the...
Go to contribution page -
Aashay Arora (Univ. of California San Diego (US))Track 1 - Data and metadata organization, management and accessPoster Presentation
The increasing adoption of columnar data formats and lightweight event representations, such as CMS NanoAOD, has made remote data access a significant factor in the performance of physics analysis workflows. In this context, understanding the performance characteristics of different data serving technologies under realistic network conditions is critical.
This work presents a comparative...
Go to contribution page -
Ben Jones (CERN)Track 4 - Distributed computingPoster Presentation
The WLCG Tier-0 Accounting service provides accounting information for scientific computing resources at CERN (Batch, HPC, and BOINC). The service delivers essential information on CPU and walltime usage, which plays a key role in decision-making and planning processes for CERN resource managers across experiments and departments. It also supplies monthly usage data to the WLCG Accounting...
Go to contribution page -
Tibor Simko (CERN)Track 9 - Analysis software and workflowsPoster Presentation
Dask is a Python library for scaling Python analysis code from local computers to large data centre clusters. It is becoming more popular in the astronomy and particle physics communities for carrying out data analyses. We describe how we extended the REANA reproducible analysis platform to support Dask workloads. Special attention was paid to respecting the Dask version requested by the analyst,... (A generic Dask usage sketch follows this entry.)
Go to contribution page -
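As a rough illustration of the kind of Dask workload such a platform executes, here is a generic, self-contained example; the REANA workflow specification and cluster provisioning are not shown, and the LocalCluster stands in for the cluster REANA would provide.

```python
# Minimal Dask workload sketch: a chunked out-of-core computation
# driven through a distributed Client.
import dask.array as da
from dask.distributed import Client, LocalCluster

if __name__ == "__main__":
    # Locally this is a LocalCluster; on a managed platform the Client
    # would instead be pointed at the provisioned scheduler address.
    client = Client(LocalCluster(n_workers=2, threads_per_worker=1))

    # A 1e8-element array split into 1e6-element chunks.
    x = da.random.normal(0.0, 1.0, size=(100_000_000,), chunks=1_000_000)
    mean, std = da.compute(x.mean(), x.std())
    print(f"mean={mean:.4f} std={std:.4f}")

    client.close()
```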
闫明宇 myyanTrack 7 - Computing infrastructure and sustainabilityPoster Presentation
Traditional penetration testing is highly labour-intensive, with fragmented tooling and cumbersome processes. To address these issues, this paper designs an automated penetration testing system to reduce the manpower and time spent during testing. The paper proposes an overall design framework for the system, which comprises two components: an...
Go to contribution page -
Kasidit Srimahajariyapong (Chulalongkorn University)Track 3 - Offline data processingPoster Presentation
The rapid proliferation of quantum machine learning (QML) has highlighted critical bottlenecks in conventional Variational Quantum Algorithms (VQAs), particularly regarding trainability, scalability, and the absence of rigorous optimal solution guarantees. These challenges motivate us to search for alternative optimization paradigms. In this work, we introduce the Double-Bracket Quantum...
Go to contribution page -
Andrea RendinaTrack 1 - Data and metadata organization, management and accessPoster Presentation
INFN-CNAF is the national computing center of the INFN (National Institute for Nuclear Physics), dedicated to research and development in information technologies for subnuclear, nuclear, and astroparticle physics. CNAF hosts the largest INFN data center and operates a WLCG Tier-1 site.
For more than 15 years, tape data management at CNAF has been handled using the Grid Enabled Mass Storage...
Go to contribution page -
AVIK DE (universiti malaya)Track 9 - Analysis software and workflowsPoster Presentation
Scalar-Tensor Extension of Non-Metricity Gravity
We present a computation-first study of scalar-tensor extensions of symmetric teleparallel...
Go to contribution page -
Mateusz Jakub Fila (CERN)Track 2 - Online and real-time computingPoster Presentation
The Next Generation Trigger (NGT) project at CERN aims to extract more physics information from the High Luminosity LHC data. To achieve this, GPUs and other accelerators are being increasingly adopted in LHC experiments, running both procedural code and AI/ML inferences.
As a result, formerly CPU-only modules in the event reconstruction frameworks now interleave their computations with...
Go to contribution page -
Frank Ellinghaus (Bergische Universitaet Wuppertal (DE))Track 6 - Software environment and maintainabilityPoster Presentation
Monte-Carlo (MC) simulations play a key role in high energy physics. MC generators and their interfaces to the experiment-specific software framework evolve continuously. Thus, a periodic validation is indispensable for obtaining reliable and reproducible physics simulations. For that purpose, ATLAS has developed a central semi-automated validation system: PMG Architecture for Validating Evgen...
Go to contribution page -
John WinnickiTrack 9 - Analysis software and workflowsPoster Presentation
LUX-ZEPLIN (LZ) is a dark matter direct-detection experiment using a dual-phase xenon time projection chamber. The LZ experiment has set world-leading limits on WIMP-nucleon interactions. At low energies, backgrounds built from the spurious pairing of unrelated charge and light signals, also known as accidentals, pose a significant analysis challenge. In this work, we study modern unsupervised... (An illustrative outlier-detection sketch follows this entry.)
Go to contribution page -
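As one plausible unsupervised approach to this kind of problem, the sketch below scores synthetic events with scikit-learn's IsolationForest; the features and populations are invented for illustration and are not the LZ analysis.

```python
# Generic unsupervised-outlier sketch: score candidate events built
# from paired light (S1) and charge (S2) pulses, flagging
# accidental-like pairings as outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-event features: [S1 area, S2 area, drift time].
physical = np.column_stack([
    rng.normal(50, 5, 5000),      # correlated S1/S2 population
    rng.normal(2000, 150, 5000),
    rng.uniform(0, 900, 5000),
])
accidental = np.column_stack([    # uncorrelated pairings scatter widely
    rng.uniform(5, 120, 200),
    rng.uniform(200, 6000, 200),
    rng.uniform(0, 900, 200),
])
events = np.vstack([physical, accidental])

model = IsolationForest(contamination=0.04, random_state=0).fit(events)
scores = model.decision_function(events)   # lower = more anomalous
print("flagged as outliers:", int((model.predict(events) == -1).sum()))
```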
Nora Bluhme (Goethe University Frankfurt (DE))Track 3 - Offline data processingPoster Presentation
The Compressed Baryonic Matter (CBM) experiment at the upcoming Facility for Antiproton and Ion Research (FAIR) will investigate heavy-ion collisions at interaction rates of up to $10^7\, \text{s}^{-1}$.
To fully exploit the intrinsic precision of the tracking detectors, an accurate alignment of all sensor elements is essential. Track-based software alignment determines small but critical...
Go to contribution page -
Gaia Grosso (IAIFI, MIT)Track 9 - Analysis software and workflowsPoster Presentation
Modern machine learning has revolutionized our ability to extract rich and versatile data representations across scientific domains. However, the statistical properties of these representations are often poorly controlled, challenging the design of robust downstream anomaly detection (AD) methods.
We identify three principled desiderata for anomaly detection in latent spaces under minimal... (A toy latent-space scoring sketch follows this entry.)
Go to contribution page -
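A toy example of scoring anomalies in a latent space, assuming a simple Gaussian model of the reference embeddings; this illustrates the general setting, not the specific method of the contribution.

```python
# Model the reference (background) embeddings as a multivariate
# Gaussian and rank query events by Mahalanobis distance.
import numpy as np

rng = np.random.default_rng(7)
reference = rng.normal(0, 1, size=(10_000, 16))   # background embeddings
queries = np.vstack([rng.normal(0, 1, (5, 16)),   # background-like
                     rng.normal(3, 1, (5, 16))])  # anomaly-like

mu = reference.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))

def mahalanobis(x):
    d = x - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

print(mahalanobis(queries))  # anomaly-like rows score far higher
```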
Paul James Laycock (Universite de Geneve (CH))Track 8 - Analysis infrastructure, outreach and educationPoster Presentation
The first direct observation of Gravitational Waves (GWs) in 2015, produced by the collision of black holes, instigated a demand for open access to GW data. The Einstein Telescope will increase the detection rate of GWs by a factor of a thousand compared to current detectors, producing information-rich data containing a wealth of astrophysical signals. This surge in information density,...
Go to contribution page -
Matthias Schott (CERN / University of Mainz)Track 7 - Computing infrastructure and sustainabilityPoster Presentation
Training state-of-the-art neural networks for high-energy physics (HEP) tasks typically requires massive, fully simulated datasets—whose generation is both computationally expensive and experiment-specific. In this work, we demonstrate that this dependence on large-scale full simulations can be drastically reduced by leveraging pretrained models trained on fast-simulation data. These...
Go to contribution page -
Ilias Tsaklidis (University of Bonn)Track 9 - Analysis software and workflowsPoster Presentation
SysVar is a Python package that provides an end-to-end solution for the treatment and propagation of systematic uncertainties in analyses relying on templates generated from simulated data.
Propagating systematic uncertainties from correction weights into templates while preserving correlations in the signal-extraction variables becomes increasingly challenging as analyses scale in size... (A generic weight-variation sketch follows this entry.)
Go to contribution page -
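The underlying operation such a package automates can be sketched generically as follows; SysVar's actual API is not reproduced here, and all names, weights, and shapes below are invented.

```python
# Build nominal and systematically varied templates from per-event
# weights; bin-to-bin correlations are kept by re-histogramming the
# same events under each weight variation.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
observable = rng.exponential(50.0, n)          # signal-extraction variable
w_nominal = np.ones(n)
# Hypothetical correction-weight variations (e.g. a scale factor up/down).
w_sf_up = w_nominal * rng.normal(1.05, 0.01, n)
w_sf_dn = w_nominal * rng.normal(0.95, 0.01, n)

bins = np.linspace(0, 300, 31)
templates = {
    name: np.histogram(observable, bins=bins, weights=w)[0]
    for name, w in [("nominal", w_nominal),
                    ("sf_up", w_sf_up), ("sf_dn", w_sf_dn)]
}
# Relative up-variation per bin, ready to feed into a fit model.
rel = templates["sf_up"] / templates["nominal"] - 1.0
print(rel[:5])
```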
Hao Hu (Institute of High Energy of Physics)Track 1 - Data and metadata organization, management and accessPoster Presentation
China’s High Energy Photon Source (HEPS) is the first national high-energy synchrotron radiation light source and one of the world’s brightest fourth-generation synchrotron radiation facilities. It began operation and user experiments at the end of 2025.
The 14 Phase I beamlines of HEPS are projected to produce more than 300 PB of raw data annually. Efficiently storing,...
Go to contribution page -
Stefano Dal Pra (INFN)Track 8 - Analysis infrastructure, outreach and educationPoster Presentation
The Open Access Repository, active since 2020, is the official INFN archive to host its research outputs according to FAIR principles. We describe its architectural and functional evolution, marked by the migration from Invenio v3 to the high-availability deployment based on Invenio RDM. Several technical issues due to the large "version jump" between the source and target platforms have been...
Go to contribution page -
Dr Alexandre CamsonneTrack 2 - Online and real-time computingPoster Presentation
The Solenoidal Large Intensity Device (SoLID) at Jefferson Laboratory (JLab) is a large-acceptance detector designed to handle the high luminosity available at JLab. I will present the plans for the baseline triggered data acquisition system for the two main configurations and also discuss a streaming readout option.
Go to contribution page -
Dr Danila Oleynik (Joint Institute for Nuclear Research (RU))Track 4 - Distributed computingPoster Presentation
The Spin Physics Detector (SPD) collaboration is building a versatile detector at the second interaction point of the NICA (Nuclotron-based Ion Collider fAcility) complex. As the detector's development progresses and the physics research program evolves, the demands for advanced data processing capabilities increase.
A defining feature of the facility is its triggerless (free-run) Data...
Go to contribution page -
Yisheng Fu (Chinese Academy of Sciences (CN))Track 5 - Event generation and simulationPoster Presentation
The LHCb experiment is planning a second major upgrade (Upgrade II) in the 2030s, with the goal of increasing the instantaneous luminosity to $1.0\times10^{34}$ cm$^{-2}$s$^{-1}$. This upgrade aims to enhance the study of heavy flavor physics and to search for potential signals of new physics in the beauty and charm quark sectors. To operate under the demanding conditions of Upgrade II, characterized by higher...
Go to contribution page -
Yuri Smirnov (Northern Illinois University (US))Track 1 - Data and metadata organization, management and accessPoster Presentation
The ATLAS TileCalibWeb Robot application is the core tool of the Tile Calorimeter and the main interface for preparing and recording conditions and calibration data into the Online and Offline ORACLE databases, used daily by on-duty data quality control specialists and experts.
During LHC Run 3, TileCalibWeb Robot was significantly improved with numerous changes. These enhancements...
Go to contribution page -
saurav mittalTrack 9 - Analysis software and workflowsPoster Presentation
We present a topology-informed approach for classifying particle jets using persistent homology, a framework that captures the structural properties of point clouds. Particle jets produced in proton-proton collisions consist of cascades of particles originating from a common hard interaction. Each jet constituent is represented as a point in a three-dimensional feature space defined by the... (A toy persistence sketch follows this entry.)
Go to contribution page -
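A toy version of such a pipeline, assuming the `ripser` package and an invented three-dimensional feature choice (the entry's exact feature definition is truncated above):

```python
# Persistent homology of a mock jet point cloud; persistence lifetimes
# (death - birth) would become classifier input features.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(3)
n_const = 40
# A mock jet: constituents clustered around the jet axis, with
# illustrative features (eta, phi relative to the axis, log pT).
points = np.column_stack([
    rng.normal(0.0, 0.2, n_const),
    rng.normal(0.0, 0.2, n_const),
    np.log(rng.exponential(5.0, n_const) + 1.0),
])

# Persistence diagrams for connected components (H0) and loops (H1).
dgms = ripser(points, maxdim=1)["dgms"]
h0_life = dgms[0][:, 1] - dgms[0][:, 0]
print("largest finite H0 lifetimes:",
      np.sort(h0_life[np.isfinite(h0_life)])[-3:])
```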
Gianluca Sabella (University Federico II and INFN, Naples (IT))Track 7 - Computing infrastructure and sustainabilityPoster Presentation
The ICSC initiative (Italian National Centre for High-Performance Computing, Big Data, and Quantum Computing) is creating a flexible cloud platform to manage the escalating computational requirements of the High-Luminosity Large Hadron Collider (HL-LHC) and future collider projects. This approach leverages Kubernetes for orchestration and containerized deployments to streamline access to...
Go to contribution page -
Dr Ani Fox Bochenkov (CIQ)Track 7 - Computing infrastructure and sustainabilityPoster Presentation
Increasingly intensive AI and simulation workloads are driving thermal stress across large-scale HPC environments. As compute centres prepare for the next performance phase, conventional optimisation practices no longer align with ESG targets or hardware lifecycle requirements. This contribution presents a proven infrastructure-level methodology for energy-aware runtime orchestration that...
Go to contribution page -
Oliver Lantwin (Universitaet Siegen (DE))Track 5 - Event generation and simulationPoster Presentation
The SHiP experiment will search for new physics at the intensity frontier, particularly for feebly interacting particles. Full simulation of the signal and background is crucial to reach the planned sensitivity and to refine the subsystem designs for their TDRs. Besides standard event generators and Geant4, custom approaches are used for the efficient simulation of the thick target and...
Go to contribution page -
Diego Ciangottini (INFN, Perugia (IT))Track 1 - Data and metadata organization, management and accessPoster Presentation
The Italian National Institute for Nuclear Physics (INFN) has operated the largest scientific distributed computing infrastructure for more than 20 years: the Tier-1 at Bologna-CNAF and the 9 Tier-2 centres provide computing and storage resources to support more than 100 scientific collaborations.
In recent years this computing infrastructure has been expanded and modernized, also...
Go to contribution page -
Dr Alexey BoldyrevTrack 5 - Event generation and simulationPoster Presentation
Detector response simulation is a computationally expensive step in the Monte Carlo production chain for High Energy Physics experiments. For the MPD experiment at NICA (JINR), we developed a method to accelerate the simulation of the Time Projection Chamber (TPC) response using a Generative Adversarial Network (GAN). Trained on data from standard GEANT4-based simulations, the GAN replaces... (A schematic generator sketch follows this entry.)
Go to contribution page -
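A schematic of the generator-inference step such a fast simulation relies on; the architecture, dimensions, and conditioning variables below are placeholders, not the MPD model.

```python
# A trained generator maps noise plus track parameters to a TPC pad
# response, standing in for the detailed simulation at digitization time.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, noise_dim=16, cond_dim=4, out_pads=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, out_pads), nn.Softplus(),  # non-negative charges
        )

    def forward(self, noise, cond):
        return self.net(torch.cat([noise, cond], dim=-1))

gen = Generator()  # in practice: gen.load_state_dict(torch.load(...))
track_params = torch.tensor([[0.5, 1.2, -0.3, 0.8]])  # hypothetical conditioning
noise = torch.randn(1, 16)
with torch.no_grad():
    pad_response = gen(noise, track_params)
print(pad_response.shape)  # one sampled 64-pad charge response
```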
Mr Jiajv Wang, Prof. Linghui WuTrack 3 - Offline data processingPoster Presentation
An upgrade of the inner tracker for the BESIII experiment was completed in 2024. A three-layer Cylindrical GEM (CGEM) detector was installed in the BESIII detector, replacing the original inner drift chamber. For detector commissioning and alignment, cosmic-ray data were taken both with and without a magnetic field. A track reconstruction algorithm combining the CGEM inner tracker (CGEM-IT)...
Go to contribution page -
Angela Maria Burger (Centre National de la Recherche Scientifique (FR))Track 3 - Offline data processingPoster Presentation
Transformer architectures have rapidly become the state-of-the-art approach for machine-learning models across many domains in science, offering unprecedented performance on complex, high-dimensional tasks. Their adoption within the ATLAS experiment, starting with their usage for flavour tagging, has opened new opportunities, but also introduced substantial challenges regarding large-scale...
Go to contribution page -
Woojin Jang (University of Seoul, Department of Physics (KR))Track 9 - Analysis software and workflowsPoster Presentation
This study explores the feasibility of directly determining the CKM matrix element $|V_{ts}|$ through the rare top quark decay $t \to sW$ in the semileptonic final state of $t\bar{t}$ production. To overcome the significant background challenges inherent in this channel, we introduce a Transformer-based multi-domain $t\bar{t} \to sWbW$ signal event classifier that integrates both jet...
Go to contribution page -
CMS CollaborationTrack 2 - Online and real-time computingPoster Presentation
The High-Level Trigger (HLT) of the Compact Muon Solenoid (CMS) selects event data in real time, reducing the data rate from hundreds of kHz to few kHz for offline storage. With the upcoming Phase-2 upgrade of the CMS experiment, data volumes are expected to increase substantially, making efficient, lossless compression essential for sustainable storage and processing.
Recent work has shown... (A minimal codec-comparison sketch follows this entry.)
Go to contribution page -
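A minimal benchmark of two standard-library lossless codecs on a mock payload, to make the trade-off between ratio and speed concrete; the codecs, settings, and data actually studied by CMS may well differ.

```python
# Compare lossless compression ratio and time on a mock event payload.
import lzma
import time
import zlib
import numpy as np

rng = np.random.default_rng(5)
# Mock payload: float32 values with a limited dynamic range, which
# compresses better than uniformly random bytes would.
payload = rng.normal(20.0, 5.0, 1_000_000).astype(np.float32).tobytes()

for name, compress in [("zlib-6", lambda b: zlib.compress(b, 6)),
                       ("lzma", lzma.compress)]:
    t0 = time.perf_counter()
    out = compress(payload)
    dt = time.perf_counter() - t0
    print(f"{name}: ratio={len(payload) / len(out):.2f} time={dt:.2f}s")
```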
Mr Suwannachad Suwannajitt (Chulalongkorn University)Track 3 - Offline data processingPoster Presentation
Quantum Imaginary Time Evolution (QITE) has recently received increasing attention as a pathway for ground-state preparation on quantum hardware. However, the efficiency of this approach is frequently compromised by energy plateaus: dynamical regimes characterized by vanishing energy reduction, in which the system stagnates near metastable states. In this work, we dissect the anatomy of these...
Go to contribution page -
Andy MORRISTrack 6 - Software environment and maintainabilityPoster Presentation
Since 2015, LHCb’s central onboarding resource for new collaborators has been the Starterkit, a set of self-study lessons that also form the basis of an annual in-person workshop in Geneva. Ahead of Run 3 (2022–2026), a new version of the Starterkit was developed to accompany the Upgrade I software stack, with improved testing and updated exercises now used in the workshop.
However,...
Go to contribution page -
ATLAS CollaborationTrack 2 - Online and real-time computingPoster Presentation
Trigger bandwidth limitations constrain physics analyses that target low-mass resonances, where high-rate data collection is essential. To circumvent this limitation Trigger-Level Analysis (TLA) can be applied. A recent publication by the ATLAS experiment demonstrated this approach during LHC Run 2 by processing a massive dataset of over 60 billion events, more than twice the number of fully...
Go to contribution page -
Matthias Schott (CERN / University of Mainz)Track 9 - Analysis software and workflowsPoster Presentation
Neural networks (NNs) are inherently multidimensional classifiers that learn complex, non-linear relationships among input observables. While their flexibility enables unprecedented performance in high-energy physics (HEP) analyses, it also makes them sensitive to small variations in their inputs. Consequently, the propagation and estimation of systematic uncertainties in NN-based models...
Go to contribution page -
Mwai KarimiTrack 1 - Data and metadata organization, management and accessPoster Presentation
Modern data ecosystems are increasingly heterogeneous, with data and metadata distributed across multiple databases, file systems, and external services. This fragmentation creates challenges for organising data, managing systems, and enabling efficient access. This poster presents an approach for unifying access to distributed data sources using PostgreSQL Foreign Data Wrappers (FDWs)... (A minimal FDW sketch follows this entry.)
Go to contribution page -
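A minimal sketch of the postgres_fdw pattern the poster describes, driven from Python with hypothetical hosts, credentials, and table names; it assumes a role with the required privileges.

```python
# A remote metadata database is surfaced as foreign tables via
# postgres_fdw and then joined like any local table.
import psycopg2

ddl = """
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER metadata_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'metadata.example.org', dbname 'metadata', port '5432');

CREATE USER MAPPING FOR CURRENT_USER SERVER metadata_srv
    OPTIONS (user 'reader', password 'secret');

CREATE SCHEMA IF NOT EXISTS remote_meta;
IMPORT FOREIGN SCHEMA public FROM SERVER metadata_srv INTO remote_meta;
"""

query = """
SELECT f.path, m.run_number
FROM local_files AS f                              -- hypothetical local table
JOIN remote_meta.runs AS m ON m.file_id = f.id     -- remote join via FDW
LIMIT 10;
"""

with psycopg2.connect("dbname=catalog") as conn, conn.cursor() as cur:
    cur.execute(ddl)
    cur.execute(query)
    for row in cur.fetchall():
        print(row)
```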
Dr Marcus Ebert (University of Victoria)Track 1 - Data and metadata organization, management and accessPoster Presentation
We present an update on the usage of the Canadian Belle II raw data storage and computing infrastructure. The raw data storage system is ZFS-based and data access is managed by XRootD, without a WLCG-accessible tape system. The system has now been in production for two years, and we will present our experience with such a system and how it was extended beyond the use as a raw data...
Go to contribution page -
Prof. Qingmin ZhangTrack 5 - Event generation and simulationPoster Presentation
Geant4 is an object-oriented C++ toolkit widely used for simulating the passage of particles through matter, especially in nuclear physics research. However, its application requires a high level of programming proficiency, which often hinders broader adoption in scientific work. To lower the technical barriers associated with Geant4, we previously introduced a wizard-style GUI and modular...
Go to contribution page -
Dmitriy MaximovTrack 2 - Online and real-time computingPoster Presentation
The KEDR experiment is ongoing at the VEPP-4M $e^{+}e^{-}$ collider at Budker INP in Novosibirsk. The collider's center-of-mass energy range covers a wide spectrum from 2 to 11 GeV. Most of the statistics collected to date were taken at the lower end of the energy range, around the charmonia region. Activities at higher energies, up to the bottomonia, lead to a significant increase of event recording...
Go to contribution page -
Mingrun LiTrack 9 - Analysis software and workflowsPoster Presentation
Uproot-custom is an extension of the popular Python ROOT-IO library Uproot that offers a mechanism to enhance TTree data reading capabilities without relying on ROOT. It provides native support for reading more complex TTree data formats (such as deeply nested containers and memberwise-stored data), and a registration mechanism that allows users to customize reading logic to meet their... (A baseline Uproot sketch follows this entry.)
Go to contribution page -
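For context, baseline Uproot usage looks like the sketch below (the file, tree, and branch names are hypothetical); uproot-custom extends the interpretation layer through user-registered readers, whose API is not reproduced here.

```python
# Standard Uproot reading: open a ROOT file and pull branches as
# NumPy arrays without any ROOT installation.
import uproot

with uproot.open("events.root") as f:
    tree = f["Events"]
    # Simple branches are interpreted automatically...
    arrays = tree.arrays(["px", "py"], library="np")
    print(arrays["px"][:5])
    # ...whereas deeply nested or memberwise-stored branches are where
    # uproot-custom's user-registered reading logic takes over.
```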
Michal Svatos (Czech Academy of Sciences (CZ))Track 7 - Computing infrastructure and sustainabilityPoster Presentation
The distributed computing system of the ATLAS experiment at the Large Hadron Collider (LHC) uses resources from several EuroHPC facilities through both allocated and opportunistic access. HyperQueue, a meta-scheduler developed at IT4Innovations, the Czech National Supercomputing Center, enables the experiment's workload to be adapted to the many-core architecture typical of modern HPC systems....
Go to contribution page -
Francisco Borges Aurindo Barros (CERN)Track 7 - Computing infrastructure and sustainabilityPoster Presentation
For over a decade, content management systems at CERN have been served by the on-premise Drupal service. In response to the high maintenance requirements of Drupal, the growing adoption of WordPress and the need to improve user experience, site management and governance, the WordPress service was established. The WordPress service provides a managed platform designed to empower and support the...
Go to contribution page