ROOT Users Workshop 2025

Europe/Madrid
Pedro Alonso Jordá (Universitat Politécnica de Valencia), Marta Czurylo (CERN), Vincenzo Eduardo Padulano (CERN), Danilo Piparo (CERN), Philippe Canal (Fermi National Accelerator Lab. (US)), Silia Taider (CERN)
Description

The 13th ROOT Users Workshop will be held in Valencia, Spain. It is an occasion to discuss ROOT and its related activities today and to help shape the future of the project. The workshop features four and a half days of presentations, discussions, tutorials, and everything related to ROOT and its interplay with other HENP software projects. As the end of Run 3 approaches, this workshop is also an opportunity to reflect on how the ROOT project can help address the future computing challenges that the HL-LHC and other scientific experiments will pose. In particular, there will be four main topical areas of interest: Analysis, I/O & Storage, Math & Stats, and the Scientific Python Ecosystem. The event is foreseen to be in-person only to promote social interactions and the exchange of ideas. To celebrate the community spirit, the event will be complemented by a social dinner and a guided tour of the city of Valencia.

The participation of students is highly valued and encouraged. Hence, a limited number of reduced fee registrations are available for this category. Make sure to register soon to secure your spot!

Event partners


    • 08:00 09:00
      Workshop registration 1h
    • 09:00 10:30
      Morning Session I
      • 09:00
        Welcome from the Local Organisers 10m
      • 09:10
        An overview of the ROOT project 20m
        Speaker: Danilo Piparo (CERN)
      • 09:30
        Physics at the HL-LHC scale 30m
        Speaker: Arantza Oyanguren (IFIC - Valencia)
      • 10:00
        FCCAnalyses: a ROOT-based Framework for End-to-End Physics Analysis at the FCC 30m

        We present an overview of the FCC analysis framework, designed to streamline user workflows from event processing to final results. Built on top of ROOT’s RDataFrame and the EDM4hep data model, FCCAnalyses provides a coherent environment for dataset processing, visualization, plotting, and statistical fitting. We will highlight the full analysis chain from the user perspective, including different execution modes, distributed computing integration (HTCondor, Slurm, Dask Gateway), and technical integrations such as TMVA and machine learning interfaces. The presentation will conclude with a discussion of current challenges and planned improvements.

        Speaker: Juraj Smiesko (CERN)
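
        A minimal PyROOT sketch of the kind of RDataFrame-based chain described above; the tree, file, and column names are illustrative placeholders rather than the actual FCCAnalyses/EDM4hep schema.

        import ROOT

        ROOT.EnableImplicitMT()                                  # use all local cores
        df = ROOT.RDataFrame("events", "input.root")             # placeholder dataset

        sel = df.Filter("n_jets >= 2", "at least two jets") \
                .Define("leading_jet_pt", "jet_pt[0]")

        h = sel.Histo1D(("h_pt", ";leading jet p_{T} [GeV];Events", 100, 0., 500.),
                        "leading_jet_pt")
        sel.Snapshot("events", "skim.root", ["leading_jet_pt"])  # reduced dataset for later fits

        c = ROOT.TCanvas()
        h.Draw()
        c.SaveAs("leading_jet_pt.png")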
    • 10:30 11:00
      Coffee Break
    • 11:00 13:00
      Morning Session II: Reports from the LHC experiments
      • 11:00
        ALICE 30m
        Speaker: Sandro Christian Wenzel (CERN)
      • 11:30
        ATLAS 30m
        Speakers: Gordon Watts (University of Washington (US)), Vincenzo Innocente (Fermi National Accelerator Lab. (US))
      • 12:00
        CMS 30m
        Speaker: Philippe Canal (Fermi National Accelerator Lab. (US))
      • 12:30
        ROOT in the LHCb Software Framework - past, present and future 30m

        The LHCb Collaboration has been using ROOT in its software framework from the beginning, in more or less efficient ways. Here we summarize the evolution of our use of ROOT and then glimpse into the possible paths ahead in view of LHC Run 4 and LHCb Upgrade 2.

        Speaker: Marco Clemencic (CERN)
    • 13:00 14:30
      Lunch Break
    • 14:30 15:00
      Afternoon session I
      • 14:30
        Software at CERN between the 1980s and 2000 15m
        Speaker: Rene Brun
      • 14:45
        30 years of ROOT 15m
        Speaker: Fons Rademakers (CERN)
    • 15:00 16:30
      Afternoon session I: Training
    • 16:30 17:00
      Coffee Break
    • 17:00 18:30
      Afternoon Session II: Training
      • 17:00
        Implementation of an $H\rightarrow \tau\tau$ Analysis within a ROOT-based C++ Framework using ATLAS Open Data 20m

        We present the integration of an analysis of the Higgs boson decaying to a pair of tau leptons into a ROOT-based C++ framework, making use of the recent ATLAS Open Data release. The workflow demonstrates how modern analysis strategies can be easily implemented within a modular structure that combines event selection and object reconstruction. Particular emphasis is placed on the use of ROOT for handling large datasets and producing complex histograms. The framework is designed to be accessible to students and researchers, providing an educational platform while remaining scalable for advanced physics studies. Results are illustrated with reconstructed di-tau invariant mass distributions, highlighting the pedagogical value of the ATLAS Open Data resources used with ROOT.

        Speaker: Morvan Vincent (Univ. of Valencia and CSIC (ES))
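
        A minimal PyROOT sketch of the di-tau invariant-mass step described above; the tree and branch names (mini, tau_n, tau_pt, tau_eta, tau_phi, tau_m) are hypothetical stand-ins for the actual ATLAS Open Data branches.

        import ROOT

        df = ROOT.RDataFrame("mini", "higgs_tautau_sample.root")   # placeholder Open Data file

        m_tt = (df.Filter("tau_n == 2", "exactly two tau candidates")
                  .Define("m_tautau",
                          "ROOT::VecOps::InvariantMass(tau_pt, tau_eta, tau_phi, tau_m)")
                  .Histo1D(("m_tautau", ";m_{#tau#tau} [GeV];Events", 40, 0., 200.),
                           "m_tautau"))

        c = ROOT.TCanvas()
        m_tt.Draw()
        c.SaveAs("m_tautau.png")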
      • 17:20
        ROOT Training Session 1h 10m
        Speakers: Marta Czurylo (CERN), Silia Taider (CERN), Dr Vincenzo Eduardo Padulano (CERN)
    • 18:30 20:00
      Social programme
      • 18:30
        Welcome Reception 1h 30m
    • 08:55 09:00
      Announcements 5m
      Speaker: Danilo Piparo (CERN)
    • 09:00 10:30
      Morning Session I
    • 10:30 11:00
      Coffee Break
    • 11:00 13:00
      Morning Session II: Analysis I
      • 11:00
        Overview of the current RDataFrame efforts 20m
        Speakers: Marta Czurylo (CERN), Stephan Hageboeck (CERN), Dr Vincenzo Eduardo Padulano (CERN)
      • 11:20
        Benchmarking ROOT-Based Analysis Workflows for HL-LHC: The CMS INFN AF Use Case 20m

        For several years, INFN has been developing R&D projects in the context of CMS high-rate analyses, in view of the challenges of the HL-LHC phase starting in 2030. Several studies have since been carried out, resulting in a proposed prototype analysis facility compatible with CMS analysis frameworks (e.g. those based on Coffea and ROOT RDataFrame) and adopting open-source industry standards for flexibility and ease of use.

        All this gave the opportunity to implement systematic studies on ROOT-based workflows at scale. In this context, we present a concrete example of an analysis exploiting ROOT RDataFrame.

        Recently, new opportunities in Italy have made it possible to extend this benchmarking approach to specialised resources (named HPC bubbles) hosted at INFN Padova, featuring heterogeneous nodes: CPU nodes (192 cores, 1.5 TB RAM, NVMe storage) and GPU nodes (4 $\times$ NVIDIA H100 80 GB). We will present preliminary results and future plans for detailed studies of metrics such as event rate, I/O operations, and energy efficiency, as well as performance comparisons between TTree and RNTuple and a validation with a real CMS analysis.

        Speaker: Luca Pacioselli (INFN, Perugia (IT))
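
        A minimal sketch of the kind of event-rate measurement discussed above: timing a trivial RDataFrame event loop with implicit multithreading enabled. The file and tree names are placeholders, not the CMS datasets used in the study.

        import time
        import ROOT

        ROOT.EnableImplicitMT()                        # scale over the available cores
        df = ROOT.RDataFrame("Events", "nanoaod_sample.root")

        count = df.Count()                             # lazy action, booked before the loop
        start = time.perf_counter()
        n = count.GetValue()                           # triggers the event loop
        elapsed = time.perf_counter() - start

        print(f"processed {n} events in {elapsed:.2f} s ({n / elapsed:.0f} events/s)")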
      • 11:40
        Discussion 20m
      • 12:00
        Vibe Plotting: ServiceX, RDataFrame, and an LLM 20m

        Can we teach an LLM to plot experimental HEP data? Modern particle physics workflows increasingly depend on a complex software ecosystem that connects large datasets, distributed data delivery, and user-level analysis tools. We demonstrate how a Large Language Model (LLM) can act as a coding assistant that bridges these components. Starting from a high-level user request—such as “plot jet transverse momentum in dataset X”—the LLM generates code that orchestrates ServiceX for columnar data delivery, configures RDataFrame operations, and produces the plot. The system dynamically adapts code generation to experimental conventions and schema variations, reducing the barrier to entry. We present examples of successful end-to-end workflows, discuss failure modes and reliability issues, and outline the challenges of integrating such assistants into production environments.

        Speaker: Gordon Watts (University of Washington (US))
      • 12:20
        NDMSPC: Addressing THnSparse Challenges in High-Dimensional Analysis 20m

        The NDMSPC project introduces an innovative THnSparse analysis framework for high-dimensional data, directly addressing the challenge of fitting N-dimensional histograms within memory. Our solution organizes the THnSparse structure into sub-chunks, optimizing memory efficiency and enabling scalable processing. Beyond numerical values, the framework allows bin content to be arbitrary objects, greatly expanding data representation capabilities. A key feature is the ability to execute user-defined jobs for each bin, providing powerful and flexible custom analysis. This presentation will highlight the NDMSPC framework's design, benefits, and applications in advanced scientific data analysis.

        Speaker: Martin Vala (Pavol Jozef Safarik University (SK))
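
        A minimal sketch of the plain THnSparse usage that NDMSPC builds on: a sparse N-dimensional histogram filled and projected in PyROOT. The choice of axes (pt, eta, centrality) and the toy filling are illustrative only.

        from array import array
        import random
        import ROOT

        nbins = array("i", [100, 40, 10])
        xmin = array("d", [0.0, -2.0, 0.0])
        xmax = array("d", [10.0, 2.0, 100.0])
        hs = ROOT.THnSparseD("hs", "pt:eta:centrality", 3, nbins, xmin, xmax)

        for _ in range(10000):                         # only occupied bins are stored
            point = array("d", [random.expovariate(1.0),
                                random.uniform(-2.0, 2.0),
                                random.uniform(0.0, 100.0)])
            hs.Fill(point)

        h_pt = hs.Projection(0)                        # 1D projection onto the pt axis
        print("filled bins:", hs.GetNbins(), "projected entries:", h_pt.GetEntries())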
      • 12:40
        Discussion 20m
    • 13:00 14:30
      Lunch Break
    • 14:30 16:30
      Afternoon session I: I/O and Storage I
      • 14:30
        ROOT I/O: Overview 20m
        Speaker: Jakob Blomer (CERN)
      • 14:50
        A first look at RFile 20m
        Speaker: Giacomo Parolini (CERN)
      • 15:10
        Discussion 20m
      • 15:30
        Performance Comparison of Lossless Compression Algorithms on CMS Data using ROOT TTree and RNTuple 20m

        The High-Level Trigger (HLT) of the Compact Muon Solenoid (CMS) processes event data in real time, applying selection criteria to reduce the data rate from hundreds of kHz to around 5 kHz for raw data offline storage. Efficient lossless compression algorithms, such as LZMA and ZSTD, are essential in minimizing these storage requirements while maintaining easy access for subsequent analysis. Multiple compression techniques are currently employed in the experiment's trigger system. In this study, we benchmark the performance of existing lossless compression algorithms used in HLT for RAW data storage, evaluating their efficiency in terms of compression ratio, processing time, and CPU/memory usage. In addition, we investigate the potential improvements given by the introduction, in the CMSSW software framework, of RNTuples: the next-generation data storage format developed within the ROOT ecosystem. We explore how the new format can enhance data read efficiency and reduce storage footprints compared to the traditional format. With the upcoming Phase-2 upgrade of the CMS experiment, efficient compression strategies will be essential to ensure sustainable data processing and storage capabilities. This work provides insights into the different compression algorithms and how the new RNTuples data format can contribute to addressing the future data challenges of the CMS experiment.

        Speaker: Simone Rossi Tisbeni (Universita Di Bologna (IT))
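
        A minimal sketch of the kind of comparison described above: rewriting the same dataset with different lossless compression settings and comparing file sizes. The input file and tree name are placeholders, not the CMS HLT RAW data.

        import os
        import ROOT

        algos = {
            "zstd5": (ROOT.RCompressionSetting.EAlgorithm.kZSTD, 5),
            "lzma9": (ROOT.RCompressionSetting.EAlgorithm.kLZMA, 9),
        }

        df = ROOT.RDataFrame("Events", "raw_sample.root")

        for tag, (algo, level) in algos.items():
            opts = ROOT.RDF.RSnapshotOptions()
            opts.fCompressionAlgorithm = algo          # lossless algorithm for the output baskets
            opts.fCompressionLevel = level
            df.Snapshot("Events", f"raw_{tag}.root", df.GetColumnNames(), opts)
            size_mb = os.path.getsize(f"raw_{tag}.root") / 1e6
            print(f"{tag}: {size_mb:.1f} MB")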
      • 15:50
        User Story: Integration of ROOT RNTuple to CMSSW's SoA data structures 20m

        The Struct of Arrays (SoA) layout separates structure fields into individual arrays, each holding a single attribute across all elements. Compared to the traditional Array of Structures (AoS), SoA improves data locality, vectorisation, cache usage, and memory bandwidth—critical for heterogeneous computing. In CMSSW, SoA is implemented using the Boost::PP library to provide a clean, user-friendly interface. This work outlines CMSSW’s SoA features, their integration with ROOT RNTuple, and highlights recent developments and planned extensions. We also present CMSSW’s requirements for the RNTuple format moving forward.

        Speaker: Markus Holzer (CERN)
      • 16:10
        Discussion 20m
    • 16:30 17:00
      Coffee Break
    • 17:00 18:30
      Afternoon Session II: Scientific Python and Language interoperability
      • 17:00
        An overview of ROOT in Python 10m
      • 17:10
        The ROOT Python Bindings Features 20m
        Speaker: Silia Taider (CERN)
      • 17:30
        Compiler Research in the Open: Connecting People, Projects, and Progress 20m

        The compiler-research.org initiative aims to make compiler research more visible, interconnected, and sustainable across academia and industry. Beyond research infrastructure, it pioneers a new model for open-source education and mentorship, offering remote, project-based training in advanced compiler and systems engineering for early-career professionals. This approach fills a critical gap in scientific software development by fostering deep technical mentorship, alignment with research goals, and long-term engagement through meaningful open-source contributions. Graduates emerge ready for research-driven roles in AI, high-performance computing, and scientific software engineering.

        This talk will present an overview of the project’s goals and evolving open-source architecture for organizing compiler knowledge in a community-driven way. We’ll explore how foundational compiler and systems tools foster modern data science; highlight recent achievements and ongoing collaborations; and share our vision for enabling cross-disciplinary progress through shared infrastructure and open, connected research.

        Speaker: Vassil Vasilev (Princeton University (US))
    • 09:00 10:15
      Morning Session I
      • 09:00
        The View from Deep Underground: DUNE’s Perspective on ROOT 30m

        The Deep Underground Neutrino Experiment (DUNE) plans to collect physics data for over 10 years, starting in 2029. The full DUNE design consists of four far detectors with multiple 10 kt fiducial-mass LArTPCs and a heterogeneous near detector complex, with 1300 km between them. This combination of technologies and readout time scales is expected to require large storage volumes, on the order of several GB stored per readout window in the far detector alone. DUNE currently relies on ROOT for a columnar binary data storage format, geometry modelling, fitting algorithms, and more. Before the start of data taking, DUNE is replacing its art framework with the new Phlex framework, with RNTuple support through the FORM I/O toolkit. Additionally, DUNE analysis tools continue to integrate more of the scientific Python ecosystem where possible. The development of ROOT 7 is therefore particularly timely for DUNE. This presentation will survey how ROOT performs in DUNE’s current software stack and where the future of ROOT can have a meaningful impact on this experiment.

        Speakers: Andrew Paul Olivier (Argonne National Laboratory), Jeremy Wolcott
      • 09:30
        Designing and evolving a (Py)ROOT-based software framework for SHiP 30m

        As a low-background experiment at the intensity frontier, the SHiP experiment faces many unique computing challenges compared to collider experiments. At the same time, the experiment is small, with very limited person power, and everything is evolving quickly in preparation for the TDRs, including core parts of the software, which need to be replaced without disruption in order to future-proof the framework for the coming two decades.
        ROOT and especially PyROOT are core parts of our framework and allow us to present a consistent, pythonic and stable interface to the packages used in the framework, abstracting individual components from the user.
        This talk will give an overview of the computing challenges, future directions and the current and future use of ROOT within the collaboration.

        Speaker: Oliver Lantwin (Universitaet Siegen (DE))
      • 10:00
        Innova Physics UPV: Acceleration Brings Hope 15m
        Speaker: Mateo Gajić Sales (Universitat Politecnica de Valencia)
    • 10:15 11:00
      Poster session
      • 10:16
        Cross compilation of the ROOT libraries 1m

        Cross compilation allows building binaries of a software package for different platforms without the need for different hardware and operating systems for each supported platform. The Julia binary registry uses it to ease the support of a number of platforms (ARM, x86, and PowerPC, combined with macOS, Linux, and Windows). Support for the Julia binary registry has motivated this work on a cross-compilation method for ROOT. The build process of ROOT involves building the rootcling executable used to generate the I/O class dictionary source code. Extending the ROOT build system to support cross compilation presents two challenges. The first is the need to build binaries for two different platforms, the host where the build is happening and the target. The second is to extend rootcling to allow the generation on one platform of code that must be run on another platform, in other words, cross-code-generation. So far, cross-compilation for a different C library (glibc/musl) and/or a different CPU (aarch64/x86_64) has been achieved. The compilation is done in an Alpine Linux container. We will present how this cross compilation is performed.

        Speaker: Philippe Gras (Université Paris-Saclay (FR))
      • 10:16
        Enhancing CMS data analysis with Distributed RDF on a high-rate platform 1m

        A flexible and dynamic analysis environment, capable of efficiently accessing and processing distributed data and resources, is essential for High Energy Physics (HEP) in both current and future LHC operations. This contribution presents the development and evolution of a scalable analysis platform that combines open-source standards with the computing resources provided by the Italian National Center for “HPC, Big Data and Quantum Computing” (ICSC).
        Its performance and scalability are assessed through a study of the CMS Drift Tubes (DT) muon detector performance in phase-space regions driven by analysis needs, leveraging the declarative and quasi-interactive framework of ROOT RDataFrame (RDF) with its distributed execution through Dask. Scaling and speed-up metrics are reported and discussed, highlighting the benefits of the new RDF-based approach with respect to the legacy serial workflow.

        Speaker: Tommaso Diotalevi (Universita e INFN, Bologna (IT))
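
        A minimal sketch of a distributed RDataFrame analysis offloaded to a Dask cluster, in the spirit of the platform described above. The scheduler address, file paths, and column names are placeholders, and the distributed-RDF entry point has evolved across recent ROOT releases, so the exact module path may differ.

        import ROOT
        from dask.distributed import Client

        client = Client("tcp://dask-scheduler.example.org:8786")   # hypothetical scheduler

        DistRDataFrame = ROOT.RDF.Experimental.Distributed.Dask.RDataFrame
        df = DistRDataFrame("Events",
                            ["root://eos.example.org//dt_run1.root",
                             "root://eos.example.org//dt_run2.root"],
                            daskclient=client,
                            npartitions=64)

        h = (df.Filter("muon_pt.size() > 0")
               .Define("leading_mu_pt", "muon_pt[0]")
               .Histo1D(("h_mu_pt", ";p_{T}^{#mu} [GeV];Entries", 100, 0., 200.),
                        "leading_mu_pt"))

        print("entries:", h.GetEntries())              # triggers the distributed event loop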
      • 10:16
        NDimensional Visualization in 3D Virtual Environments 1m

        Thanks to recent technological advances, immersive virtual reality has become a viable option in various areas, including data visualization. In this talk, we present NDMVR, a software solution for n-dimensional histogram visualization in web-based virtual reality. While NDMVR utilizes JSROOT for certain base functionalities, its distinct features are nested histogram support and a reactive approach to internal and user communication.

        Speaker: Daniel Chovanec (Technical University of Košice (SK))
      • 10:16
        RNTuple for SND@HL-LHC 1m

        With the latest ROOT version (6.36), the RNTuple API is stable, as is its on-disk format: data written today will be readable with future versions of ROOT. RNTuple is the successor of the TTree format, bringing many advantages such as faster read/write speed, type safety, use of modern smart pointers, and suitability for parallelized hardware and asynchronous operations. SND@HL-LHC will be an upgraded version of the current SND@LHC experiment that will operate during the High Luminosity phase of the LHC. As for other Run 4 experiments, using RNTuple seems like the soundest option.
        The first step for SND@HL-LHC is dropping the usage of TClonesArray within its software in favor of an event data model based on STL containers, optimised for I/O rate. The next step is tuning the SND@HL-LHC software stack to work with RNTuple.

        Speaker: Filippo Mei (Universita e INFN, Bologna (IT))
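
        A minimal sketch of the migration path described above: an event data model based on STL containers (std::vector instead of TClonesArray) that RDataFrame processes identically whether the data sit in a TTree today or in an RNTuple later. The dataset and column names are placeholders, and reading RNTuple through the same constructor assumes the support available in recent ROOT releases.

        import ROOT

        # The analysis code does not change; only the stored dataset does.
        df = ROOT.RDataFrame("events", "snd_sample.root")

        h = (df.Define("n_hits", "hit_energy.size()")   # hit_energy: std::vector<float>
               .Filter("n_hits > 0", "non-empty events")
               .Histo1D(("h_nhits", ";hits per event;Events", 50, 0, 50), "n_hits"))

        print("mean hits per event:", h.GetMean())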
      • 10:16
        The R3BRoot framework 1m

        The R3B (Reactions with Relativistic Radioactive Beams) experiment at GSI/FAIR is devoted to exploring the properties of nuclei located far from the valley of stability, with particular emphasis on their structure and reaction dynamics. To address these scientific objectives, a highly versatile reaction setup has been developed. This setup combines high detection efficiency, large geometrical acceptance, and excellent energy and momentum resolution, enabling kinematically complete measurements of a broad range of nuclear reactions induced by high-energy radioactive beams produced in inverse kinematics at relativistic velocities.
        To support the design, operation, and analysis of the R3B setup, R3BRoot, a dedicated simulation and analysis framework based on ROOT, has been developed. R3BRoot builds upon the FairSoft and FairRoot frameworks, providing a unified platform for detector simulations that covers the full chain from event generation and digitization to reconstruction, as well as both online and offline analysis. Its modular structure is organized around tasks and parameter containers for calibration, mapping, and the storage of reconstruction variables, enabling users to configure detector responses, calibration constants, and analysis strategies in a flexible and reproducible manner.
        A key feature of R3BRoot is the implementation of the Virtual Monte Carlo (VMC) interface, which allows the seamless use of different transport engines (e.g., GEANT3 and GEANT4) without requiring modifications to detector geometries or user code. This abstraction is particularly valuable for benchmarking simulations with different physics models and for ensuring long-term code sustainability. Furthermore, R3BRoot integrates ROOT I/O, visualization, histogramming, and online analysis tools, ensuring full compatibility with the workflows commonly employed in the nuclear and high-energy physics communities.
        In this contribution, we present the design principles and current capabilities of R3BRoot, illustrate its use in detector development and data analysis, and discuss recent improvements and applications within the R3B Collaboration.

        Speaker: Pablo Gonzalez Russel (University of Santiago de Compostela)
      • 10:17
        Batch generation for ML training 1m
        Speaker: Martin Foll (University of Oslo (NO))
      • 10:17
        GPU Accelerated Analyses 1m
        Speakers: Devajith Valaparambil Sreeramaswamy, Lukas Breitwieser (CERN)
      • 10:17
        JSROOT and Web-graphics 1m
        Speaker: Serguei Linev (GSI Darmstadt)
      • 10:17
        On the Path to ROOT 7 1m
        Speaker: Stephan Hageboeck (CERN)
      • 10:17
        RDF – (distributed) interactive analysis on 1000 cores 1m
        Speakers: Marta Czurylo (CERN), Stephan Hageboeck (CERN), Dr Vincenzo Eduardo Padulano (CERN)
      • 10:17
        Recent Cling Developments 1m
        Speaker: Devajith Valaparambil Sreeramaswamy
      • 10:17
        REve 1m

        REve is a rewrite of EVE for the ROOT-7 era, using modern C++ and relying on ROOT’s built-in http server for communication with GUI clients. Part of REve is also implemented in JavaScript and uses OpenUI5, JSROOT, and RenderCore as its foundation libraries.

        FireworksWeb is a CMS application built around REve. Several advanced legacy Fireworks features have been ported into REve in an experiment-independent manner, relying heavily on Cling, the C++ interpreter of ROOT: dynamic table views, handling of physics object collections, and filtering of objects within physics collections.

        Speakers: Alja Mrak Tadel (Univ. of California San Diego (US)), Matevz Tadel (Univ. of California San Diego (US)), Serguei Linev (GSI Darmstadt)
      • 10:17
        RNTuple – Helping to meet the HL-LHC Storage Challenge 1m
        Speakers: Giacomo Parolini (CERN), Jakob Blomer (CERN)
      • 10:17
        ROOT viewer for CERNBox 1m

        CERNBox is CERN’s cloud storage and collaboration service, supporting a wide range of physics data, general-purpose data, and personal files for the CERN community. Despite this broad usage, ROOT files remain a significant part of the platform’s data ecosystem, accounting for a notable share of stored volume and an even larger proportion of read/write activity. Improving the ROOT user experience in CERNBox therefore became a priority for the CERNBox team.

        In collaboration with the ROOT Project, we developed a new ROOT file viewer that integrates JSROOT directly into the CERNBox Web interface. The work also led to enhancements in CERNBox’s backend (Reva) including support for HTTP range requests for efficient on-demand loading of large files.

        Speaker: Diogo Castro (CERN)
      • 10:17
        ROOT – Development process, CI and distribution 1m
        Speaker: Danilo Piparo (CERN)
      • 10:17
        Teaching ROOT 1m
        Speaker: Marta Czurylo (CERN)
      • 10:17
        The new Python Interface 1m
        Speakers: Aaron Jomy (CERN), Silia Taider (CERN), Vipul Nellamakada (Ramaiah University of Applied Sciences (IN))
      • 10:17
        UHI for ROOT: Interfacing With Python Statistical Analysis Libraries 1m
        Speaker: Silia Taider (CERN)
    • 10:30 11:00
      Coffee Break
    • 11:00 13:00
      Morning Session II: Analysis II
      • 11:00
        LHCb Data Analysis Towards Run 5: Challenges and Needs for ROOT 20m

        LHCb’s data analysis model is evolving rapidly in preparation for Run 5 and the HL-LHC era. This talk will outline how LHCb analysts perform end-to-end analyses, from data production to final physics results, and how these workflows differ from those of the general-purpose detectors. We will discuss the increasing demands on scalability, interoperability, and usability of analysis tools, highlighting specific challenges encountered with large-scale ntuple processing and distributed analysis. Finally, we will present key requirements and priorities for ROOT to better support LHCb’s future analysis workflows and ensure efficient physics exploitation at the HL-LHC.

        Speaker: Jiahui Zhuo (Univ. of Valencia and CSIC (ES))
      • 11:20
        Usage of EDM4hep datamodel in RDataFrame 20m

        The Future Circular Collider analysis framework, FCCAnalyses, heavily relies on the Key4hep-provided datamodel, EDM4hep. The datamodel itself is described by a simple YAML file, and all required I/O classes are generated with the help of Podio, a datamodel generator and I/O layer. One of the limitations of the Podio-based datamodel is that its internal structure, while providing convenient containerization of physics objects and resolution of the relationships between them, has performance implications when running in the columnar, parallelized environment of RDataFrame. The presentation will discuss the approaches undertaken when interfacing ROOT's RDataFrame with the EDM4hep datamodel.

        Speaker: Juraj Smiesko (CERN)
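
        A minimal sketch of the columnar view discussed above: the members of a podio/EDM4hep collection appear to RDataFrame as parallel per-event columns, here accessed through aliases because of the dots in their names. The collection and member names are illustrative and may not match the exact EDM4hep branch layout.

        import ROOT

        df = ROOT.RDataFrame("events", "fcc_sample_edm4hep.root")    # placeholder input

        h_pt = (df.Alias("px", "ReconstructedParticles.momentum.x")  # flatten dotted members
                  .Alias("py", "ReconstructedParticles.momentum.y")
                  .Define("reco_pt", "sqrt(px*px + py*py)")          # element-wise on RVecs
                  .Histo1D(("h_pt", ";p_{T} [GeV];Entries", 100, 0., 100.), "reco_pt"))

        c = ROOT.TCanvas()
        h_pt.Draw()
        c.SaveAs("reco_pt.png")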
      • 11:40
        Discussion 20m
      • 12:00
        CMS FlashSim and ROOT RDataFrame for ML-Based Event Simulation 20m

        CMS is developing FlashSim, a machine learning–based framework that produces analysis-level (NANOAOD) events directly from generator-level inputs, reducing simulation costs by orders of magnitude. Efficient integration of preprocessing, inference, and output is essential, and ROOT RDataFrame provides the backbone of this workflow.

        Certain operations required for FlashSim are not yet part of the native RDataFrame API. These include event batching to optimize GPU utilization, efficient writing of ML-generated events through the RDataFrame interface, and support for oversampling to reuse inputs across multiple ML inference iterations. We implemented these features as custom extensions, but a native ROOT implementation would provide substantially better performance and scalability.

        We present FlashSim through a simplified demonstrator that illustrates these operations and motivates discussion with the ROOT community on possible solutions and future directions.

        Speaker: Filippo Cattafesta (Scuola Normale Superiore & INFN Pisa (IT))
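
        A minimal sketch of the batching step described above: pulling generator-level columns out of RDataFrame as NumPy arrays, chunking them into fixed-size batches for inference, and collecting the outputs. The column names are NanoAOD-style placeholders and the model call is a stand-in for the actual FlashSim networks.

        import numpy as np
        import ROOT

        df = ROOT.RDataFrame("Events", "gen_level.root")   # placeholder input
        cols = df.AsNumpy(["GenMET_pt", "GenMET_phi"])     # columnar, in-memory view
        inputs = np.stack([cols["GenMET_pt"], cols["GenMET_phi"]], axis=1)

        def fake_model(batch):                             # stand-in for the real network
            return batch[:, 0] * 0.95                      # e.g. a "reconstructed" MET

        batch_size = 4096                                  # sized to keep the GPU busy
        outputs = [fake_model(inputs[i:i + batch_size])
                   for i in range(0, len(inputs), batch_size)]
        flash_met = np.concatenate(outputs)
        print(flash_met[:5])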
      • 12:20
        Analyzing HL-LHC heavy-ion collision data with ALICE 20m

        On behalf of the ALICE Collaboration

        The ALICE detector has been taking data in the heavy-ion HL-LHC regime since the start of the LHC Run 3 Pb-Pb campaign in October 2023. Recording Pb-Pb collisions at 50 kHz and pp collisions at interaction rates of up to 1 MHz without a trigger means processing data in real time at rates up to two orders of magnitude higher than during LHC Run 2. To achieve this, ALICE underwent a major upgrade of all its detectors to make use of the increased luminosity provided by the LHC. The Inner Tracking System now consists entirely of Monolithic Active Pixel Sensors, which improves the pointing resolution. The Time Projection Chamber has been equipped with GEM-based readout chambers to support continuous readout at the target interaction rate. New forward trigger detectors were installed to allow the clean identification of interactions. The computing infrastructure and the software framework have been completely redesigned for continuous readout, synchronous reconstruction, asynchronous reconstruction incorporating calibration, and a much more efficient and productive way of analyzing the considerably larger volume of stored data. Reaching this point required key design choices in the evolution of the new software framework, choices which now constitute the main characteristics of the ALICE Online-Offline, O$^2$, computing framework. In this session, the main structure of O$^2$ and its organized analysis infrastructure, the Hyperloop train system, will be presented, highlighting the features that support the HL-LHC scenario and that provide effective, efficient, and productive access to the huge amount of collected data for the large community of analyzers that makes up the ALICE Collaboration.

        Speaker: Victor Gonzalez (Wayne State University (US))
      • 12:40
        Discussion 20m
    • 13:00 14:30
      Lunch Break
    • 14:30 16:30
      Afternoon session I: Maths and Statistical interpretation I
      • 14:30
        Statistical Inference in HEP - An Overview 20m
        Speaker: Jonas Eschle
      • 14:50
        RooFit overview 20m
        Speaker: Jonas Rembser (CERN)
      • 15:10
        Discussion 20m
      • 15:30
        Efficient Parameter Inference with MoreFit 20m

        Parameter inference via unbinned maximum likelihood fits is a central technique in particle physics. The large data samples available at the HL-LHC and advanced statistical methods require highly efficient fitting solutions. I will present one such solution, MoreFit, and discuss in detail the optimization techniques employed to make it as efficient as possible. MoreFit is based on compute graphs that are automatically optimized and compiled just in time. The inherent parallelism of the likelihood can be exploited on a wide range of platforms: GPUs can be utilized through an OpenCL backend, and CPUs through a backend based on LLVM and Clang for single- or multi-threaded execution, which in addition allows for SIMD vectorization. Finally, I will discuss the resulting performance using some illustrative benchmarks and compare it with several other fitting frameworks.

        Speaker: Christoph Michael Langenbruch (Heidelberg University (DE))
      • 15:50
        High performance analysis in CMS 20m

        The unprecedented volume of data and Monte Carlo simulations at the HL-LHC poses increasing challenges for particle physics analyses, demanding computation-efficient analysis workflows and reduced time to insight. We present a review of data and statistical analysis models and tools in CMS, with a particular emphasis on the challenges and solutions associated with the recent W mass measurement. We present a comprehensive analysis framework that leverages RDataFrame, Eigen, Boost Histograms, and the Python scientific ecosystem, with particular emphasis on the interoperability between ROOT and Python tools and output formats (ROOT and HDF5). Our implementation spans from initial event processing to final statistical interpretation, featuring optimizations in C++ and RDataFrame that achieve favorable performance scaling for billions of events. The framework incorporates interfaces to TensorFlow for fast and accurate complex multi-dimensional binned maximum likelihood calculations and robust minimization. We will discuss gaps and deficiencies in standard ROOT-based tools and workflows, how these were addressed with alternative integrated or standalone solutions, and possible directions for future improvement.

        Speaker: Josh Bendavid (CERN)
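
        A minimal sketch of the ROOT/Python interoperability pattern described above: producing columns with RDataFrame, filling a boost-histogram on the Python side, and storing the result in HDF5 alongside the ROOT outputs. The dataset and column names are placeholders for the actual analysis inputs.

        import boost_histogram as bh
        import h5py
        import ROOT

        df = ROOT.RDataFrame("Events", "wmass_skim.root")
        arr = (df.Filter("Muon_pt.size() > 0")
                 .Define("mu_pt", "Muon_pt[0]")
                 .AsNumpy(["mu_pt"]))

        h = bh.Histogram(bh.axis.Regular(100, 0.0, 100.0))
        h.fill(arr["mu_pt"])

        with h5py.File("histograms.h5", "w") as f:         # HDF5 output next to the ROOT files
            f.create_dataset("mu_pt/values", data=h.values())
            f.create_dataset("mu_pt/edges", data=h.axes[0].edges)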
      • 16:10
        Discussion 20m
    • 16:30 17:00
      Poster session: continuation of the morning poster session at the afternoon coffee break
    • 18:00 22:30
      Social programme
    • 09:00 10:30
      Morning Session I
      • 09:00
        Innovative Services for Federated Infrastructures - Deployment and Execution in the Computing Continuum 30m
      • 09:30
        The FairRoot framework 30m

        FairRoot is a software framework for detector simulation, reconstruction, and data analysis developed at GSI for the experiments at the upcoming FAIR accelerator complex. Although it started as a framework for the experiments at the FAIR project, it is meanwhile also used by several experiments outside of GSI. The framework is based on ROOT and the Virtual Monte Carlo (VMC) and allows fast prototyping as well as stable production usage. FairRoot provides basic services which can be adapted and extended by the users, including easy-to-use interfaces for I/O, geometry definition, and geometry handling, to name only a few. Modular reconstruction and/or analysis of simulated and/or real data can be implemented on the basis of extended ROOT TTasks. The design as well as the possible usage of the framework will be discussed.

        Speaker: Florian Uhlig (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
      • 10:00
        The Utilization of ROOT in the BESIII Offline Software System 30m

        The BESIII experiment has been operating since 2009 to study physics in the $\tau$-charm energy region, utilizing the high-luminosity BEPCII (Beijing Electron-Positron Collider II) double-ring collider. The BESIII Offline Software System (BOSS) is built upon the Gaudi framework, while also leveraging ROOT extensively across its various components.
        The BESIII experiment primarily utilizes ROOT for data management and storage. Most data, including simulated (rtraw), fully reconstructed (REC), and slimmed event data (DST), are stored in ROOT format, while RAW data remains in binary. The ROOT Conversion Service (RootCnvSvc), developed following Gaudi's specifications, enables bidirectional conversion between transient data objects (TDS) and persistent ROOT files. Calibration constants are serialized using ROOT and stored as BLOBs in databases, with functionalities for access provided by the BESIII Experiment Management Platform (BEMP). ROOT is also integral to validation workflows, where histogram-based comparisons serve as a critical benchmark for assessing both the functional accuracy and performance metrics of new software releases. Furthermore, the tag-based analysis software improves physics analysis efficiency by reorganizing DST events to boost the basket cache hit rate for non-contiguous data in ROOT. Many other tools, such as event display, also rely on ROOT.

        Speaker: Mr Jiaheng Zou (IHEP, Beijing)
    • 10:30 11:00
      Coffee Break
    • 11:00 13:00
      Morning Session II: I/O and Storage II
      • 11:00
        RNTuple advanced topics 20m
        Speaker: Giacomo Parolini (CERN)
      • 11:20
        Future of ROOT I/O 20m
        Speaker: Florine Willemijn de Geus (CERN/University of Twente (NL))
      • 11:40
        Discussion 20m
      • 12:00
        TDAnalyser - a modular framework for the analysis of test beam data 20m

        We present a new, modular framework for the processing of test beam data, and in particular for the R&D programme of future timing detectors.

        Based on a C++ architecture, it aims to normalise workflows for the analysis of detector performances through the definition of standard and user-defined analyses (e.g. time discrimination algorithms, intrinsic time resolution, or inter-channel correlations extraction).

        Its modular extension toolset gives analysts a platform for the definition of their workflow, including the definition of unpacking algorithms for oscilloscopes or DAQ-specific output formats, the combination of multiple sources into a global event content, or the extraction of calibration parameters from a fraction of datasets.

        With its RNTuple backbone for the management of the run- and event-granular payloads (including user-defined objects), it provides multiple I/O modules implementations for the preservation and standardisation of test beam datasets.

        Thanks to its TGeoManager-based geometry management system, it also allows the visual monitoring of channel occupancy, or the interfacing to external tracking algorithms to produce high-level information from simple waveforms or user-specific collections.

        In this talk, a large emphasis is placed on the implementation of multiple ROOT-based solutions, including our experience with the usage of RNTuple models for the internal event data model management.

        Speaker: Laurent Forthomme (AGH University of Krakow (PL))
      • 12:20
        The ROOT 7 Release Series 20m
        Speaker: Stephan Hageboeck (CERN)
      • 12:40
        Discussion 20m
    • 13:00 14:30
      Lunch Break
    • 14:30 16:30
      Afternoon session I: Maths and Statistical interpretation II
      • 14:30
        Speeding up large-scale likelihood fits with Automatic Differentiation in RooFit 20m
        Speaker: Jonas Rembser (CERN)
      • 14:50
        Highlights of xRooFit - The High-Level API for RooFit 20m

        xRooFit, created in 2020 and integrated into ROOT as an experimental feature in 2022, is an API and toolkit designed to augment RooFit's existing functionalities. Designed to work with any RooFit workspace, xRooFit adds features to assist with workspace creation, exploration, visualization, and modification. It also includes a suite of functionality for statistical analysis, including NLL construction and minimization management, dataset generation, profile-likelihood test statistic evaluation, and hypothesis testing (including automated CLs limits with both toys and asymptotic formulae). xRooFit is designed to work with any workspace, no matter how it is created, and works in both the C++ and Python ecosystems. The primary objective of the xRooFit project is to help users build better, smarter, easier-to-understand statistical models, and to perform statistical analysis tasks more efficiently.

        I will present an introduction to xRooFit, highlighting some of its many functionalities, and showcase the ways that xRooFit has already been seamlessly integrated into many ATLAS statistical analysis workflows.

        Speaker: Will Buttinger (Science and Technology Facilities Council STFC (GB))
      • 15:10
        Discussion 20m
      • 15:30
        Statistical inference in CMS with Combine 20m
        Speaker: Aliya Nigamova (Paul Scherrer Institute (CH))
      • 15:50
        Statistical Inference in ATLAS 20m
        Speaker: Tomas Dado (CERN)
      • 16:10
        Discussion 20m
    • 16:30 17:00
      Coffee Break
    • 17:00 18:30
      Afternoon Session II: Scientific Python and Language interoperability II
      • 17:00
        Advancing Python-C++ Interoperability in ROOT and beyond 20m
        Speakers: Aaron Jomy (CERN), Vipul Nellamakada (Ramaiah University of Applied Sciences (IN))
      • 17:20
        ROOT.jl, opening ROOT to the Julia programming language 20m

        In a single programming language, Julia provides ease of programming (like Python), high runtime performance (like C++), and exceptional code reusability (beyond comparison). These three properties make it the ideal programming language for high energy physics (HEP) data analysis, as demonstrated by several studies and confirmed by a growing interest from the HEP community. The ROOT.jl package provides a Julia API to the ROOT analysis framework. The number of covered ROOT classes is growing with each release. Its latest release covers the full Histogram and Geometry libraries, as well as support for TTree reading/writing and graphics display (TCanvas, TBrowser, and products of the Geometry libraries). It supports the Julia interactive help, based on the ROOT reference manual content. The ROOT.jl features will be presented in this talk. We will also show how automatic generation of code and help content is used to minimize the maintenance effort when integrating new ROOT releases.

        Speaker: Philippe Gras (Université Paris-Saclay (FR))
      • 17:40
        Zero-overhead ML training with ROOT 20m
        Speaker: Martin Foll (University of Oslo (NO))
      • 18:00
        Discussion 20m
    • 09:00 10:30
      Morning Session I
      • 09:00
        Market Surveillance at Scale: A Deployed ROOT Framework for Financial Integrity 30m

        We bring a core technology from high-energy physics to financial market surveillance. The HighLO project uses the ROOT framework to process terabytes of high-speed trading data where conventional tools fail. In this presentation, we'll showcase our deployed platform and demonstrate how it equips regulators with powerful tools to protect market integrity.

        Speakers: Prof. Joost Pennings (Wageningen University), Dr Philippe Debie (Wageningen University)
      • 09:30
        LIGO-VIRGO-KAGRA - Bridging Gravitational Wave and High-Energy Physics Software 30m

        Gravitational Wave (GW) Physics has entered a new era of Multi-Messenger Astronomy (MMA), characterized by increasing GW event detections from GW observatories at the LIGO-Virgo-KAGRA collaborations. This presentation will introduce the KAGRA experiment, outlining the current workflow from data collection to physics interpretation, and demonstrate the transformative role of machine learning (ML) in some GW data analysis.

        This talk also bridges advancements in computational techniques between fundamental research in Astrophysics and High-Energy Physics (HEP). Such initiatives may find some common interests in the context of next-generation trigger systems in HEP and advanced signal processing. Innovative solutions for addressing next-generation data analysis challenges will be presented, with a focus on the use of modern ML tools within the ROOT C++ Framework (CERN) and introducing Anaconda HEP-Forge for rapid software deployments. These tools, available as additional shared libraries in ROOT, integrate key requirements for typical astrophysical analyses as well as HEP physics analyses, such as complex filtering, vector manipulation, Kafka and other cloud data transfers, and complex tensor computations on both CPU and GPU technologies.

        Speaker: Marco Meyer-Conde (Tokyo City University (JP), University Of Illinois (US))
      • 10:00
        The outside view on ROOT 15m

        As a physicist devoted to medical physics research, I've never fitted a Higgs search plot nor run worldwide distributed analysis of data taken at CERN. Unexpectedly, I am nevertheless a heavy user of ROOT, albeit for more mundane applications related to my research with high-count-rate radiation detectors, as well as for teaching, at a small laboratory within CSIC/University of Valencia.
        Thus, I would like to present a view on ROOT from outside the CERN ecosystem, a potentially much larger user base. I will highlight the key strengths of ROOT for this kind of community and the main hurdles discouraging its cheerful adoption by newbie students, and identify the main aspects that I consider a priority to improve.
        Naturally, these might not align with the priorities in the HEP software roadmap. However, I consider that a broad adoption of ROOT, and finding a balance between inside-CERN and outsider support, pays off: it increases the likelihood of bug detection, the amount of feedback, and the number of contributors volunteering to improve this excellent open-source software.

        Speaker: Fernando Hueso Gonzalez
      • 10:15
        Beyond HEP: ROOT for Solar Resources 15m

        ROOT’s largest user base is in High-Energy Physics and consequently, most of its functionalities cater to this field. This, however, does not mean ROOT has limited or no place in other areas; its many powerful capabilities such as data storage in compressed binary files, data analysis, modelling and simulation, fitting, data display and even machine learning can be exploited in most other fields of science and technology. Astrophysics and Medical Physics are known areas where ROOT is employed, and a few finance applications show up occasionally in the ROOT Forum (https://root-forum.cern.ch/search?q=finance%20order%3Alatest).
        In this poster I will show examples of the use (and needs) of ROOT for over a decade in the field of solar resource assessment, that is, the analysis of solar radiation and other atmospheric phenomena that affect the available solar energy on the surface of the planet, for applications such as solar power production and climate studies.

        Speaker: Daniel Perez Astudillo
    • 10:30 11:00
      Coffee Break
    • 11:00 12:15
      Morning Session II
      • 11:00
        JSRoot in Web Applications for High-Energy Physics: From Interactive Visualizations to Particle Transport Simulations 15m

        This contribution highlights practical uses of ROOT and JSROOT for data exploration and web-based visualization. We show performance improvements achieved by replacing matplotlib with JSROOT in the CalibView app for the CMS PPS project. A key feature is the partial reading of ROOT files and handling plot data as JSON objects.

        We also present a lightweight approach using static websites served with GitHub Pages. By linking to public ROOT files, these pages can visualize large datasets controlled by URL parameters. We compare this method with WebAssembly HDF5 readers, which lack partial read support.

        Finally, we discuss JSROOT as the visualization layer of YAPTIDE, a web framework for particle transport simulations. We share insights on user experience and integration within large React-based applications.

        Lessons learned from both user and developer perspectives will be shared, including recent developments in uproot that complement web-based workflows.

        Speaker: Leszek Grzanka (AGH University of Krakow (PL))
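
        A minimal sketch of the "plot data as JSON" idea mentioned above: serializing a ROOT histogram with TBufferJSON so that a web page using JSROOT can draw it without opening the original ROOT file. The histogram here is a toy stand-in for the CalibView content.

        import ROOT

        h = ROOT.TH1D("h_calib", ";amplitude;entries", 100, -4, 4)
        h.FillRandom("gaus", 10000)                        # toy content

        json_str = str(ROOT.TBufferJSON.ConvertToJSON(h))  # JSON that JSROOT can parse and draw
        with open("h_calib.json", "w") as f:
            f.write(json_str)
        print("wrote", len(json_str), "bytes of JSON")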
      • 11:15
        CARTopiaX: an Agent-Based Simulation of CAR T-Cell Therapy built with ROOT and BioDynaMo 15m

        CAR T-cell therapy has revolutionized cancer immunotherapy by reprogramming patient T cells to recognize and eliminate malignant cells, achieving remarkable success in hematologic cancers. However, its application to solid tumors remains challenging due to the complexity and heterogeneity of the tumor microenvironment. CARTopiaX is an advanced agent-based model developed on BioDynaMo, an open-source, high-performance platform for large-scale biological simulations that incorporates ROOT. The model enables detailed exploration of interactions between CAR T-cells and solid tumor microenvironments, supporting hypothesis testing and data-driven discovery. By combining biological accuracy with computational efficiency and scalability, CARTopiaX provides researchers with a powerful tool to investigate CAR T-cell dynamics in silico, accelerating scientific progress and reducing reliance on costly and time-consuming experimental approaches.

        Speaker: Salvador de la Torre Gonzalez
      • 11:30
        Workshop discussions and closing 45m
        Speaker: Danilo Piparo (CERN)