ROOT Users Workshop 2025
The 13th ROOT Users Workshop will be held in Valencia, Spain. It is an occasion to discuss ROOT and its related activities today and to help shape the future of the project. The workshop features four and a half days of presentations, discussions, interactions, tutorials, and everything related to ROOT and its interplay with other HENP software projects. As the end of Run 3 approaches, the workshop is also an opportunity to reflect on how the ROOT project can help address the future computing challenges that the HL-LHC and other scientific experiments will pose. In particular, there will be four main topical areas of interest: Analysis, I/O & Storage, Math & Stats, and the Scientific Python Ecosystem. The event is foreseen to be in-person only, to promote social interaction and the exchange of ideas. To celebrate the community spirit, it will be complemented by a social dinner and a guided tour of the city of Valencia.
The participation of students is highly valued and encouraged. Hence, a limited number of reduced-fee registrations are available for this category. Make sure to register soon to secure your spot!
Event partners
-
1
Workshop registration
-
Morning Session I
-
2
Welcome from the organisers of the workshop
-
3
Welcome from the local organisers
-
4
An overview of the ROOT project
Speaker: Danilo Piparo (CERN)
-
5
Physics at the HL-LHC scale
-
Coffee Break
-
Morning Session II
-
6
ALICE
Speaker: Sandro Christian Wenzel (CERN)
-
7
ATLAS
-
8
CMS
Speaker: Philippe Canal (Fermi National Accelerator Lab. (US))
-
9
ROOT in the LHCb Software Framework - past, present and future
The LHCb Collaboration has been using ROOT in its software framework from the beginning, in a more or less efficient way. Here we summarize the evolution of our use of ROOT and then glimpse into the possible paths ahead in view of LHC Run 4 and LHCb Upgrade 2.
Speaker: Marco Clemencic (CERN)
-
Lunch Break
-
Afternoon session I
-
10
Computing in the 1990s
Speaker: Rene Brun
-
11
30 years of ROOT
Speaker: Fons Rademakers (CERN)
-
Afternoon session I: Training
-
12
ROOT Training Session
-
Coffee Break
-
Afternoon Session II: Training
-
13
Implementation of an $H\rightarrow \tau\tau$ Analysis within a ROOT-based C++ Framework using ATLAS Open Data
We present the integration of an analysis of the Higgs boson decaying to a pair of tau leptons into a ROOT-based C++ framework, making use of a recent ATLAS Open Data release. The workflow demonstrates how modern analysis strategies can be easily implemented within a modular structure that combines event selection and object reconstruction. Particular emphasis is placed on the use of ROOT for handling large datasets and producing complex histograms. The framework is designed to be accessible to students and researchers, providing an educational platform while remaining scalable for advanced physics studies. Results are illustrated with reconstructed di-tau invariant mass distributions, highlighting the pedagogical value of the ATLAS Open Data resources used with ROOT.
Speaker: Morvan Vincent (Univ. of Valencia and CSIC (ES))
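As a flavour of the kind of selection and histogramming such a framework wraps, here is a minimal RDataFrame sketch of a di-tau invariant mass plot. The file, tree and column names are illustrative placeholders, not the actual ATLAS Open Data schema used in the talk.

```cpp
// Hypothetical sketch: fill a di-tau invariant mass histogram with RDataFrame.
// File, tree and column names are placeholders, not the framework's real schema.
#include <ROOT/RDataFrame.hxx>
#include <Math/Vector4D.h>
#include <TCanvas.h>
#include <TROOT.h>

void ditau_mass_sketch()
{
   ROOT::EnableImplicitMT();   // parallelise the event loop over all cores
   ROOT::RDataFrame df("analysis", "opendata_htautau.root");

   auto withMass = df.Filter("n_taus >= 2", "at least two taus")
                     .Define("m_tautau",
                             [](float pt1, float eta1, float phi1, float m1,
                                float pt2, float eta2, float phi2, float m2) {
                                ROOT::Math::PtEtaPhiMVector t1(pt1, eta1, phi1, m1);
                                ROOT::Math::PtEtaPhiMVector t2(pt2, eta2, phi2, m2);
                                return (t1 + t2).M();
                             },
                             {"tau1_pt", "tau1_eta", "tau1_phi", "tau1_m",
                              "tau2_pt", "tau2_eta", "tau2_phi", "tau2_m"});

   auto h = withMass.Histo1D({"h_mtautau", ";m_{#tau#tau} [GeV];Events", 60, 0., 300.},
                             "m_tautau");

   TCanvas c;
   h->Draw();
   c.SaveAs("mtautau.png");
}
```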
14
ROOT software training
-
Morning Session I
-
15
WLCG
-
16
Behind the "root://": Current state and future challenges of Physics data storage at scaleSpeaker: Cedric Caffy (CERN)
-
17
FCCAnalyses: a ROOT-based Framework for End-to-End Physics Analysis at the FCC
We present an overview of the FCC analysis framework, designed to streamline user workflows from event processing to final results. Built on top of ROOT’s RDataFrame and the EDM4hep data model, FCCAnalyses provides a coherent environment for dataset processing, visualization, plotting, and statistical fitting. We will highlight the full analysis chain from the user perspective, including different execution modes, distributed computing integration (HTCondor, Slurm, Dask Gateway), and technical integrations such as TMVA and machine learning interfaces. The presentation will conclude with a discussion of current challenges and planned improvements.
Speakers: Jan Eysermans (Massachusetts Inst. of Technology (US)), Juraj Smiesko (CERN)
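For readers unfamiliar with the underlying pattern, the sketch below shows the bare RDataFrame Define/Filter/Snapshot chain that FCCAnalyses builds its stages on. It deliberately uses no FCCAnalyses helpers, and the column names are only EDM4hep-flavoured placeholders, not the exact schema.

```cpp
// Sketch of the Define/Filter/Snapshot pattern FCCAnalyses builds on.
// Column names ("ReconstructedParticles.momentum.x", ...) are illustrative,
// not the exact EDM4hep schema, and no FCCAnalyses helper functions are used.
#include <ROOT/RDataFrame.hxx>
#include <ROOT/RVec.hxx>

void fcc_style_sketch()
{
   ROOT::RDataFrame df("events", "fcc_sample_edm4hep.root");

   auto selected =
      df.Define("recop_pt",
                [](const ROOT::RVecF &px, const ROOT::RVecF &py) {
                   return ROOT::VecOps::sqrt(px * px + py * py);
                },
                {"ReconstructedParticles.momentum.x", "ReconstructedParticles.momentum.y"})
        .Filter([](const ROOT::RVecF &pt) { return ROOT::VecOps::Sum(pt > 20.f) >= 2; },
                {"recop_pt"}, "two objects with pT > 20 GeV");

   // Persist the reduced dataset for the plotting / fitting stages downstream.
   selected.Snapshot("events", "stage1_output.root", {"recop_pt"});
}
```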
-
Coffee Break
-
Morning Session II: Analysis I
-
18
Overview of the current RDataFrame efforts
Speakers: Marta Czurylo (CERN), Stephan Hageboeck (CERN), Dr Vincenzo Eduardo Padulano (CERN)
-
19
Benchmarking ROOT-Based Analysis Workflows for HL-LHC: The CMS INFN AF Use Case
For a few years, INFN has been developing R&D projects in the context of CMS high-rate analyses in view of the challenges of the HL-LHC phase, starting in 2030. Since then, several studies have been implemented and as a result a prototype for an analysis facility has been proposed, in compliance with CMS analysis frameworks (e.g. based on Coffea, ROOT RDataFrame), and also adopting open-source industry standards for flexibility and ease of use.
All this gave the opportunity to implement systematic studies on ROOT-based workflows at scale. In this context, we present a concrete example of an analysis exploiting ROOT RDataFrame.
Recently, new opportunities in Italy have opened up the possibility of enhancing the benchmarking approach towards specialised resources (named HPC bubbles) hosted at INFN Padova, featuring heterogeneous nodes: CPU nodes (192 cores, 1.5 TB RAM, NVMe storage) and GPU nodes (4 $\times$ NVIDIA H100 80 GB). Preliminary results will be presented, together with future plans for detailed studies of metrics such as event rate, I/O operations and energy efficiency, performance comparisons between TTree and RNTuple, and finally validation with a real CMS analysis.
Speaker: Luca Pacioselli (INFN, Perugia (IT))
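The TTree-versus-RNTuple comparison mentioned above boils down to timing identical reads over the two formats. A toy version of such a measurement is sketched below; the file and column names are invented, and a recent ROOT release is assumed, where RDataFrame opens either format through the same constructor.

```cpp
// Toy read-throughput comparison between a TTree and an RNTuple holding the
// same (hypothetical) "Events" data; file and column names are placeholders.
#include <ROOT/RDataFrame.hxx>
#include <TStopwatch.h>
#include <iostream>

void read_benchmark_sketch()
{
   auto time = [](ROOT::RDataFrame df, const char *label) {
      TStopwatch sw;
      auto mean = df.Mean("nMuon");   // triggers a full pass over the column
      std::cout << label << ": mean nMuon = " << *mean
                << ", real time " << sw.RealTime() << " s\n";
   };

   // Recent ROOT releases let RDataFrame open either format through the same
   // constructor, which keeps the comparison code identical for both.
   time(ROOT::RDataFrame("Events", "sample_ttree.root"),   "TTree  ");
   time(ROOT::RDataFrame("Events", "sample_rntuple.root"), "RNTuple");
}
```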
20
Discussion
-
21
Vibe Plotting: ServiceX, RDataFrame, and a LLM
Can we teach an LLM to plot experimental HEP data? Modern particle physics workflows increasingly depend on a complex software ecosystem that connects large datasets, distributed data delivery, and user-level analysis tools. We demonstrate how a Large Language Model (LLM) can act as a coding assistant that bridges these components. Starting from a high-level user request—such as “plot jet transverse momentum in dataset X”—the LLM generates code that orchestrates ServiceX for columnar data delivery, configures RDataFrame operations, and produces the plot. The system dynamically adapts code generation to experimental conventions and schema variations, reducing the barrier to entry. We present examples of successful end-to-end workflows, discuss failure modes and reliability issues, and outline the challenges of integrating such assistants into production environments.
Speaker: Gordon Watts (University of Washington (US))
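For context, the code an assistant is asked to emit for a request like "plot jet transverse momentum in dataset X" is typically as short as the sketch below. The ServiceX delivery step is assumed to have already produced a local ROOT file; the file, tree and column names are placeholders, not output of the actual system described in the talk.

```cpp
// Sketch of the RDataFrame half of the "plot jet pT" request. The columnar
// payload is assumed to have already been delivered (e.g. by ServiceX) into a
// local ROOT file; the file, tree and column names are placeholders.
#include <ROOT/RDataFrame.hxx>
#include <TCanvas.h>

void plot_jet_pt_sketch()
{
   ROOT::RDataFrame df("servicex_output", "delivered_sample.root");

   auto h = df.Histo1D({"h_jet_pt", ";Jet p_{T} [GeV];Jets", 100, 0., 500.}, "jet_pt");

   TCanvas c;
   c.SetLogy();
   h->Draw();
   c.SaveAs("jet_pt.png");
}
```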
22
NDMSPC: Addressing THnSparse Challenges in High-Dimensional Analysis
The NDMSPC project introduces an innovative THnSparse analysis framework for high-dimensional data, directly addressing the challenge of fitting N-dimensional histograms within memory. Our solution organizes the THnSparse structure into sub-chunks, optimizing memory efficiency and enabling scalable processing. Beyond numerical values, the framework allows bin content to be arbitrary objects, greatly expanding data representation capabilities. A key feature is the ability to execute user-defined jobs for each bin, providing powerful and flexible custom analysis. This presentation will highlight the NDMSPC framework's design, benefits, and applications in advanced scientific data analysis.
Speaker: Martin Vala (Pavol Jozef Safarik University (SK))
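As a reminder of the starting point NDMSPC addresses, the sketch below fills a plain THnSparse, whose sparse storage only allocates touched bins; NDMSPC's chunking and per-bin jobs go well beyond this. Observables, binning and the toy filling are invented for illustration.

```cpp
// Minimal THnSparse example illustrating the kind of high-dimensional,
// memory-sparse histogram NDMSPC chunks and post-processes; the observables
// and binning are invented for illustration.
#include <THnSparse.h>
#include <TRandom3.h>

void thnsparse_sketch()
{
   const Int_t    ndim   = 4;
   const Int_t    bins[] = {20, 20, 20, 20};
   const Double_t xmin[] = {0., -1., -3.14, 0.};
   const Double_t xmax[] = {10., 1., 3.14, 100.};

   THnSparseD h("h4d", "pT:eta:phi:multiplicity", ndim, bins, xmin, xmax);

   TRandom3 rng(42);
   for (int i = 0; i < 100000; ++i) {
      Double_t x[ndim] = {rng.Exp(2.), rng.Uniform(-1., 1.),
                          rng.Uniform(-3.14, 3.14), rng.Poisson(20) * 1.0};
      h.Fill(x);   // only touched bins are allocated, keeping memory bounded
   }

   // A typical downstream step: project out one axis for inspection.
   auto *proj = h.Projection(0);
   proj->Print();
}
```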
23
Discussion
-
Lunch Break
-
Afternoon session I: I/O and Storage I
-
24
ROOT I/O Status Overview
Speaker: Jakob Blomer (CERN)
-
25
A first look at RFile
Speaker: Giacomo Parolini (CERN)
-
26
Discussion
-
27
Performance Comparison of Lossless Compression Algorithms on CMS Data using ROOT TTree and RNTuple
The High-Level Trigger (HLT) of the Compact Muon Solenoid (CMS) processes event data in real time, applying selection criteria to reduce the data rate from hundreds of kHz to around 5 kHz for raw data offline storage. Efficient lossless compression algorithms, such as LZMA and ZSTD, are essential in minimizing these storage requirements while maintaining easy access for subsequent analysis. Multiple compression techniques are currently employed in the experiment's trigger system. In this study, we benchmark the performance of existing lossless compression algorithms used in HLT for RAW data storage, evaluating their efficiency in terms of compression ratio, processing time, and CPU/memory usage. In addition, we investigate the potential improvements given by the introduction, in the CMSSW software framework, of RNTuples: the next-generation data storage format developed within the ROOT ecosystem. We explore how the new format can enhance data read efficiency and reduce storage footprints compared to the traditional format. With the upcoming Phase-2 upgrade of the CMS experiment, efficient compression strategies will be essential to ensure sustainable data processing and storage capabilities. This work provides insights into the different compression algorithms and how the new RNTuples data format can contribute to addressing the future data challenges of the CMS experiment.
Speaker: Simone Rossi Tisbeni (Universita Di Bologna (IT))
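To make the setup concrete, the sketch below writes two copies of the same (hypothetical) tree with LZMA and ZSTD compression so that on-disk size and read-back time can be compared. The input file, tree name and compression levels are illustrative, not the CMS HLT production configuration.

```cpp
// Toy comparison of two lossless compression settings on a copy of the same
// tree. The input file/tree are placeholders and the levels are illustrative,
// not the CMS HLT production configuration.
#include <TFile.h>
#include <TTree.h>
#include <Compression.h>
#include <iostream>
#include <memory>

void compression_sketch()
{
   std::unique_ptr<TFile> in(TFile::Open("raw_events.root"));
   auto *tree = in->Get<TTree>("Events");

   auto writeCopy = [&](const char *name, int settings) {
      TFile out(name, "RECREATE", "", settings);
      tree->CloneTree(-1)->Write();   // full copy, compressed with the output file's settings
      out.Close();
      std::cout << name << " written\n";
   };

   writeCopy("events_lzma.root", ROOT::CompressionSettings(ROOT::kLZMA, 4));
   writeCopy("events_zstd.root", ROOT::CompressionSettings(ROOT::kZSTD, 5));
   // Compare on-disk sizes and read-back timing afterwards to judge the trade-off.
}
```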
28
User Story: Integration of ROOT RNTuple to CMSSW's SoA data structures
The Struct of Arrays (SoA) layout separates structure fields into individual arrays, each holding a single attribute across all elements. Compared to the traditional Array of Structures (AoS), SoA improves data locality, vectorisation, cache usage, and memory bandwidth—critical for heterogeneous computing. In CMSSW, SoA is implemented using the Boost::PP library to provide a clean, user-friendly interface. This work outlines CMSSW’s SoA features, their integration with ROOT RNTuple, and highlights recent developments and planned extensions. We also present CMSSW’s requirements for the RNTuple format moving forward.
Speaker: Markus Holzer (CERN)
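The AoS-versus-SoA distinction the abstract refers to can be illustrated with a hand-written example; CMSSW generates such layouts with Boost::PP macros, so the structs below show only the underlying idea, not that interface, and the field names are invented.

```cpp
// Hand-written illustration of the AoS vs SoA layouts discussed in the
// abstract. CMSSW generates its SoA classes with Boost::PP macros; this is
// only the underlying idea, not that interface.
#include <cmath>
#include <cstddef>
#include <vector>

// Array of Structures: one contiguous record per element.
struct HitAoS {
   float x, y, z;
   int   detId;
};
using HitsAoS = std::vector<HitAoS>;

// Structure of Arrays: one contiguous array per attribute, which is what
// makes vectorised access and column-wise (RNTuple-style) I/O efficient.
struct HitsSoA {
   std::vector<float> x, y, z;
   std::vector<int>   detId;

   std::size_t size() const { return x.size(); }
   void push_back(float px, float py, float pz, int id)
   {
      x.push_back(px); y.push_back(py); z.push_back(pz); detId.push_back(id);
   }
};

// Column-wise loops touch only the memory they need: the radius computation
// reads x and y but never loads z or detId into cache.
inline void radii(const HitsSoA &hits, std::vector<float> &out)
{
   out.resize(hits.size());
   for (std::size_t i = 0; i < hits.size(); ++i)
      out[i] = std::sqrt(hits.x[i] * hits.x[i] + hits.y[i] * hits.y[i]);
}
```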
29
Discussion
-
Coffee Break
-
Afternoon Session II: Scientific Python and Language interoperability
-
30
An overview of ROOT in Python
-
31
The ROOT Python bindings features
Speaker: Silia Taider (CERN)
-
32
Compiler Research (provisional title)
Speaker: Vassil Vasilev (Princeton University (US))
-
33
Discussion
-
Morning Session I
-
34
The View from Deep Underground: DUNE’s Perspective on ROOT
The Deep Underground Neutrino Experiment (DUNE) plans to collect physics data for over 10 years, starting in 2029. The full DUNE design consists of four far detectors with multiple 10 kt fiducial mass LArTPCs and a heterogeneous near detector complex, with 1300 km between them. This combination of technologies and readout time scales is expected to require large storage volumes, with on the order of several GB stored per readout window in the far detector alone. DUNE currently relies on ROOT for a columnar binary data storage format, geometry modelling, fitting algorithms, and more. Before the start of data taking, DUNE is replacing its art framework with the new Phlex framework, with RNTuple support through the FORM I/O toolkit. Additionally, DUNE analysis tools continue to integrate more use of the scientific Python ecosystem where possible. Therefore, the development of ROOT 7 is particularly timely for DUNE. This presentation will survey how ROOT performs in DUNE's current software stack and where the future of ROOT can have a meaningful impact on this experiment.
Speaker: Andrew Paul Olivier (Argonne National Laboratory)
35
Introduction to the JUNO offline data processing software and its event data model
The Jiangmen Underground Neutrino Observatory (JUNO) is a next-generation neutrino experiment in south China currently in the commissioning stage. JUNO’s primary objective is to determine the neutrino mass ordering (NMO) mainly by detecting reactor antineutrinos.
Unlike typical accelerator experiments, JUNO detects physics events (such as the inverse beta decay of reactor antineutrinos) that generate multiple trigger signals correlated in time. This feature adds complexity to the design of its offline data processing software, especially since time-correlated analysis plays a vital role in analyzing JUNO data.
This contribution will briefly introduce the design of the JUNO offline software (JUNOSW) that implements the offline data processing and MC simulation of JUNO. In particular, we will introduce JUNOSW's event data model (EDM), based on TObject, which comprehensively correlates event objects by implementing a smart pointer using ROOT's TProcessID mechanism.
Speaker: Dr Teng LI (Shandong University, CN)
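ROOT's own TRef class is the textbook use of the TProcessID mechanism mentioned above; JUNOSW implements its own smart pointer on top of the same machinery, so the sketch below only illustrates the mechanism, not the JUNO EDM classes.

```cpp
// Illustration of the TProcessID-based cross-referencing that JUNOSW's smart
// pointer builds on, shown with ROOT's own TRef rather than the JUNO EDM.
#include <TNamed.h>
#include <TRef.h>
#include <iostream>

void tref_sketch()
{
   auto *hit = new TNamed("hit0", "a calibrated hit");   // stands in for an EDM object

   TRef ref;
   ref = hit;   // TProcessID assigns the object a unique ID and the TRef stores that ID

   // The reference is persistable: after writing and reading back both objects,
   // GetObject() resolves the stored ID through TProcessID to the object again.
   auto *resolved = static_cast<TNamed *>(ref.GetObject());
   std::cout << "resolved reference to: " << resolved->GetName() << '\n';
}
```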
36
Market Surveillance at Scale: A Deployed ROOT Framework for Financial Integrity
We bring a core technology from high-energy physics to financial market surveillance. The HighLO project uses the ROOT framework to process terabytes of high-speed trading data where conventional tools fail. In this presentation, we'll showcase our deployed platform and demonstrate how it equips regulators with powerful tools to protect market integrity.
Speakers: Prof. Joost Pennings (Wageningen University), Dr Philippe Debie (Wageningen University)
-
Coffee Break
-
Morning Session II: Analysis II
-
37
Future of RDataFrame - R&D projects ideas
Speakers: Marta Czurylo (CERN), Stephan Hageboeck (CERN), Dr Vincenzo Eduardo Padulano (CERN)
-
38
Usage of EDM4hep datamodel in RDataFrame
The Future Circular Collider analysis framework, FCCAnalyses, heavily relies on the datamodel provided by Key4hep: EDM4hep. The datamodel itself is described by a simple YAML file, and all required I/O classes are generated with the help of Podio, a datamodel generator and I/O layer. One of the limitations of the Podio-based datamodel is that its internal structure, while providing convenient containerization of physics objects and resolution of relationships between them, has performance implications when running in the columnar, parallelized environment of RDataFrame. The presentation will discuss the approaches undertaken when interfacing ROOT's RDataFrame with the EDM4hep datamodel.
Speakers: Jan Eysermans (Massachusetts Inst. of Technology (US)), Juraj Smiesko (CERN)
39
Discussion
-
40
CMS FlashSim and ROOT RDataFrame for ML-Based Event Simulation
CMS is developing FlashSim, a machine learning–based framework that produces analysis-level (NANOAOD) events directly from generator-level inputs, reducing simulation costs by orders of magnitude. Efficient integration of preprocessing, inference, and output is essential, and ROOT RDataFrame provides the backbone of this workflow.
Certain operations required for FlashSim are not yet part of the native RDataFrame API. These include event batching to optimize GPU utilization, efficient writing of ML-generated events through the RDataFrame interface, and support for oversampling to reuse inputs across multiple ML inference iterations. We implemented these features as custom extensions, but a native ROOT implementation would provide substantially better performance and scalability.
We present FlashSim through a simplified demonstrator that illustrates these operations and motivates discussion with the ROOT community on possible solutions and future directions.
Speaker: Filippo Cattafesta (Scuola Normale Superiore & INFN Pisa (IT))
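A skeleton of the write-out path described above is shown below: ML-generated columns are attached with Define and persisted with Snapshot. The inference function is a stub and all names are invented; batching, GPU execution and oversampling are precisely the pieces the talk argues need native RDataFrame support.

```cpp
// Skeleton of the FlashSim-style write-out path: ML-generated columns are
// attached with Define and persisted with Snapshot. The "inference" is a stub
// and all file/column names are placeholders.
#include <ROOT/RDataFrame.hxx>
#include <ROOT/RVec.hxx>

// Placeholder for the per-event model call (the real framework runs a network).
ROOT::RVecF fakeInference(const ROOT::RVecF &gen_pt)
{
   return gen_pt * 0.98f;   // pretend the model smears/scales generator pT
}

void flashsim_writeout_sketch()
{
   ROOT::RDataFrame df("Events", "gen_level_input.root");

   df.Define("FlashJet_pt", fakeInference, {"GenJet_pt"})
     .Snapshot("Events", "flashsim_nanoaod.root", {"FlashJet_pt", "GenJet_pt"});
}
```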
41
Analyzing HL-LHC heavy-ion collision data with ALICE
Victor Gonzalez, Wayne State University (US), on behalf of the ALICE Collaboration
The ALICE detector has been taking data in the heavy-ion HL-LHC regime since the start of the LHC Run 3 Pb--Pb campaign in October 2023. Recording Pb--Pb collisions at 50 kHz and pp collisions at up to 1 MHz interaction rates without a trigger means processing data in real time at rates up to two orders of magnitude higher than during LHC Run 2. To achieve this, ALICE underwent a major upgrade of all detectors to make use of the increased luminosity provided by the LHC. The Inner Tracking System now consists entirely of Monolithic Active Pixel Sensors, which improves the pointing resolution. The Time Projection Chamber has been equipped with GEM-based readout chambers to support continuous readout at the target interaction rate. New forward trigger detectors were installed to allow the clean identification of interactions. The computing infrastructure and the software framework have been completely redesigned for continuous readout, synchronous reconstruction, asynchronous reconstruction incorporating calibration, and a much more efficient and productive way of analyzing the considerable increase in stored data. To reach this point, the evolution of the new software framework required key design choices which now constitute the main characteristics of the ALICE Online-Offline (O$^2$) computing framework. In this session, the main structure of O$^2$ as well as its organized analysis infrastructure, the Hyperloop train system, will be presented, highlighting the features which support the HL-LHC scenario and provide effective, efficient, and productive access to the huge amount of collected data for the large community of analyzers that constitutes the ALICE Collaboration.
Speaker: Victor Gonzalez (Wayne State University (US))
42
Discussion
-
Lunch Break
-
Afternoon session I: Maths and Statistical interpretation I
-
43
RooFit overview
Speaker: Jonas Rembser (CERN)
-
44
A look at statistical inference in HEP (Provisional title)
Speaker: Jonas Eschle
-
45
Discussion
-
46
Efficient Parameter Inference with MoreFit
Parameter inference via unbinned maximum likelihood fits is a central technique in particle physics. The large data samples available at the HL-LHC and advanced statistical methods require highly efficient fitting solutions. I will present one such solution, MoreFit, and discuss in detail the optimization techniques employed to make it as efficient as possible. MoreFit is based on compute graphs that are automatically optimized and compiled just in time. The inherent parallelism of the likelihood can be exploited on a wide range of platforms: GPUs can be utilized through an OpenCL backend, and CPUs through a backend based on LLVM and Clang for single- or multithreaded execution, which in addition allows for SIMD vectorization. Finally, I will discuss the resulting performance using some illustrative benchmarks and compare with several other fitting frameworks.
Speaker: Christoph Michael Langenbruch (Heidelberg University (DE))
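As a reference point for what is being optimised, the sketch below runs a standard unbinned maximum likelihood fit with RooFit on a toy Gaussian sample. This is emphatically not MoreFit's API; the observable, parameters and sample size are invented, and the snippet only anchors the kind of fit such benchmarks compare.

```cpp
// Reference point for the fits being benchmarked: a standard unbinned maximum
// likelihood fit in RooFit. This is not MoreFit's API; observable, parameters
// and dataset are invented toy values.
#include <RooRealVar.h>
#include <RooGaussian.h>
#include <RooDataSet.h>
#include <RooFitResult.h>
#include <RooGlobalFunc.h>

void unbinned_fit_sketch()
{
   RooRealVar mass("mass", "candidate mass", 5.0, 5.6);
   RooRealVar mean("mean", "mean", 5.28, 5.2, 5.35);
   RooRealVar sigma("sigma", "width", 0.02, 0.005, 0.1);
   RooGaussian model("model", "signal", mass, mean, sigma);

   // Toy dataset generated from the model itself, then fit back.
   auto data = model.generate(mass, 100000);
   auto res  = model.fitTo(*data, RooFit::Save(), RooFit::PrintLevel(-1));
   res->Print();
}
```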
47
High performance analysis in CMS
The unprecedented volume of data and Monte Carlo simulations at the HL-LHC poses increasing challenges for particle physics analyses, demanding computation-efficient analysis workflows and reduced time to insight. We present a review of data and statistical analysis models and tools in CMS, with a particular emphasis on the challenges and solutions associated with the recent W mass measurement. We present a comprehensive analysis framework that leverages RDataFrame, Eigen, Boost Histograms, and the Python scientific ecosystem, with particular emphasis on the interoperability between ROOT and Python tools and output formats (ROOT and HDF5). Our implementation spans from initial event processing to final statistical interpretation, featuring optimizations in C++ and RDataFrame that achieve favorable performance scaling for billions of events. The framework incorporates interfaces to TensorFlow for fast and accurate complex multi-dimensional binned maximum likelihood calculations and robust minimization. We will discuss gaps and deficiencies in standard ROOT-based tools and workflows, how these were addressed with alternative integrated or standalone solutions, and possible directions for future improvement.
Speaker: Josh Bendavid (CERN)
48
Discussion
-
Morning Session I
-
49
Innovative Services for Federated Infrastructures - Deployment and Execution in the Computing Continuum
-
50
The FairRoot framework
FairRoot is a software framework for detector simulation, reconstruction, and data analysis developed at GSI for the experiments at the upcoming FAIR accelerator complex. Started as a framework for the experiments of the FAIR project, it is meanwhile also used by several experiments outside of GSI. The framework is based on ROOT and the Virtual Monte Carlo (VMC) and allows fast prototyping as well as stable usage. FairRoot provides basic services which can be adapted and extended by the users; this includes easy-to-use interfaces for I/O, geometry definition and handling, to name only a few. Modular reconstruction and/or analysis of simulated and/or real data can be implemented on the basis of extended ROOT TTasks. The design as well as the possible usage of the framework will be discussed.
Speaker: Florian Uhlig (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
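The "extended ROOT TTasks" mentioned above refer to ROOT's chained-task pattern, sketched below with plain TTask. FairRoot's actual base class, FairTask, adds I/O and parameter handling on top of this idea; the task names here are invented.

```cpp
// Sketch of the chained-task pattern the abstract refers to, using plain ROOT
// TTask. FairRoot's real base class (FairTask) adds I/O and parameter handling
// on top of this idea; the task names here are invented.
#include <TTask.h>
#include <iostream>

class DigitizerTask : public TTask {
public:
   DigitizerTask() : TTask("Digitizer", "toy digitization stage") {}
   void Exec(Option_t * = "") override { std::cout << "digitizing...\n"; }
};

class ClusterTask : public TTask {
public:
   ClusterTask() : TTask("Clusterer", "toy clustering stage") {}
   void Exec(Option_t * = "") override { std::cout << "clustering...\n"; }
};

void fairroot_task_sketch()
{
   TTask reco("Reco", "toy reconstruction chain");
   reco.Add(new DigitizerTask());
   reco.Add(new ClusterTask());

   reco.ExecuteTasks("");   // runs the sub-tasks in the order they were added
}
```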
51
The Utilization of ROOT in the BESIII Offline Software System
The BESIII experiment has been operating since 2009 to study physics in the $\tau$-charm energy region, utilizing the high-luminosity BEPCII (Beijing Electron-Positron Collider II) double-ring collider. The BESIII Offline Software System (BOSS) is built upon the Gaudi framework, while also leveraging ROOT extensively across its various components.
The BESIII experiment primarily utilizes ROOT for data management and storage. Most data, including simulated (rtraw), fully reconstructed (REC), and slimmed event data (DST), are stored in ROOT format, while RAW data remain in binary. The ROOT Conversion Service (RootCnvSvc), developed following Gaudi's specifications, enables bidirectional conversion between transient data store (TDS) objects and persistent ROOT files. Calibration constants are serialized using ROOT and stored as BLOBs in databases, with access functionalities provided by the BESIII Experiment Management Platform (BEMP). ROOT is also integral to validation workflows, where histogram-based comparisons serve as a critical benchmark for assessing both the functional accuracy and performance metrics of new software releases. Furthermore, the tag-based analysis software improves physics analysis efficiency by reorganizing DST events to boost the basket cache hit rate for non-contiguous data in ROOT. Many other tools, such as the event display, also rely on ROOT.
Speaker: Mr Jiaheng Zou (IHEP, Beijing)
-
Coffee Break
-
Morning Session II: I/O and Storage II
-
52
RNTuple advanced topics
-
53
Future of ROOT I/O
-
54
Discussion
-
55
TDAnalyser - a modular framework for the analysis of test beam data
We present a new, modular framework for the processing of test beam data, in particular for the R&D programme of future timing detectors.
Based on a C++ architecture, it aims to normalise workflows for the analysis of detector performance through the definition of standard and user-defined analyses (e.g. time discrimination algorithms, intrinsic time resolution, or inter-channel correlation extraction).
Its modular extension toolset gives analysts a platform for the definition of their workflow, including the definition of unpacking algorithms for oscilloscopes or DAQ-specific output formats, the combination of multiple sources into a global event content, or the extraction of calibration parameters from a fraction of datasets.
With its RNTuple backbone for the management of run- and event-granular payloads (including user-defined objects), it provides multiple I/O module implementations for the preservation and standardisation of test beam datasets. Thanks to its TGeoManager-based geometry management system, it also allows the visual monitoring of channel occupancy, or the interfacing to external tracking algorithms to produce high-level information from simple waveforms or user-specific collections. In this talk, a large emphasis is given to the implementation of multiple ROOT-based solutions, including our experience with the usage of RNTuple models for the internal event data model management.
Speaker: Laurent Forthomme (AGH University of Krakow (PL))
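A minimal example of the RNTuple backbone mentioned above is sketched here: a model with a couple of toy fields is written out event by event. Field names are invented, and the classes are assumed to live in ROOT::Experimental, as in current releases (they are being promoted to the ROOT namespace in newer versions).

```cpp
// Minimal RNTuple write-out, illustrating the kind of backbone the framework
// uses for event-granular payloads. Field names are invented; the namespace
// is assumed to be ROOT::Experimental, as in current ROOT releases.
#include <ROOT/RNTupleModel.hxx>
#include <ROOT/RNTupleWriter.hxx>
#include <vector>

void rntuple_write_sketch()
{
   using ROOT::Experimental::RNTupleModel;
   using ROOT::Experimental::RNTupleWriter;

   auto model   = RNTupleModel::Create();
   auto fldAmp  = model->MakeField<std::vector<float>>("amplitudes");
   auto fldTime = model->MakeField<double>("triggerTime");

   auto writer = RNTupleWriter::Recreate(std::move(model), "events", "testbeam.root");
   for (int i = 0; i < 1000; ++i) {
      *fldAmp  = {0.1f * i, 0.2f * i};   // toy waveform amplitudes
      *fldTime = 25.0 * i;               // toy timestamp in ns
      writer->Fill();
   }
   // the writer commits the dataset to disk when it goes out of scope
}
```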
56
FORM: Flexible I/O and Storage for the Deep Underground Neutrino Experiment
The Deep Underground Neutrino Experiment (DUNE) will deploy four 10 kt fiducial mass time projection chambers in order to study accelerator neutrinos, supernova neutrinos, beyond-the-standard-model physics, atmospheric neutrinos, and solar neutrinos. Reconstructing data in varying time domains over the nearly 400,000 channels needed to monitor this volume presents the complex challenge of processing this information in flexible data-grain sizes suitable for affordable compute nodes. Phlex, a framework under development, will address this challenge using new I/O and storage infrastructure called the Fine-grained Object Reading/writing Model (FORM). In addition to providing flexible, fine-grained I/O, FORM will support several storage technologies, including ROOT's legacy TTree and its replacement, RNTuple. This poster will show first results for RNTuple storage for data of varying grain sizes.
Speaker: Andrew Paul Olivier (Argonne National Laboratory)
57
Discussion
-
Lunch Break
-
Afternoon session I: Maths and Statistical interpretation II
-
58
Talk 1
-
59
Highlights of xRooFit - The High-Level API for RooFit
xRooFit, created in 2020 and integrated into ROOT as an experimental feature in 2022, is an API and toolkit designed to augment RooFit's existing functionalities. Designed to work with any RooFit workspace, xRooFit adds features to assist with workspace creation, exploration, visualization, and modification. It also includes a suite of functionality for statistical analysis, including NLL construction and minimization management, dataset generation, profile-likelihood test statistic evaluation, and hypothesis testing (including automated CLs limits with both toys and asymptotic formulae). xRooFit is designed to work with any workspace, no matter how it is created, and works in both C++ and python ecosystems. The primary objective of the xRooFit project is to help users build better, smarter, easier to understand statistical models, and to perform statistical analysis tasks more efficiently.
I will present an introduction to xRooFit, highlighting some of its many functionalities, and showcase the ways that xRooFit has already been seamlessly integrated into many ATLAS statistical analysis workflows.
Speaker: Will Buttinger (Science and Technology Facilities Council STFC (GB))
60
Discussion
-
61
CMS Combine (provisional title)
Speaker: Aliya Nigamova (Paul Scherrer Institute (CH))
-
62
ATLAS statistical inference (provisional title)
Speaker: Tomas Dado (CERN)
-
63
Discussion
-
Coffee Break
-
Afternoon Session II: Scientific Python and Language interoperability II
-
64
The software stack for generic dynamic Python bindings
Speaker: Aaron Jomy (CERN)
-
65
ROOT.jl, opening ROOT to the Julia programming language
In a single programming language, Julia provides ease of programming (like Python), high runtime performance (like C++), and exceptional code reusability (without equal). These three properties make it the ideal programming language for high energy physics (HEP) data analysis, as demonstrated by several studies and confirmed by a growing interest from the HEP community. The ROOT.jl package provides a Julia API to the ROOT analysis framework. The number of covered ROOT classes grows with each release. Its latest release covers the full Histogram and Geometry libraries, as well as support for TTree reading/writing and graphics display (TCanvas, TBrowser, and products of the Geometry libraries). It supports Julia's interactive help, based on the content of the ROOT reference manual. The ROOT.jl features will be presented in this talk, together with how automatic generation of code and help content is used to minimize the maintenance effort of integrating new ROOT releases.
Speaker: Philippe Gras (Université Paris-Saclay (FR))
66
Hands-on Pythonisations tutorial
-
67
Discussion
-
Morning Session I
-
68
Designing and evolving a (Py)ROOT-based software framework for SHiP
As a low-background experiment at the intensity frontier, the SHiP experiment faces many unique computing challenges compared to collider experiments. At the same time, the experiment is small, with very limited person power, and everything is evolving quickly in preparation for the TDRs, including core parts of the software, which need to be replaced without disruption in order to future-proof the framework for the coming two decades.
ROOT and especially PyROOT are core parts of our framework and allow us to present a consistent, pythonic and stable interface to the packages used in the framework, abstracting individual components from the user.
This talk will give an overview of the computing challenges, future directions, and the current and future use of ROOT within the collaboration.
Speaker: Oliver Lantwin (Universitaet Siegen (DE))
69
Bridging Gravitational Wave and High-Energy Physics Software
Gravitational Wave (GW) Physics has entered a new era of Multi-Messenger Astronomy (MMA), characterized by an increasing number of GW event detections from the observatories of the LIGO-Virgo-KAGRA collaborations. This presentation will introduce the KAGRA experiment, outlining the current workflow from data collection to physics interpretation, and demonstrate the transformative role of machine learning (ML) in GW data analysis.
This talk also bridges advancements in computational techniques between fundamental research in Astrophysics and High-Energy Physics (HEP). Such initiatives may find common interests in the context of next-generation trigger systems in HEP and advanced signal processing. Innovative solutions for addressing next-generation data analysis challenges will be presented, with a focus on the use of modern ML tools within the ROOT C++ framework (CERN) and an introduction to Anaconda HEP-Forge for rapid software deployments. These tools, available as additional shared libraries in ROOT, integrate key requirements for typical astrophysical analyses, but also HEP physics analyses, such as complex filtering, vector manipulation, Kafka and other cloud data transfers, and complex tensor computations on both CPU and GPU technologies.
Speaker: Marco Meyer-Conde (Tokyo City University (JP), University Of Illinois (US))
70
The outside view on ROOT
As a physicist devoted to medical physics research, I have never fitted a Higgs search plot nor run a worldwide distributed analysis of data taken at CERN. Unexpectedly, I am nonetheless a heavy user of ROOT, albeit for more mundane applications related to my research with high-count-rate radiation detectors, as well as for teaching, at a small laboratory within CSIC/University of Valencia.
Thus, I would like to present the view on ROOT from outside the CERN ecosystem, a potentially much larger user base. I will highlight the key strengths of ROOT for this kind of community and the main hurdles discouraging its cheerful adoption by newbie students, and identify the main aspects that I consider a priority to improve.
Naturally, these might not align with the priorities in the HEP software roadmap. However, I consider that broad adoption of ROOT, and finding a balance between inside-CERN and outside support, pays off: it increases the likelihood of bug detection, the amount of feedback, and the number of contributors volunteering to improve this excellent open-source software.
Speaker: Fernando Hueso Gonzalez
71
Beyond HEP: ROOT for Solar Resources
ROOT’s largest user base is in High-Energy Physics and consequently, most of its functionalities cater to this field. This, however, does not mean ROOT has limited or no place in other areas; its many powerful capabilities such as data storage in compressed binary files, data analysis, modelling and simulation, fitting, data display and even machine learning can be exploited in most other fields of science and technology. Astrophysics and Medical Physics are known areas where ROOT is employed, and a few finance applications show up occasionally in the ROOT Forum (https://root-forum.cern.ch/search?q=finance%20order%3Alatest).
In this poster I will show examples of the use (and needs) of ROOT over more than a decade in the field of solar resource assessment, that is, the analysis of solar radiation and other atmospheric phenomena that affect the solar energy available at the surface of the planet, for applications such as solar power production and climate studies.
Speaker: Daniel Perez Astudillo
-
Coffee Break
-
Morning Session II
-
72
JSRoot in Web Applications for High-Energy Physics: From Interactive Visualizations to Particle Transport Simulations
This contribution highlights practical uses of ROOT and JSROOT for data exploration and web-based visualization. We show performance improvements achieved by replacing matplotlib with JSROOT in the CalibView app for the CMS PPS project. A key feature is the partial reading of ROOT files and handling plot data as JSON objects.
We also present a lightweight approach using static websites served with GitHub Pages. By linking to public ROOT files, these pages can visualize large datasets controlled by URL parameters. We compare this method with WebAssembly HDF5 readers, which lack partial read support.
Finally, we discuss JSROOT as the visualization layer of YAPTIDE, a web framework for particle transport simulations. We share insights on user experience and integration within large React-based applications.
Lessons learned from both user and developer perspectives will be shared, including recent developments in uproot that complement web-based workflows.
Speaker: Leszek Grzanka (AGH University of Krakow (PL))
73
Workshop discussions and closing