Fifth MODE Workshop on Differentiable Programming for Experiment Design
OAC conference center, Kolymbari, Crete, Greece.
Location:
The workshop will take place at the OAC (https://www.oac.gr/en/) in Kolymbari, Crete (Greece). Information on the accommodation is available on a dedicated page.
On the same page you will find the procedure for young participants to apply for financial support: we have limited funds to cover part of the travel expenses and to waive the conference fee for some selected young participants. Priority will be given to participants submitting abstracts for talks/posters.
Remote attendance is not foreseen. We want to create the spirit of a scientific retreat, where serendipitous conversations lead to new ideas and collaborations.
Minimal schedule:
- 8 June 2025: arrival day (evening, includes dinner)
- 9--12 June 2025: workshop sessions
- 13 June 2025: departure day (morning, includes breakfast)
Please account for different timezones when consulting the timetable. In particular, Greece is on Eastern European Summer Time (GMT+3) in June.
Registration and abstract submission:
Please register using the links in the menu to the left.
New registrations close on May 1st.
Registrations must reach “complete” status (i.e. wire transfer received, or confirmation that you will pay in cash at the venue) by May 3.
Overview of the sessions:
- Methods and Tools
- Applications in Muon Tomography
- Applications in particle physics
- Applications in astro-HEP and neutrino physics
- Applications in nuclear physics
- Applications in medical physics and other fields

Confirmed keynote speakers:
- Sarah Barnes (DLR): TBA
- Laurent Hascoet (INRIA): TBA

Lectures and tutorials:
- Tutorial (TBC): Differentiable Programming, Gradient Descent in Many Dimensions, and Design Optimization (Pietro Vischia, Universidad de Oviedo and ICTEA)

Special events:
- Poster session: prizes will be given to the best posters!
- Data Challenge: prizes will be given to the winners of the challenge!
Prizes for special events:
There will be one set of prizes for the data challenge and one for the poster session.
Organising Committee:
You can get in touch with the organising committee at mode-workshop-organizers@cern.ch.
- Muhammad Awais (INFN-Padova)
- Tommaso Dorigo (INFN-Padova)
- Andrea Giammanco (UCLouvain)
- Christian Glaser (Uppsala University)
- Lisa Kusch (TU Eindhoven)
- Gilles Louppe (ULiège)
- Pablo Martinez Ruiz del Árbol (Universidad de Cantabria)
- Pietro Vischia (Universidad de Oviedo and ICTEA)
- Gordon Watts (University of Washington)
- Zahraa Zaher (UCLouvain)
- Stéphanie Landrain (secretariat) (UCLouvain)
Scientific Advisory Committee:
- Atilim Gunes Baydin (University of Oxford)
- Kyle Cranmer (University of Wisconsin)
- Julien Donini (Université Clermont Auvergne)
- Piero Giubilato (Università di Padova)
- Gian Michele Innocenti (CERN)
- Michael Kagan (SLAC)
- Riccardo Rando (Università di Padova)
- Roberto Ruiz de Austri Bazan (IFIC-CSIC/UV)
- Kazuhiro Terao (SLAC)
- Andrey Ustyuzhanin (SIT, HSE Univ., NUS)
- Christoph Weniger (University of Amsterdam)
Funding agencies:
This workshop is partially supported by the joint ECFA-NuPECC-APPEC Activities (JENAA), by National Science Foundation grant PHY-2323298 (IRIS-HEP), and by the Fund for Scientific Research (F.R.S.–FNRS).
Timetable (all times local):

8 June
15:00 → 20:00  Arrival day (5h)
20:00 → 21:30  Dinner at OAC (included in the fee)
9 June
08:00 → 09:00  Breakfast at OAC (only for people with OAC accommodation) (1h)
09:00 → 09:20  Registration
09:20 → 10:00  Introduction
09:20  Welcome by the OAC director (10m)
09:30  Welcome and Introduction to the Workshop (25m). Speaker: Dr Pietro Vischia (Universidad de Oviedo and Instituto de Ciencias y Tecnologías Espaciales de Asturias (ICTEA))
09:55  Discussion (5m)
10:00 → 11:00  Keynote session. Convener: Dr Pietro Vischia (Universidad de Oviedo and Instituto de Ciencias y Tecnologías Espaciales de Asturias (ICTEA))
10:00  TBA (50m). Speaker: Sarah Barnes (Deutsches Zentrum für Luft- und Raumfahrt e.V., German Aerospace Center)
10:50  Discussion (10m)
11:00 → 11:30  Coffee break (included in the fee) (30m)
11:30 → 13:00  Applications in Particle Physics. Convener: Dr Pietro Vischia (Universidad de Oviedo and Instituto de Ciencias y Tecnologías Espaciales de Asturias (ICTEA))
11:30  Unsupervised Particle Tracking with Neuromorphic Computing (25m)
We study the application of a spiking neural network architecture for identifying charged particle trajectories via unsupervised learning of synaptic delays using a spike-time-dependent plasticity rule. In the considered model, the neurons receive time-encoded information on the position of particle hits in a tracking detector for a particle collider, modeled according to the geometry of the Compact Muon Solenoid Phase-2 detector. We show how a spiking neural network is capable of successfully identifying in a completely unsupervised way the signal left by charged particles in the presence of conspicuous noise from accidental or combinatorial hits, opening the way to applications of neuromorphic computing to particle tracking. The presented results motivate further studies investigating neuromorphic computing as a potential solution for real-time, low-power particle tracking in future high-energy physics experiments.
Speakers: Emanuele Coradin, Fabio Cufino, Tommaso Dorigo (INFN Padova, Luleå University of Technology, MODE Collaboration, Universal Scientific Education and Research Network)
11:55  Discussion (5m)
12:00  Neuromorphic Readout for Hadron Calorimeters (25m)
In this work we simulate hadrons impinging on a homogeneous lead-tungstate (PbWO4) calorimeter to investigate how the resulting light yield and its temporal structure, as detected by an array of light-sensitive sensors, can be processed by a neuromorphic computing system. Our model encodes temporal photon distributions in the form of spike trains and employs a fully connected spiking neural network to regress the total deposited energy, as well as the position and spatial distribution of the light emissions within the sensitive material. The model is able to estimate the aforementioned observables in both single task and multi-tasks scenarios, obtaining consistent results in both settings. The extracted primitives offer valuable topological information about the shower development in the material, achieved without requiring a segmentation of the active medium. A potential nanophotonic implementation using III-V semiconductor nanowires is discussed.
Speaker: Dr Alessandro Breccia (University of Padova)
12:25  Discussion (5m)
12:30  Hadron Identification Prospects With Granular Calorimeters (25m)
In this work we consider the problem of determining the identity of hadrons at high energies based on the topology of their energy depositions in dense matter, along with the time of the interactions. Using GEANT4 simulations of a homogeneous lead tungstate calorimeter with high transverse and longitudinal segmentation, we investigated the discrimination of protons, positive pions, and positive kaons at 100 GeV. The analysis focuses on the impact of calorimeter granularity by progressively merging detector cells and extracting features like energy deposition patterns and timing information. Two machine learning approaches, XGBoost and fully connected deep neural networks, were employed to assess the classification performance across particle pairs. The results indicate that fine segmentation improves particle discrimination, with higher granularity yielding more detailed characterization of energy showers. Additionally, the results highlight the importance of shower radius, energy fractions, and timing variables in distinguishing particle types. The XGBoost model demonstrated computational efficiency and interpretability advantages over deep learning for tabular data structures, while achieving similar classification performance. This motivates further work required to combine high- and low-level feature analysis, e.g., using convolutional and graph-based neural networks, and extending the study to a broader range of particle energies and types.
Speaker: Dr Abhishek (National Institute of Science Education and Research, India)
12:55  Discussion (5m)
13:00 → 14:00  Lunch at OAC (included in the fee) (1h)
14:00 → 15:30  Free time
15:30 → 16:00  Coffee break (included in the fee) (30m)
16:00 → 17:30  Methods and tools. Convener: Lisa Kusch (TU Eindhoven)
16:00  Differentiable Programming in the Scikit-HEP Ecosystem (25m)
Using tooling from the Scikit-HEP ecosystem we implement differentiable analysis pipelines for representative HEP analysis use cases and provide complementary examples to the IRIS-HEP Analysis Grand Challenge. This presentation details the process and related development work, covering example workflows that benefit from gradient-based optimization compared to bespoke hand optimization, the challenges that were faced during the process, and the approaches used to address these challenges. We also provide context on future work in this area, as well as recommendations for broader engagement of the field.
Speaker: Mohamed Aly (Princeton University (US))
16:25  Discussion (5m)
16:30  Differentiable Geant4: Incorporating Multiple Coulomb Scattering for Detector Optimization (25m)
Applying automatic differentiation (AD) to particle simulations such as Geant4 opens the possibility of addressing optimization tasks in high energy physics, such as guiding detector design and parameter fitting, with powerful gradient-based optimization methods. In this talk, we refine our previous work on differentiable simulation with Geant4 by incorporating multiple Coulomb scattering into the physics engine of the simulation. The introduction of multiple scattering adds layers of complexity: discontinuities induced by conditional statements and stochastic behavior become even more pronounced, posing significant challenges for computing reliable unbiased derivatives with reasonable variance. These findings help build towards realistic optimizations of detectors with complete electromagnetic physics in Geant4.
Speaker: Jeffrey Krupa (SLAC)
16:55  Discussion (5m)
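The branch-induced discontinuities described in the abstract above can be reproduced with a toy forward-mode AD sketch (illustrative only; Geant4's physics engine and the derivative estimators discussed in the talk are far more involved). Dual numbers differentiate whichever branch is taken, so AD reports the smooth per-branch slope, while the function itself jumps at the threshold:

```python
class Dual:
    """Minimal forward-mode AD value: carries f(x) and df/dx together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    def __lt__(self, o):
        return self.val < (o.val if isinstance(o, Dual) else o)

def step_model(x):
    # A conditional introduces a jump of +0.5 at x = 1 (a toy stand-in
    # for a branch inside a physics simulation).
    if x < 1.0:
        return x * x
    return x * x + 0.5

# AD sees only the active branch: the derivative is 2x on either side...
d_left  = step_model(Dual(0.999, 1.0)).dot   # ~1.998
d_right = step_model(Dual(1.001, 1.0)).dot   # ~2.002

# ...but a finite difference straddling the threshold is dominated by the jump.
h = 1e-3
fd = (step_model(1.0 + h) - step_model(1.0 - h)) / (2 * h)  # ~252, not ~2
```

This is the core reason pathwise derivatives of branchy, stochastic simulations can be biased: the AD gradient never sees the jump that dominates the finite-difference estimate.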
17:00  Surrogate models for faster automated design (25m)
Historically driven by expert knowledge and intuition, experiment design is nowadays (partially) automated by software able to simulate and optimize the properties of complex setups. Beyond tinkering with some parameters, current tools can navigate a vast space of configurations. Gravitational wave detectors, the focus of this work, are a good example, as they can be encoded in a two-dimensional lattice of optical elements. By optimizing the position and properties of the elements, one can find highly sensitive, often counterintuitive, designs. This approach, while powerful, is nonetheless limited by the computational cost of the simulations. To overcome this bottleneck, we developed neural models that emulate the behavior of the systems, providing solutions much faster than classical simulators. In my presentation I will show the advantages and disadvantages of differentiable learned simulators vs physics-based simulators.
Speaker: Carlos Ruiz Gonzalez
17:25  Discussion (5m)
17:30 → 18:00  Break with no coffee (30m)
18:00 → 19:00  Applications in Astro-HEP and Neutrino Physics. Conveners: Christian Glaser (Uppsala University), Dr Christian Haack (ECAP, FAU Erlangen)
18:00  Array Optimization for the Tau Air-Shower Mountain-Based Observatory (25m)
Since its completion more than a decade ago, IceCube has discovered the diffuse astrophysical neutrino flux and begun to identify galactic and extragalactic neutrino emission. Despite this initial success, there are still opportunities in neutrino astronomy. In particular, understanding the diffuse flux's high-energy behavior and tau neutrino fraction are of interest. The Tau Air-Shower, Mountain-Based Observatory (TAMBO) will address this by enabling a high-purity tau neutrino signal in the energy range between 1 PeV and 100 PeV. TAMBO consists of an array of particle detectors arranged on one side of a deep canyon. These panels would detect charged-tau-lepton-induced air showers arising from tau neutrino interactions within the other side of the canyon. To maximize TAMBO's physics impact, the detector footprint should undergo optimization of angular resolution, energy resolution, and event rate. In this contribution, I will discuss progress towards optimizing the detector geometry using surrogate models of the simulation.
Speaker: Jeffrey Lazar
18:25  Discussion (5m)
18:30  Advancing Detector Calibration and Event Reconstruction in Water Cherenkov Detectors through Differentiable Simulation (25m)
Next-generation monolithic Water Cherenkov detectors aim to probe fundamental questions in neutrino physics. These measurements demand unprecedented precision in detector calibration and event reconstruction, pushing beyond the capabilities of traditional techniques. We present a novel framework for differentiable simulation of Water Cherenkov detectors that enables end-to-end optimization through gradient-based methods. By leveraging JAX's automatic differentiation and implementing a grid-based acceleration system, our framework achieves millisecond-scale simulation times - four orders of magnitude faster than traditional approaches. The framework can incorporate neural network surrogates for unknown physical phenomena while maintaining interpretability throughout the simulation chain. As a demonstration, we employ a neural network to model differentiable photon generation probability distributions. Our modular architecture extends to various Water Cherenkov detectors, representing a significant step toward addressing systematic limitations in future neutrino experiments through differentiable programming techniques.
Speaker: Omar Alterkait
18:55  Discussion (5m)
19:00 → 20:00  Free time
20:00 → 21:30  Dinner at OAC (included in the fee) (1h 30m)

10 June
08:00 → 09:00  Breakfast at OAC (only for people with OAC accommodation)
09:00 → 10:30  Applications in Particle Physics. Convener: Prof. Pietro Vischia (Universidad de Oviedo and Instituto de Ciencias y Tecnologías Espaciales de Asturias (ICTEA))
09:00  End-to-End Optimal Detector Design with Mutual Information Surrogates (25m)
We introduce a novel approach for end-to-end black-box optimization of high energy physics (HEP) detectors using local deep learning (DL) surrogates. These surrogates approximate a scalar objective function that encapsulates the complex interplay of particle-matter interactions and physics analysis goals. In addition to a standard reconstruction-based metric commonly used in the field, we investigate the information-theoretic metric of mutual information. Unlike traditional methods, mutual information is inherently task-agnostic, offering a broader optimization paradigm that is less constrained by predefined targets.
We demonstrate the effectiveness of our method in a realistic physics analysis scenario: optimizing the thicknesses of calorimeter detector layers based on simulated particle interactions. The surrogate model learns to approximate objective gradients, enabling efficient optimization with respect to energy resolution.
Our findings reveal three key insights: (1) end-to-end black-box optimization using local surrogates is a practical and compelling approach for detector design, providing direct optimization of detector parameters in alignment with physics analysis goals; (2) mutual information-based optimization yields design choices that closely match those from state-of-the-art physics-informed methods, indicating that these approaches operate near optimality and reinforcing their reliability in HEP detector design; and (3) information-theoretic methods provide a powerful, generalizable framework for optimizing scientific instruments. By reframing the optimization process through an information-theoretic lens rather than domain-specific heuristics, mutual information enables the exploration of new avenues for discovery beyond conventional approaches.
Speaker: Kinga Anna Wozniak (Universite de Geneve (CH))
09:25  Discussion (5m)
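A minimal plug-in estimate of mutual information on discretized samples illustrates why the metric is task-agnostic: it rewards any detector response that preserves information about the quantity of interest, with no reconstruction target in sight. This is a sketch only; the surrogates in the talk learn gradients of such objectives rather than histogramming, and the function names here are hypothetical.

```python
import math
from collections import Counter

def mutual_information(xs, ys, bins=4):
    """Plug-in estimate of I(X;Y) in nats from paired samples,
    after discretizing each variable into equal-width bins."""
    def digitize(vs):
        lo, hi = min(vs), max(vs)
        w = (hi - lo) / bins or 1.0  # guard against a constant sample
        return [min(int((v - lo) / w), bins - 1) for v in vs]
    bx, by = digitize(xs), digitize(ys)
    n = len(xs)
    pxy = Counter(zip(bx, by))           # joint counts
    px, py = Counter(bx), Counter(by)    # marginal counts
    return sum(c / n * math.log((c / n) / ((px[i] / n) * (py[j] / n)))
               for (i, j), c in pxy.items())

# A response that preserves information about x scores high;
# an uninformative (constant) response scores zero.
xs = [i / 99 for i in range(100)]
informative = mutual_information(xs, [2 * x + 0.1 for x in xs])  # ~log(4)
uninformative = mutual_information(xs, [0.5] * 100)              # 0.0
```

Maximizing such an objective over detector parameters needs no predefined reconstruction task, which is precisely the appeal described in the abstract.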
09:30  Differentiable Programming for LHCb Tracking Reconstruction at 30 MHz (25m)
The new fully software-based trigger of the LHCb experiment operates at a 30 MHz data rate and imposes tight constraints on GPU execution time. Tracking reconstruction algorithms in this first-level trigger must efficiently select detector hits, group them, build tracklets, account for the LHCb magnetic field, extrapolate and fit trajectories, and select the best track candidates to make a decision that reduces the 4 TB/s data rate by a factor of 30. One of the main challenges of these algorithms is the reduction of “ghost” tracks—fake combinations arising from detector noise or reconstruction ambiguities. A dedicated neural network architecture, designed to operate at the high LHC data rate, has been developed, achieving ghost rates below 20%. The techniques used in this work can be adapted for the reconstruction of other detector objects or for tracking reconstruction in other LHC experiments.
Speakers: Arantza De Oyanguren Campos (Univ. of Valencia and CSIC (ES)), Jiahui Zhuo (Univ. of Valencia and CSIC (ES))
09:55  Discussion (5m)
10:00  Differentiable modeling for calorimeter simulation using diffusion models (25m)
The design of calorimeters presents a complex challenge due to the large number of design parameters and the stochastic nature of physical processes involved. In high-dimensional optimization, gradient information is essential for efficient design. While first-principle based simulations like GEANT4 are widely used, their stochastic nature makes them non-differentiable, posing challenges in gradient-based optimization. To address this, we propose a machine learning-based approach where we train a conditional diffusion denoising probabilistic model (CDDPM) as a differentiable surrogate for these simulations. The CDDPM not only predicts particle showers based on different particle types and incoming energy levels but also conditions on different detector design variables. Furthermore, we explore post-training adaptation techniques, such as adapter-based fine-tuning, to efficiently specialize the model for new calorimeter conditions without requiring full retraining. This allows for flexible optimization across different calorimeter configurations while maintaining computational efficiency. We evaluate the predictive accuracy of the model and assess its gradient output to demonstrate its potential for future detector design and optimization.
Speaker: Xuan Tung Nguyen (INFN and RPTU)
10:25  Discussion (5m)
10:30 → 11:00  Coffee break (included in the fee) (30m)
11:00 → 13:00  Methods and tools. Convener: Lisa Kusch (TU Eindhoven)
11:00  Optimizing Jacobian and Hessian matrices with compiler-based analyses in Clad (25m)
In many scientific computations the Jacobian and Hessian matrices are an important way to reason about underlying physical processes. Depending on the nature of the process, there exist many mathematical simplifications that do not need the whole matrices, which can be rather big and computationally expensive to compute. Automatic differentiation (AD) enables accurate and efficient computation but further optimizations such as computation of the diagonal of the Hessian matrix can lead to massive gains in performance. One example application in detector optimization problems where this is important is in the seeding step of a minimizer.
In this talk we present an AD implementation of an important technique for gaining computational efficiency for Jacobians and Hessians: sparsity patterns. A sparsity pattern defines which entries of a matrix are structurally nonzero, independent of specific numerical values. By focusing only on these nonzero entries, unnecessary computation and memory usage can be avoided. For example, $f(x,y,z) = (x^2+y,\; yz,\; \sin x)$ has Jacobian $J = \begin{pmatrix} 2x & 1 & 0 \\ 0 & z & y \\ \cos x & 0 & 0 \end{pmatrix}$; this dense matrix can be reduced to just the 5 non-zero elements of interest. This technique is often crucial when working with large matrices.
In this talk, we will describe how Clad, a compiler-based source transformation tool, efficiently generates sparsity patterns. We will elaborate on how Clad’s forward- and reverse-mode differentiation enables it to effectively compute Jacobians and Hessians for arbitrary functions, and how sparsity patterns are obtained. We compare different approaches to sparsity pattern generation with Clad and contrast their computational requirements and robustness. We discuss how this approach to AD differs from those commonly used in Machine Learning. Additionally, we compare Clad’s performance against other tools using topical and application-driven benchmarks.
Speaker: Maksym Andriichuk (Princeton University (US))
11:25  Discussion (5m)
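The abstract's worked example can be checked directly. The sketch below (plain Python, not the code Clad generates) evaluates the dense Jacobian of $f(x,y,z) = (x^2+y, yz, \sin x)$ analytically and extracts its 5 structurally non-zero entries; production tools derive the pattern symbolically, so that accidental numerical zeros (e.g. $2x$ at $x=0$) are not mistaken for structural ones.

```python
import math

def jacobian(x, y, z):
    """Dense 3x3 Jacobian of f(x, y, z) = (x^2 + y, y*z, sin x),
    written out analytically as in the abstract's example."""
    return [[2 * x,        1.0, 0.0],
            [0.0,          z,   y  ],
            [math.cos(x),  0.0, 0.0]]

def sparsity_pattern(rows):
    """Structurally non-zero (row, col) entries of a dense matrix."""
    return {(i, j) for i, row in enumerate(rows)
                   for j, v in enumerate(row) if v != 0.0}

# Evaluate away from accidental zeros, then keep only the pattern.
J = jacobian(0.5, 1.0, 2.0)
pattern = sparsity_pattern(J)
# Only these 5 of the 9 entries ever need to be computed or stored:
# {(0, 0), (0, 1), (1, 1), (1, 2), (2, 0)}
```

With the pattern in hand, a tool can differentiate only the 5 relevant entries instead of all 9, which is where the savings on large matrices come from.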
11:30  Bringing Automatic Differentiation to CUDA with Compiler-Based Source Transformations (25m)
GPUs have become increasingly popular for their ability to perform parallel operations efficiently, driving interest in General-Purpose GPU Programming. Scientific computing, in particular, stands to benefit greatly from these capabilities. However, parallel programming systems such as CUDA introduce challenges for code transformation tools due to their reliance on low-level hardware management primitives. These challenges make implementing automatic differentiation (AD) for parallel systems particularly complex.
CUDA is being widely adopted as an accelerator technology in many scientific algorithms from machine learning to physics simulations. Enabling AD for such codes builds a new valuable capability necessary for advancing scientific computing.
Clad is an LLVM/Clang plugin for automatic differentiation that performs source-to-source transformation by traversing the compiler's internal high-level data structures, and generates a function capable of computing derivatives of a given function at compile time. In this talk, we explore how we recently extended Clad to support GPU kernels and functions, as well as kernel launches and CUDA host functions. We will discuss the underlying techniques and real-world applications in scientific computing. Finally, we will examine current limitations and potential future directions for GPU-accelerated differentiation.
Speaker: Christina Koutsou (Princeton University (US))
11:55  Discussion (5m)
12:00  Scaling RooFit's Automatic Differentiation Capabilities to CMS Combine (25m)
RooFit's integration with the Clad infrastructure has introduced automatic differentiation (AD), leading to significant speedups and driving major improvements in its minimization framework. In addition, the AD integration has inspired several optimizations and simplifications of key RooFit components in general. The AD framework in RooFit is designed to be extensible, providing all necessary primitives to efficiently traverse RooFit’s computation graphs.
CMS Combine, the primary statistical analysis tool in the CMS experiment, has played a pivotal role in groundbreaking discoveries, including the Higgs boson. Built on RooFit, CMS Combine is making AD a natural extension to improve performance and usability. Recognizing this potential, we have begun a collaborative effort to bridge gaps between the two frameworks with a core focus of enabling AD within CMS Combine through RooFit.
In this talk, we will present our progress, highlight the challenges encountered, and discuss the benefits and opportunities that AD integration brings to the CMS analysis workflow. By sharing insights from our ongoing work, we aim to engage the community in furthering AD adoption in high-energy physics.
Speaker: Vassil Vasilev (Princeton University (US))
12:25  Discussion (5m)
12:30  Differentiable Computation with Awkward Array and JAX (25m)
Modern scientific computing often involves nested and variable-length data structures, which pose challenges for automatic differentiation (AD). Awkward Array is a library for manipulating irregular data and its integration with JAX enables forward and reverse mode AD on irregular data. Several Python libraries, such as PyTorch, TensorFlow, and Zarr, offer variations of ragged data structures, but differentiating through their ragged types remains impossible or problematic. Awkward's JAX backend allows users to differentiate nested and variable-length data structures without compromising readability, ease of use, and performance.
This talk presents the current status of the Awkward Array's JAX backend, highlighting its implementation using JAX's pytrees, tracing mechanisms, and compatibility with JAX's AD system. We discuss the coverage of Awkward Array's automatic differentiation support, strategies for differentiable programming with nested data, and challenges encountered in extending JAX's API to support non-rectilinear array structures. Finally, we outline future development directions, including keeping up with JAX's evolving AD ecosystem, improved interoperability with ML frameworks, and potential applications in physics and beyond.
Speaker: Saransh Chopra (Princeton University (US))
12:55  Discussion (5m)
13:00 → 14:00  Lunch at OAC (included in the fee) (1h)
14:00 → 15:30  Free time
15:30 → 16:00  Coffee break (included in the fee) (30m)
16:00 → 20:00  Free time
20:00 → 21:00  Dinner at OAC (included in the fee) (1h)

11 June
08:00 → 09:00  Breakfast at OAC (only for people with OAC accommodation)
09:00 → 10:00  Methods and tools
09:00  Evaluating Two-Sample Tests for Validating Generators in Precision Sciences (25m)
Deep generative models have become powerful tools for alleviating the computational burden of traditional Monte Carlo generators in producing high-dimensional synthetic data. However, validating these models remains challenging, especially in scientific domains requiring high precision, such as particle physics. Two-sample hypothesis testing offers a principled framework to address this task. We propose a robust methodology to assess the performance and computational efficiency of various metrics for two-sample testing, with a focus on high-dimensional datasets. Our study examines tests based on univariate integral probability measures, namely the sliced Wasserstein distance, the mean of the Kolmogorov-Smirnov statistics, and the sliced Kolmogorov-Smirnov statistic. Additionally, we consider the unbiased Fréchet Gaussian Distance and the Maximum Mean Discrepancy. Finally, we include the New Physics Learning Machine, an efficient classifier-based test leveraging kernel methods. Experiments on both synthetic and realistic data show that one-dimensional projection-based tests demonstrate good sensitivity with a low computational cost. In contrast, the classifier-based test offers higher sensitivity at the expense of greater computational demands.
This analysis provides valuable guidance for selecting the appropriate approach, whether prioritizing efficiency or accuracy. More broadly, our methodology provides a standardized and efficient framework for model comparison and serves as a benchmark for evaluating other two-sample tests.
Speaker: Samuele Grossi (Università degli studi di Genova & INFN sezione di Genova)
09:25  Discussion (5m)
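The univariate Kolmogorov-Smirnov statistic that several of the tests above build on is simple to state: the maximum gap between the two empirical CDFs. A minimal sketch follows (illustrative only; the sliced variants in the talk apply this same statistic to random one-dimensional projections of high-dimensional data):

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        x = min(a[i], b[j])
        # Advance past all ties at x in both samples before comparing CDFs.
        while i < len(a) and a[i] == x:
            i += 1
        while j < len(b) and b[j] == x:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

# Identical samples are indistinguishable; disjoint ones differ maximally.
same = ks_statistic([1, 2, 3, 4], [1, 2, 3, 4])        # 0.0
far = ks_statistic([0.0, 0.1, 0.2], [5.0, 6.0, 7.0])   # 1.0
mixed = ks_statistic([1, 3], [2, 4])                   # 0.5
```

The low cost of this one-dimensional statistic is what makes projection-based tests cheap relative to classifier-based ones.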
09:30  Multiscale Inference of Structural Mechanics in Physical Systems (25m)
This presentation will describe a method to discover the governing equations in physical systems with multiple regimes and lengthscales, using minimum entropy criteria to optimize results. The historically challenging problem of turbulent flow is used as an example, infamous for its half-ordered, half-chaotic behavior across several orders of magnitude. Exact solutions to the Navier-Stokes equations are not known to exist, and the resolution to this problem remains the subject of a Clay Millennium Prize. Accordingly, various approximations have been developed to describe turbulent regimes, including the Reynolds-Averaged Navier-Stokes (RANS) equations that separate velocity and pressure quantities into constant and stochastic terms. However, the RANS equations are nonoptimal and can be improved using information-theoretic techniques from ODE. Two components are used to analyze this problem. First is the observation of invariants, symmetries, and conserved quantities. Invariants are quantities that remain constant when subject to symmetry transformations, and conserved quantities are properties of dynamic systems that remain constant over time. The second component is the Minimum Description Length (MDL) criterion, which provides a mathematically rigorous way to identify the most accurate equations to describe a given dataset. Using a Bayesian selection process, the search space of possible governing equations is navigated to find the optimal expressions for fluid flow. After this step, the MDL criterion is applied again at a larger lengthscale to partition the flow field into distinct regimes and generate higher-level transfer equations. The end result is a more accurate version of the RANS decomposition grounded in information theory, which we call a Kolmogorov decomposition.
While the specific fluid mechanics example has a wide range of applications, from propulsion design to weather prediction and oceanography, the mathematical techniques discussed in this presentation are domain-agnostic and can apply to all areas of physics.
Speaker: Stephen Casey (University of Miami)
09:55  Discussion (5m)
10:00 → 10:30  Data challenge! (30m). Speaker: Stephen Casey (University of Miami)
10:30 → 11:00  Coffee break (included in the fee) (30m)
11:00 → 13:00  Applications in Muon Tomography
11:00  Imaging Techniques in Muon Tomography (25m)
Scattering muon tomography leverages the multiple Coulomb scattering of cosmic-ray muons to image the internal structure of dense or shielded objects. Unlike transmission-based methods that rely on muon attenuation, scattering tomography measures angular deviations to infer the presence and composition of high-Z materials with high sensitivity. This presentation provides an overview of key imaging approaches used in scattering muon tomography, including point-of-closest-approach (PoCA), statistical reconstruction techniques like maximum likelihood and Bayesian inference, and recent developments in machine learning-assisted image reconstruction. We discuss the trade-offs in spatial resolution, detection efficiency, and computational complexity across these methods, with examples drawn from applications. Particular attention is given to how algorithmic choices and detector geometry influence imaging performance in real-world environments.
Speaker: Konstantin Borozdin
11:25  Discussion (5m)
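The PoCA method mentioned in the abstract reduces, per muon, to a closed-form geometry problem: find the midpoint of the shortest segment joining the incoming and outgoing track lines, and take it as the estimated scattering vertex. A minimal sketch (standard skew-lines geometry, not the code of any particular experiment; it assumes the two tracks are not parallel):

```python
def poca(p1, d1, p2, d2):
    """Point of closest approach between the incoming track (point p1,
    direction d1) and the outgoing track (p2, d2): the midpoint of the
    shortest segment joining the two (generally skew) lines."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    w0 = [ui - vi for ui, vi in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # ~0 for (near-)parallel tracks
    s = (b * e - c * d) / denom    # line parameter on track 1
    t = (a * e - b * d) / denom    # line parameter on track 2
    q1 = [pi + s * di for pi, di in zip(p1, d1)]  # closest point on line 1
    q2 = [pi + t * di for pi, di in zip(p2, d2)]  # closest point on line 2
    return [(u + v) / 2 for u, v in zip(q1, q2)]

# A muon entering straight down the z-axis and scattering at the origin:
vertex = poca([0.0, 0.0, 10.0], [0.0, 0.0, -1.0],   # track above the target
              [1.0, 0.0, -10.0], [0.1, 0.0, -1.0])  # deflected track below
# vertex ~ [0, 0, 0]
```

Accumulating such vertices (weighted by scattering angle) over many muons yields the density image; the statistical and ML methods in the talk refine exactly this starting point.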
11:30  Deep Learning for Muographic Image Upsampling: Improvements and Experimental Data Validation (25m)
In the civil engineering industry, there is an increasing demand for innovative non-destructive evaluation methods. Muography is an emerging non-invasive technique that constructs three-dimensional density maps by detecting the interactions of naturally occurring cosmic-ray muons within the scanned volume. While muons can penetrate deep into structures, their low flux results in long acquisition times for high-resolution imaging. Recent work has demonstrated that a conditional Wasserstein Generative Adversarial Network with gradient penalty (cWGAN-GP) can enhance features and reduce noise variations in low-sampled muography data, significantly reducing the time required for detailed imaging. Additionally, segmentation models have shown strong capabilities in identifying structural features while mitigating smearing effects caused by the inverse imaging problem.
Ongoing research focuses on validating these models with experimental muography data to assess their robustness in practical scenarios. Furthermore, conventional convolutional architectures are limited in their ability to capture long-range spatial dependencies, potentially affecting feature detection. To address this, we are investigating models with increased context size, by incorporating 3D processing and attention mechanisms. This work ultimately aims to enhance the feasibility of muographic imaging of reinforced concrete, making it more attractive for widescale industry adoption.
Speaker: William O’Donnell
11:55
Discussion 5m
-
12:00
From Light to Muons: Towards a Unified Framework for Physics-based 3D Scene Reconstruction 25m
Inverse problems like magnetic resonance imaging, computed tomography, optical inverse rendering or muon tomography, amongst others, occur in a vast range of scientific, medical and security applications and are usually solved with highly specific algorithms depending on the task.
Approaching these problems from a physical perspective and reformulating them as a function of particle interactions enables 3D scene reconstruction in a physically consistent manner across different types of electromagnetic radiation and particles.
Recent developments in differentiable volumetric rendering and optical optimization techniques, such as Neural Radiance Fields, Gaussian Splatting and Scene Representations Networks (SRN), have been used to demonstrate the feasibility of jointly estimating unknown geometry and material parameters of a 3D scene.
Some works also show the feasibility of modeling refraction and multiple scattering of light using differentiable optimization.
In this work, we approach the formulation of a physically based 3D reconstruction method for the visible light spectrum, serving as a representative case to demonstrate the applicability of generalized, physics-based 3D scene reconstruction.
By directly incorporating these interactions into a differentiable pipeline captured by a parameterized observer, we decouple the optimization procedure from both the specific type of interaction and the capture mechanism.
We perform a first experimental validation of our method using simulated and experimental optical scans from different sensing devices.
Lastly, we explore the transferability of the new reconstruction method to other inverse problems, including muon tomography imaging.
Speaker: Felix Sattler (Deutsches Zentrum für Luft- und Raumfahrt e.V. (German Aerospace Center)) -
12:25
Discussion 5m
-
12:30
Gradient-descent-based reconstruction for muon tomography based on automatic differentiation in PyTorch 25m
Muon scattering tomography is a well-established, non-invasive imaging technique using cosmic-ray muons.
Simple algorithms, such as PoCA (Point of Closest Approach), are often utilized to reconstruct the volume of interest from the observed muon tracks.
However, it is preferable to apply more advanced reconstruction algorithms to efficiently use the sparse statistics available.
One approach is to formulate the reconstruction task as a likelihood-based problem, where the material properties of the reconstruction volume are treated as optimization parameters.
In this contribution, we present a reconstruction method based on directly maximizing the underlying likelihood using automatic differentiation within the PyTorch framework.
We will introduce the general idea of this approach and evaluate its advantages over conventional reconstruction methods.
Furthermore, first reconstruction results for different scenarios will be presented, and the potential that this approach inherently provides will be discussed.
Speaker: Jean-Marco Alameddine -
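The likelihood-maximization idea can be sketched in a few lines of PyTorch. The toy model below is hypothetical and not the authors' code: each muon is assumed to cross a single voxel, and its scattering angle is modelled as zero-mean Gaussian with a variance that plays the role of the voxel's scattering density. The real likelihood is more involved, but the autodiff mechanics are the same.

```python
import torch

# Hypothetical toy setup: which voxel each muon crossed, and its
# measured scattering angle (radians).
n_voxels = 4
voxel_of_muon = torch.tensor([0, 0, 1, 2, 2, 3])
angles = torch.tensor([0.02, -0.03, 0.10, 0.01, -0.02, 0.05])

# Optimize the log-densities so the densities stay positive.
log_lmbd = torch.zeros(n_voxels, requires_grad=True)
opt = torch.optim.Adam([log_lmbd], lr=0.05)

for _ in range(2000):
    opt.zero_grad()
    var = torch.exp(log_lmbd)[voxel_of_muon]
    # Negative Gaussian log-likelihood of the observed angles.
    nll = 0.5 * (angles ** 2 / var + torch.log(var)).sum()
    nll.backward()   # likelihood gradients via automatic differentiation
    opt.step()

# For this toy model the maximum-likelihood solution is simply the
# per-voxel mean of angle^2; the optimizer recovers it numerically.
densities = torch.exp(log_lmbd).detach()
```

The same pattern scales to realistic likelihoods: only the expression inside the loop changes, while the gradient computation remains automatic.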
12:55
Discussion 5m
-
13:00
→
14:00
Lunch at OAC (included in the fee) 1h
-
14:00
→
14:30
Common activities and tasks with calls for collaborators 30m
Speaker: Dr Pietro Vischia (Universidad de Oviedo and Instituto de Ciencias y Tecnologías Espaciales de Asturias (ICTEA))
-
14:30
→
15:30
Free time
-
15:30
→
16:00
Coffee break (included in the fee) 30m
-
16:00
→
17:30
Applications in Astro-HEP and Neutrino Physics
Conveners: Christian Glaser (Uppsala University), Dr Christian Haack (ECAP, FAU Erlangen)
-
16:00
Differentiable detector simulation of a liquid argon time projection chamber using JAX 25m
Differentiability in detector simulation can enable efficient and effective detector optimisation. We are developing an AD-enabled detector simulation of a liquid argon time projection chamber to facilitate simultaneous detector calibration through gradient-based optimisation. This approach allows us to account for the correlations of the detector modeling parameters comprehensively and avoid biases introduced by segmented measurements. The implementation in JAX enhances the computational performance, demonstrating the efficiency of our optimisation framework. We will present the detector calibration using real data(-like) samples and discuss practical considerations for deploying this method in experimental settings. This differentiable detector simulation also has the potential to be applied to uncertainty quantification, inverse problem solving, and detector design optimisation.
Speaker: Yifan Chen (SLAC National Accelerator Laboratory (US)) -
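As an illustration of the mechanism (not the authors' code), gradient-based calibration through a differentiable simulation in JAX reduces to composing the simulation with an objective and letting `jax.grad` supply the gradients. Here a hypothetical linear drift-time model stands in for the LArTPC simulation:

```python
import jax
import jax.numpy as jnp

# Toy stand-in for a differentiable detector simulation (hypothetical
# model): predicted hit time from the normalized drift distance d,
# an inverse drift velocity v_inv and a time offset t0.
def simulate(params, d):
    v_inv, t0 = params
    return v_inv * d + t0

def loss(params, d, t_meas):
    # Chi-square-like objective between simulated and measured hit times.
    return jnp.mean((simulate(params, d) - t_meas) ** 2)

# Pseudo-data generated with true v_inv = 0.625, t0 = 0.5.
d = jnp.linspace(0.0, 1.0, 50)
t_meas = 0.625 * d + 0.5

# jax.grad differentiates straight through the simulation;
# jax.jit compiles the gradient function for speed.
grad_fn = jax.jit(jax.grad(loss))
params = jnp.array([1.0, 0.0])
for _ in range(2000):
    params = params - 0.3 * grad_fn(params, d, t_meas)
```

In the full framework the same pattern applies with the detector response model in place of `simulate`, so correlated parameters are calibrated jointly rather than in segmented measurements.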
16:25
Discussion 5m
-
16:30
Optimization pipeline for in-ice radio neutrino detectors 25m
In-ice radio detection of neutrinos is a rapidly growing field and a promising technique for discovering the predicted but yet unobserved ultra-high-energy astrophysical neutrino flux. With the ongoing construction of the Radio Neutrino Observatory in Greenland (RNO-G) and the planned radio extension of IceCube-Gen2, we have a unique opportunity to improve the detector design now and accelerate the experimental outcome in the field for the coming decades. In this contribution, we present an end-to-end in-ice radio neutrino simulation, detection, and reconstruction pipeline using generative machine learning models and differentiable programming. We demonstrate how this framework can be used to optimize the antenna layout of detectors to achieve the best possible reconstruction resolution of neutrino parameters.
Speaker: Martin Langgård Ravn (Uppsala University) -
16:55
Discussion 5m
-
17:00
Optimization of the Future P-ONE Neutrino Telescope 25m
P-ONE is a planned cubic-kilometer-scale neutrino detector in the Pacific Ocean. It will measure high-energy astrophysical neutrinos to help characterize the nature of astrophysical accelerators. Using existing deep-sea infrastructure provided by Ocean Networks Canada (ONC), P-ONE will instrument the ocean with optical modules - which host PMTs as well as readout electronics - deployed on several vertical cables of about 1 km in length. While the first prototype cable is currently being assembled, the detector geometry of the final instrument is not yet fixed.
In this talk, I will present the progress of optimizing the detector design using ML-based surrogate models, which replace computationally expensive MC simulations and, by providing gradients, allow efficient computation of the Fisher Information Matrix as an optimization target.
Speaker: Dr Christian Haack (ECAP, FAU Erlangen) -
17:25
Discussion 5m
-
17:30
→
18:00
Break with no coffee 30m
-
18:00
→
19:00
Keynote session
Convener: Dr Pietro Vischia (Universidad de Oviedo and Instituto de Ciencias y Tecnologías Espaciales de Asturias (ICTEA))
-
18:00
Automatic Differentiation by Source Transformation 50m
After a detailed introduction on AD, we focus on source-transformation reverse AD, a remarkably efficient way to compute gradients. One cornerstone of reverse AD is data-flow reversal, the process of restoring memory states of a computation in reverse order.
While this is by no means cheap, we will present the most efficient storage/recomputation trade-offs that permit data-flow reversal on computation-intensive applications. AD is an active research field, and we will conclude with our guess at the most important future challenges.
Speaker: Laurent Hascoet (INRIA) -
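Source-transformation reverse AD can be illustrated on a one-line primal: the tool emits a forward sweep that tapes intermediates, followed by a reverse sweep that propagates adjoints through the statements in reverse order. A hand-written sketch of such generated code (illustrative only, not actual tool output):

```python
import math

def f(x1, x2):
    # Primal: f(x1, x2) = sin(x1 * x2) + x1
    return math.sin(x1 * x2) + x1

def f_adjoint(x1, x2, f_bar=1.0):
    # Forward sweep: run the primal, keeping ("taping") intermediates.
    t1 = x1 * x2
    t2 = math.sin(t1)
    y = t2 + x1
    # Reverse sweep (data-flow reversal): propagate adjoints through
    # the statements in reverse order.
    t2_bar = f_bar                    # from y = t2 + x1
    x1_bar = f_bar
    t1_bar = t2_bar * math.cos(t1)    # from t2 = sin(t1)
    x1_bar += t1_bar * x2             # from t1 = x1 * x2
    x2_bar = t1_bar * x1
    return y, x1_bar, x2_bar
```

For long computations the taped intermediates dominate memory use, which is exactly where the storage/recomputation (checkpointing) trade-offs mentioned in the abstract come in.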
18:50
Discussion 10m
-
19:00
→
20:00
Free time
-
20:00
→
22:00
Gala dinner and Cretan dances (at OAC, included in the fee) 2h
-
08:00
→
09:00
Breakfast at OAC (only for people with OAC accommodation)
-
09:00
→
10:30
Applications in Astro-HEP and Neutrino Physics
Conveners: Christian Glaser (Uppsala University), Dr Christian Haack (ECAP, FAU Erlangen)
-
09:00
A Differentiable Interferometer Simulator for the Computational Design of Gravitational Wave Detectors 25m
Recent advances in optimization techniques have opened up a promising path towards computationally exploring the vast design space of new gravitational wave detectors. Formulating a highly expressive, continuous search space of potential topologies, defining a clear objective function and evaluating detector candidates with an interferometer simulator allow computational methods to discover novel and unconventional detector blueprints that compete with designs based on human ingenuity. One current bottleneck of such optimizations is the numerical gradient approximation, which makes it necessary to run the simulator multiple times per evaluation. To address this bottleneck, we present a new differentiable frequency-domain interferometer simulator implemented in Python using the JAX framework. Our implementation closely follows the established Finesse simulator and offers functionality to simulate plane waves in quasi-static, user-specified setup configurations, including quantum noise calculations and optomechanical effects. JAX’s GPU support and just-in-time compilation ensure fast runtimes, while its automatic differentiation feature enables gradient-based optimizations that can easily support the large-scale digital discovery of novel gravitational wave detectors.
Speaker: Jonathan Klimesch -
09:25
Discussion 5m
-
09:30
Image reconstruction with proton computed tomography 25m
Objective:
Proton therapy is an emerging approach in cancer treatment. A key challenge is improving the accuracy of Bragg-peak position calculations, which requires more precise relative stopping power (RSP) measurements. Proton computed tomography (pCT) is a promising technique, as it enables imaging under conditions identical to treatment by using the same irradiation device and hadron beam. Our research focuses on developing an advanced image reconstruction algorithm to maximize the performance of pCT systems.
Approach:
A novel image reconstruction algorithm was developed to reconstruct pCT images using measurements of deposited energy, position, and direction of individual protons. The flexibility of an iterative reconstruction method was leveraged to accurately model proton trajectories. Monte Carlo (MC) simulations of CTP528 and CTP404 phantoms were used to evaluate the accuracy of the proposed approach.
Main Results:
For the first time, the iterative Richardson–Lucy algorithm was successfully applied to pCT image reconstruction. An averaged probability-density-based approach was introduced for system matrix generation, effectively incorporating uncertainties in proton paths within the patient. Under an idealized detector setup, the method achieved a spatial resolution of 4.34 lp/cm and an average RSP uncertainty of 0.7%. This approach offers a promising balance between accuracy and computational efficiency, with potential for further refinements.
Significance:
This study represents the first application of the Richardson–Lucy iterative algorithm for pCT image reconstruction, demonstrating its viability for enhancing pCT performance.
Speaker: Zsofia Jolesz (Wigner Research Centre for Physics) -
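For reference, the Richardson–Lucy update is multiplicative, x ← x · Aᵀ(b / Ax) / Aᵀ1, which preserves the non-negativity of the image. A minimal sketch (toy system matrix, not the pCT geometry described above):

```python
import numpy as np

def richardson_lucy(A, b, n_iter=200):
    """Richardson-Lucy iterations for b ≈ A @ x with nonnegative entries.

    A: (n_meas, n_vox) system matrix; in pCT each row would hold the
       averaged path probability of one proton through the voxels.
    b: measured projections (e.g. energy-loss-derived values).
    """
    x = np.ones(A.shape[1])                 # flat nonnegative start image
    norm = A.T @ np.ones(A.shape[0])        # column sums, A^T 1
    for _ in range(n_iter):
        est = A @ x                         # forward projection
        est = np.where(est > 0, est, 1e-12) # guard against division by zero
        x *= (A.T @ (b / est)) / norm       # multiplicative update
    return x
```

The abstract's contribution lies in how the rows of `A` are built (averaged proton-path probability densities); the iteration itself is unchanged.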
09:55
Discussion 5m
-
10:00
Point-spread function design in optical microscopy by end-to-end optimization 25m
The point spread function (PSF) of an imaging system is the system's response to a point source. To encode additional information in microscopy images, we employ PSF engineering – namely, a physical modification of the standard PSF of the microscope by additional optical elements that perform wavefront shaping. In this talk I will describe how this method enables unprecedented capabilities in localization microscopy; specific applications include dense fluorescent molecule fitting for 3D super-resolution microscopy, multicolor imaging from grayscale data, volumetric multi-particle tracking/imaging, dynamic surface profiling, and high-throughput in-flow colocalization in live cells. I will specifically describe how deep-learning can help us design optimal PSFs for various tasks by joint optimization of the optical encoder + algorithmic (neural net based) decoder. Recent results on additive-manufacturing of highly precise optics will be discussed as well.
Speaker: Yoav Shechtman -
10:25
Discussion 5m
-
10:30
→
11:00
Coffee break (included in the fee) 30m
-
11:00
→
13:30
Applications in Particle Physics
Convener: Dr Pietro Vischia (Universidad de Oviedo and Instituto de Ciencias y Tecnologías Espaciales de Asturias (ICTEA))
-
11:00
A Multiple Readout Ultra-High Segmentation Detector Concept For Future Colliders 25m
The Meadusa (Multiple Readout Ultra-High Segmentation) Detector Concept is an innovative approach to address the unique challenges and opportunities presented by future lepton colliders and beyond. The Meadusa concept prioritizes ultra-high segmentation and multi-modal data acquisition to achieve ultra-high spatial, timing and event-structure precision in particle detection. By combining a diverse array of active materials and readout technologies, the Meadusa design is intended to be optimized for specific single-particle and jet energy resolution, single-particle identification and flavour-tagging capabilities.
The Meadusa concept is based on bringing together multiple, highly granular active elements with complementary sensitivities to different particle species in a single detector layer. The Meadusa detector is expected to embed cutting-edge technologies and recent findings in optical, solid-state and gaseous detectors. The conceptual development has started with an initial design and is expected to evolve with the advancement of relevant technologies, following performance estimation and optimization with advanced machine learning and artificial intelligence techniques and experimental validation.
Here we report on the foundations of the concept, the initial design, and preliminary performance parameters under various experimental conditions, obtained using novel machine learning techniques.
Speaker: Burak Bilki (Beykent University (TR), The University of Iowa (US)) -
11:25
Discussion 5m
-
11:30
The Calibr-A-Ton: a novel method for calorimeter energy calibration 25m
The energy calibration of calorimeters at collider experiments, such as the ones at the CERN Large Hadron Collider, is crucial for achieving the experiment’s physics objectives. Standard calibration approaches have limitations which become more pronounced as detector granularity increases. In this paper we propose a novel calibration procedure to simultaneously calibrate individual detector cells belonging to a particle shower, by targeting a well-controlled energy reference. The method bypasses some of the difficulties that exist in more standard approaches. It is implemented using differentiable programming. In this paper, simulated energy deposits in the electromagnetic section of a high-granularity calorimeter are used to study the method and demonstrate its performance. It is shown that the method is able to correct for biases in the energy response.
Speaker: Shamik Ghosh (Centre National de la Recherche Scientifique (FR)) -
11:55
Discussion 5m
-
12:00
Constrained Optimization of Charged Particle Tracking with Multi-Agent Reinforcement Learning 25m
Detector optimisation requires reconstruction paradigms to be adaptable to changing geometries during the optimisation process, as well as to be differentiable if they are to become part of a gradient-based optimisation pipeline. Reinforcement learning has recently demonstrated immense success in modelling complex physics-driven systems, providing end-to-end trainable solutions by interacting with a simulated or real environment and maximizing a scalar reward signal. In this talk, we present a novel end-to-end optimizable multi-agent reinforcement learning approach with assignment constraints for reconstructing particle tracks in pixelated particle detectors, serving as a heuristic for a multidimensional assignment problem. We further highlight necessary components and modifications for efficient and stable optimisation under the high combinatorial complexity of particle tracking.
Using simulated data, generated for a particle detector designed for proton imaging, we empirically demonstrate the effectiveness of our approach compared to multiple baseline algorithms. We provide additional insights into the optimisation landscape, highlighting the importance of the proposed architectural components for collaborative optimisation of particle tracks.
Speaker: Tobias Kortus (University of Kaiserslautern-Landau (RPTU)) -
12:25
Discussion 5m
-
12:30
Towards end-to-end optimization of a Muon Collider Calorimeter 25m
Setup design is a critical aspect of experiment development, particularly in high-energy physics, where decisions influence research trajectories for decades. Within the MODE Collaboration, we aim to generalize Machine Learning methodologies to construct a fully differentiable pipeline for optimizing the geometry of the Muon Collider Electromagnetic Calorimeter.
Our approach leverages Denoising Diffusion Probabilistic Models (DDPMs) for signal generation and Graph Neural Networks (GNNs) for photon reconstruction in the presence of Beam-Induced Background from muon decays. Through automatic differentiation, we integrate these components into a unified framework that enables end-to-end optimization of calorimeter configurations. We present the structure of this pipeline, discuss key generation and reconstruction techniques, and showcase the latest results on proposed geometries.
Speaker: Federico Nardi (Universita e INFN, Padova (IT) - LPC Clermont) -
12:55
Discussion 5m
-
13:00
Reinforcement Learning for Physics Instrument Design 25m
We present a case for the use of Reinforcement Learning (RL) in the design of physics instruments, as an alternative to gradient-based instrument-optimization methods (arXiv:2412.10237). As context, we first reflect on our previous work optimizing the Muon Shield following the experiment’s approval, an effort successfully tackled using classical approaches such as Bayesian Optimization and supported by a complex but easy-to-use computing infrastructure. While effective, this earlier work highlighted the limitations of conventional methods in terms of design flexibility and scalability. The applicability of RL is then demonstrated in two empirical studies: the longitudinal segmentation of calorimeters, and the combined transverse segmentation and longitudinal placement of trackers in a spectrometer. Based on these experiments, we propose an alternative approach that offers unique advantages over differentiable programming and surrogate-based differentiable design optimization methods. First, RL algorithms possess inherent exploratory capabilities, which help mitigate the risk of convergence to local optima. Second, this approach eliminates the need to constrain the design to a predefined detector model with fixed parameters. Instead, it allows for the flexible placement of a variable number of detector components and facilitates discrete decision-making. We then discuss a road map for extending this idea to the design of very complex instruments. The presented study sets the stage for a novel framework in physics instrument design, offering a scalable and efficient approach that can be pivotal for future projects such as the Future Circular Collider (FCC), where highly optimized detectors are essential for exploring physics at unprecedented energy scales.
Speaker: Shah Rukh Qasim (University of Zurich (CH)) -
13:25
Discussion 5m
-
13:00
→
14:00
Lunch at OAC (included in the fee) 1h
-
14:00
→
15:30
Free time
-
15:30
→
16:00
Coffee break (included in the fee) 30m
-
16:00
→
17:00
Applications in Astro-HEP and Neutrino Physics
Conveners: Christian Glaser (Uppsala University), Dr Christian Haack (ECAP, FAU Erlangen)
-
16:00
Experimental validation of a DDPG-based approach for Fabry-Perot optical cavity locking control 25m
This work highlights the experimental framework employed to implement and validate Deep Deterministic Policy Gradient (DDPG) for controlling a Fabry-Perot (FP) optical cavity, a key component in interferometric gravitational-wave detectors. An initial focus is placed on the real-world setup characterisation, where high finesse values and mirror velocities introduce significant non-linearities.
DDPG, a model-free, off-policy algorithm that efficiently handles continuous action spaces, is used to address these challenges. It integrates actor-critic networks with experience replay and slow-updating target networks, to achieve stable learning. In addition, we apply input and output normalization which mitigates issues arising from diverse physical units and variable input scales, facilitating robust policy updates and portability without exhaustive manual tuning.
To transition from simulation to the physical system, the FP cavity is first accurately modelled in a high-fidelity simulator. Strategies such as accounting for delays and noise sources are incorporated to narrow the reality gap and address sim-to-real transfer. The trained DDPG agent is then deployed on the hardware, demonstrating how deterministic policy gradients can adapt to real-time feedback, latency, and environmental uncertainties. This integration of simulation, DDPG-based control, and experimental measurement represents a significant step toward reliable and autonomous optical cavity locking, paving the way for advanced control in gravitational-wave detection and other high-precision photonic applications.
Speaker: Mr Andrea Svizzeretto (University of Perugia) -
16:25
Discussion 5m
-
16:30
Artificial Scientific Discovery for New Quantum Experiments 25m
The integration of artificial intelligence (AI) into scientific research is reshaping discovery across disciplines—from protein folding and materials design to theorem proving. These advances mark AI’s evolution from a computational tool to an active participant in scientific exploration.
Quantum physics represents a particularly promising frontier for AI-driven discovery. As we push deeper into the quantum realm, the combinatorial design space of possible experiments expands rapidly. This, combined with the counterintuitive nature of quantum mechanics, often surpasses human intuition. The resulting difficulty in exploring this complex space poses a major challenge to both fundamental research and practical quantum technologies.
Here, we demonstrate how AI can help address these challenges to discover new quantum setups. We introduce two highly efficient digital discovery frameworks: PyTheus and esQueranto. PyTheus generates interpretable experimental designs for complex quantum tasks, often producing setups that human researchers can readily understand and implement. In contrast, esQueranto is optimized for practical applications and can efficiently explore real-world experimental configurations. We hope our approach will accelerate progress in quantum optics and inspire new directions in quantum hardware and technology.
Speaker: Dr Xuemei Gu (Friedrich Schiller University Jena) -
16:55
Discussion 5m
-
17:30
→
19:30
Wine Tasting and Poster Session
-
17:30
A Comparative Analysis of Synthetic Medical X-Ray Image Generation: DALL-E vs. Stable Diffusion 2h
Medical imaging—including X-rays and MRI scans—is crucial for diagnostics and research. However, the development and training of AI diagnostic models are hindered by limited access to large, high-quality datasets due to privacy concerns, high costs, and data scarcity. Synthetic image generation via differentiable programming has emerged as an effective strategy to augment real datasets with diagnostically relevant, high-fidelity images. This approach utilizes gradient optimization to fine-tune image parameters, ensuring that synthetic outputs maintain the essential features of authentic medical images.
In this study, we compare two state-of-the-art generative AI models—DALL-E, a proprietary model developed by OpenAI, and Stable Diffusion, an open-source alternative—for their effectiveness in generating synthetic medical X-ray images. DALL-E is recognized for its ease of use, robust pre-trained capabilities, and high-resolution outputs, while Stable Diffusion provides extensive customization and fine-tuning options that may lead to enhanced performance in specific applications. We apply both models to diverse medical imaging datasets, including those related to COVID-19, tuberculosis, and other respiratory diseases, to significantly expand the size of available datasets.
We assess the impact of synthetic image augmentation by comparing the performance of AI models trained exclusively on real data with those trained on a combination of real and synthetic images. Our evaluation focuses on diagnostic accuracy, image quality, and overall reliability. The results highlight important trade-offs between accessibility, customization, and model performance, offering valuable insights into the practical application of synthetic image generation techniques for improving AI-assisted diagnostics in medical imaging.
Speaker: Rukshak Kapoor (Thapar Institute of Engineering & Technology, Patiala (India)) -
17:30
Advancing Detector Calibration and Event Reconstruction in Water Cherenkov Detectors through Differentiable Simulation 2h
Next-generation monolithic Water Cherenkov detectors aim to probe fundamental questions in neutrino physics. These measurements demand unprecedented precision in detector calibration and event reconstruction, pushing beyond the capabilities of traditional techniques. We present a novel framework for differentiable simulation of Water Cherenkov detectors that enables end-to-end optimization through gradient-based methods. By leveraging JAX's automatic differentiation and implementing a grid-based acceleration system, our framework achieves millisecond-scale simulation times - four orders of magnitude faster than traditional approaches. The framework can incorporate neural network surrogates for unknown physical phenomena while maintaining interpretability throughout the simulation chain. As a demonstration, we employ a neural network to model differentiable photon generation probability distributions. Our modular architecture extends to various Water Cherenkov detectors, representing a significant step toward addressing systematic limitations in future neutrino experiments through differentiable programming techniques.
Speaker: Omar Alterkait -
17:30
Bias Reduction Using Expectation Maximization in the Optimization of an AI-Assisted Muon Tomography System 2h
Muon tomography is a powerful imaging technique that leverages cosmic-ray muons to probe the internal structure of large-scale objects. However, traditional reconstruction methods, such as the Point of Closest Approach (POCA), introduce significant bias, leading to suboptimal image quality and inaccurate material characterization. To address this issue, we propose an approach based on Expectation Maximization (EM), a probabilistic iterative method that refines the reconstruction by reducing bias in the inferred muon trajectories.
In this work, we present the implementation of an EM algorithm tailored for muon tomography and compare its performance against the POCA baseline. We analyze the improvements in reconstruction accuracy and discuss the impact of EM-based optimization in AI-assisted muon imaging systems. This approach has been integrated into the muograph package.
Speaker: Marta de la Puente Santos -
17:30
Design of an Imaging Air Cherenkov Telescope array layout with differentiable programming 2h
Current optimization of ground-based Cherenkov telescope arrays relies on brute-force approaches based on large simulations, requiring both large amounts of storage and long computation times. Exploring the full phase space of telescope positioning for a given array would require even more simulations. To optimize any array layout, we explore the possibility of developing a differentiable program with surrogate models of IACT arrays based on high-level instrument response functions.
While the simulation time of a single telescope's response to a cosmic-ray event can be significantly reduced with its instrument response function or with generative models, it is not straightforward to model the array from a set of single-telescope surrogate models, as the array is a stereoscopic imaging system. The complexity also increases if the telescopes in the array are of different types.
Additionally, the optimum array layout depends on the scientific use case. Previous array layout optimizations were obtained by minimizing the sensitivity of the array, a metric that depends on several high-level parameters such as the trigger efficiency, the energy and angular resolution, and the background rejection capability. The variety of telescope types in IACT arrays, such as in the Cherenkov Telescope Array Observatory (CTAO), not only extends the sensitive energy range but also allows for cross-calibration of the instruments. Therefore, the optimal array layout is not only the one that minimizes sensitivity but also the one that reduces the systematic uncertainties.
We focus on the optimization of a telescope array based on the SST-1M and MACE IACTs in Hanle, Ladakh, India, aiming to build a generic optimization pipeline for future ground-based cosmic-ray observatories.
Speaker: Cyril Alispach (Universite de Geneve (CH)) -
17:30
Design optimization of hadronic calorimeters for future colliders 2h
In modern particle detectors, calorimeters provide critical energy measurements of particles produced in high-energy collisions. The demanding requirements of next-generation collider experiments would benefit from a systematic approach to the optimization of calorimeter designs. The performance of calorimeters is primarily characterized by their energy resolution, parameterized by a stochastic term which reflects sampling fluctuations, and a constant term, accounting for calibration uncertainties and non-uniformities. These terms serve as figures of merit for detector performance, leading to improved reconstruction of physics objects.
This work focuses on optimizing the layer composition of hadronic calorimeters for the FCC detector concepts. Through detailed GEANT4-based simulations, and the use of lightweight full detector simulation tools (COCOA), we analyze the impact of varying passive and active material proportions and layer thickness distribution on energy resolution performance. Our methodology aims to isolate these contributions from other design factors, in order to develop a closed optimization framework that evaluates configurations against physics performance targets, while still addressing practical constraints.
Speaker: Bruno Jorge De Matos Rodrigues (Laboratory of Instrumentation and Experimental Particle Physics (PT)) -
17:30
Discriminating Hadronic Showers with Deep Neural Networks in a High-Granularity Calorimeter 2h
The increasing importance of high-granularity calorimetry in particle physics originates from its ability to enhance event reconstruction and jet substructure analysis. In particular, the identification of hadronic decays within boosted jets and the application of particle flow techniques have demonstrated the advantages of fine spatial resolution in calorimeters. In this study, we investigate whether arbitrarily high granularity can also facilitate the classification of hadron-induced showers and aim to determine the granularity scale at which information on particle identity is extractable or lost. Using GEANT4, we simulate a 100 × 100 × 200-cell calorimeter composed of lead tungstate (PbWO₄), where each cell has dimensions of 3 mm × 3 mm × 6 mm. We analyse the discrimination of showers produced by protons, charged pions, and kaons based on the detailed topology of energy deposition. To achieve this, we use deep learning algorithms, specifically deep neural networks, to classify the shower patterns and evaluate the impact of calorimeter granularity on discrimination power. Our preliminary results indicate significant potential for hadron identification through high-granularity calorimetry, which could improve particle identification in future high-energy physics experiments.
Speaker: Mr Abhishek (National Institute of Science Education and Research, Jatni, 752050, India) -
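As a rough illustration of the classification setup described in this abstract (not the authors' code), the voxelized energy deposits of a shower can be flattened and fed to a small feed-forward network with a softmax over the three hadron hypotheses. The grid size, layer widths, and random weights below are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, w1, b1, w2, b2):
    h = np.maximum(0.0, x @ w1 + b1)       # ReLU hidden layer
    return softmax(h @ w2 + b2)            # probabilities over 3 classes

# A coarse 10 x 10 x 20 voxel grid standing in for the full calorimeter.
n_vox = 10 * 10 * 20
x = rng.random((4, n_vox))                 # mini-batch of 4 toy showers
w1 = rng.normal(0, 0.01, (n_vox, 32)); b1 = np.zeros(32)
w2 = rng.normal(0, 0.01, (32, 3));     b2 = np.zeros(3)

probs = forward(x, w1, b1, w2, b2)         # (4, 3): p(proton, pion, kaon)
```

In practice the network would be trained on labelled GEANT4 showers; this sketch only shows the input/output shape of the problem.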
17:30
Hadron Identification Prospects With Granular Calorimeters 2h
In this work we consider the problem of determining the identity of hadrons at high energies based on the topology of their energy depositions in dense matter, along with the time of the interactions. Using GEANT4 simulations of a homogeneous lead tungstate calorimeter with high transverse and longitudinal segmentation, we investigated the discrimination of protons, positive pions, and positive kaons at 100 GeV. The analysis focuses on the impact of calorimeter granularity by progressively merging detector cells and extracting features like energy deposition patterns and timing information. Two machine learning approaches, XGBoost and fully connected deep neural networks, were employed to assess the classification performance across particle pairs. The results indicate that fine segmentation improves particle discrimination, with higher granularity yielding more detailed characterization of energy showers. Additionally, the results highlight the importance of shower radius, energy fractions, and timing variables in distinguishing particle types. The XGBoost model demonstrated computational efficiency and interpretability advantages over deep learning for tabular data structures, while achieving similar classification performance. This motivates further work required to combine high- and low-level feature analysis, e.g., using convolutional and graph-based neural networks, and extending the study to a broader range of particle energies and types.
Speaker: Dr Abhishek (National Institute of Science Education and Research, India) -
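The cell-merging step used to scan granularity can be sketched as summing non-overlapping blocks of a transverse energy map, which emulates a coarser readout while conserving the total deposited energy. The map size and merge factor below are illustrative assumptions, not the study's configuration:

```python
import numpy as np

def merge_cells(energy, k):
    """Sum k x k blocks of a 2D energy map (both sides must be divisible by k)."""
    n, m = energy.shape
    return energy.reshape(n // k, k, m // k, k).sum(axis=(1, 3))

rng = np.random.default_rng(1)
fine = rng.random((8, 8))          # fine-granularity energy deposits
coarse = merge_cells(fine, 2)      # 4 x 4 merged map, same total energy
```

Repeating the merge with increasing k and re-evaluating the classifier at each step gives the granularity scan described in the abstract.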
17:30
Neuromorphic Readout for Hadron Calorimeters 2h
In this work we simulate hadrons impinging on a homogeneous lead-tungstate (PbWO4) calorimeter to investigate how the resulting light yield and its temporal structure, as detected by an array of light-sensitive sensors, can be processed by a neuromorphic computing system. Our model encodes temporal photon distributions in the form of spike trains and employs a fully connected spiking neural network to regress the total deposited energy, as well as the position and spatial distribution of the light emissions within the sensitive material. The model is able to estimate the aforementioned observables in both single task and multi-tasks scenarios, obtaining consistent results in both settings. The extracted primitives offer valuable topological information about the shower development in the material, achieved without requiring a segmentation of the active medium. A potential nanophotonic implementation using III-V semiconductor nanowires is discussed.
Speaker: Dr Alessandro Breccia (University of Padova) -
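A minimal sketch of the temporal encoding idea (a hypothetical illustration, not the authors' implementation): photon arrival times at one sensor are histogrammed into fixed time bins and thresholded into a binary spike train, the input format a spiking neural network consumes. The bin count, window, and threshold are assumptions:

```python
import numpy as np

def to_spike_train(arrival_times_ns, n_bins=20, window_ns=10.0, threshold=1):
    # Count photons per time bin, then emit a spike where the count reaches threshold.
    counts, _ = np.histogram(arrival_times_ns, bins=n_bins, range=(0.0, window_ns))
    return (counts >= threshold).astype(np.int8)

times = np.array([0.3, 0.4, 1.2, 1.25, 7.9])   # photon arrivals at one sensor (ns)
spikes = to_spike_train(times)                  # binary train of length 20
```

One such train per sensor, stacked over the sensor array, would form the input to the fully connected spiking network described above.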
17:30
Towards end-to-end optimization of a Muon Collider Calorimeter 2h
Setup design is a critical aspect of experiment development, particularly in high-energy physics, where decisions influence research trajectories for decades. Within the MODE Collaboration, we aim to generalize Machine Learning methodologies to construct a fully differentiable pipeline for optimizing the geometry of the Muon Collider Electromagnetic Calorimeter.
Our approach leverages Denoising Diffusion Probabilistic Models (DDPMs) for signal generation and Graph Neural Networks (GNNs) for photon reconstruction in the presence of Beam-Induced Background from muon decays. Through automatic differentiation, we integrate these components into a unified framework that enables end-to-end optimization of calorimeter configurations. We present the structure of this pipeline, discuss key generation and reconstruction techniques, and showcase the latest results on proposed geometries.
Speaker: Federico Nardi (Universita e INFN, Padova (IT) - LPC Clermont) -
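The core mechanism of such an end-to-end pipeline can be reduced to a toy example: a geometry parameter feeds a differentiable loss, and gradient descent moves it toward the optimum. The quadratic surrogate and its optimum at theta = 2.5 are pure assumptions standing in for the full generation-plus-reconstruction chain:

```python
def loss(theta):
    return (theta - 2.5) ** 2          # surrogate for reconstruction error

def grad(theta):
    return 2.0 * (theta - 2.5)         # analytic gradient of the surrogate

theta = 0.0                            # initial geometry parameter
for _ in range(200):
    theta -= 0.1 * grad(theta)         # plain gradient descent
```

In the actual pipeline the gradient would flow through the DDPM signal generator and the GNN reconstruction via automatic differentiation rather than being written by hand.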
17:30
Using End-to-End Optimized Summary Statistics to Improve IceCube's Measurement of the Galactic Neutrino Flux 2h
Characterizing the astrophysical neutrino flux with the IceCube Neutrino Observatory traditionally relies on a binned forward-folding likelihood approach. Insufficient Monte Carlo (MC) statistics in each bin limits the granularity and dimensionality of the binning scheme. We employ a neural network to optimize a summary statistic that serves as the input for data analysis, enabling the inclusion of additional observables without compromising statistical precision. Achieving end-to-end optimization of the summary statistic requires adapting the existing analysis pipeline to be fully differentiable, specifically by employing differentiable binned kernel density estimation (KDE), computing the test statistic using Fisher information, and incorporating data sampling techniques for neural network inputs. This work will detail the application of end-to-end optimized summary statistics in analyzing and characterizing the Galactic neutrino flux, achieving improved resolution for selected signal parameters and models.
Speaker: Oliver Janik -
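The binned KDE mentioned in the abstract replaces hard histogram counts with a smooth, differentiable alternative: each Monte Carlo event spreads a Gaussian kernel over fixed bin centers, so bin contents vary smoothly with the event observables. The bandwidth and binning below are illustrative assumptions:

```python
import numpy as np

def binned_kde(samples, centers, bandwidth=0.3):
    # Gaussian kernel of each sample evaluated at every bin center, then averaged.
    d = samples[:, None] - centers[None, :]
    k = np.exp(-0.5 * (d / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    return k.mean(axis=0)              # smooth density estimate per bin center

rng = np.random.default_rng(2)
samples = rng.normal(0.0, 1.0, 500)    # toy MC events of one observable
centers = np.linspace(-4, 4, 41)
density = binned_kde(samples, centers)
```

Because every operation here is smooth, gradients with respect to the sample values (and hence to upstream network parameters) are well defined, unlike for ordinary histogram binning.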
17:30
Using Source Transformation Based Automatic Differentiation To Solve Inverse Problems 2h
Many scientific computations rely on Monte Carlo methods, which pose challenges for automatic differentiation (AD). The ability to infer underlying parameters from observed data through a Monte Carlo process is essential for improving simulations and optimizing models of physical processes, in computer graphics, and in physics-informed machine learning. AD enables solving such inverse problems and assists machine learning approaches in scientific applications where simulation codes encode domain-specific information about physics constraints.
In this talk, we demonstrate solving an inverse problem in the area of computer graphics by applying the compiler-based source transformation tool, Clad, to a path-based ray tracing algorithm written in C++. We will discuss the challenges we faced while integrating Clad into this ray tracing application, including enhancements in STL support and user-defined object-oriented programming constructs. We will also evaluate Clad’s performance through benchmarks and compare it to other differentiation methods. Finally, we will use this application to illustrate Clad’s advantages in automating derivative computations for scientific and engineering problems.
Speaker: Petro Zarytskyi (Princeton University (US))
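As a conceptual illustration of what an AD tool automates (this is a hand-written Python sketch of forward-mode AD with dual numbers, not Clad's C++ API or its generated code): each value carries its derivative, and arithmetic propagates both together:

```python
class Dual:
    """A value paired with its derivative; arithmetic applies the chain rule."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__

def f(x):
    return x * x * x + 2 * x       # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2

y = f(Dual(2.0, 1.0))              # seed dx/dx = 1
# y.val == f(2) == 12.0, y.dot == f'(2) == 14.0
```

A source-transformation tool like Clad instead generates a dedicated derivative function at compile time, which avoids the runtime overhead of operator overloading.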
- 20:00 → 21:00 Dinner at OAC (included in the fee) 1h
- 07:45 → 08:45 Breakfast at OAC (only for people with OAC accommodation) 1h
- 08:45 → 18:45 Departure day 10h