This presentation will explore the intersection of neural networks and differentiable programming in addressing critical challenges within the maritime domain. It will begin with an overview of key issues facing the sector, followed by a summary of research conducted at the DLR Institute for the Protection of Maritime Infrastructures, where research using differentiable methods...
We study the application of a spiking neural network architecture for identifying charged particle trajectories via unsupervised learning of synaptic delays using a spike-time-dependent plasticity rule. In the considered model, the neurons receive time-encoded information on the position of particle hits in a tracking detector for a particle collider, modeled according to the geometry of the...
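To make the delay-learning idea concrete, here is a minimal NumPy sketch of a generic delay-plasticity update, in which each synaptic delay is nudged so that the delayed pre-synaptic spike arrives closer to the post-synaptic spike time. The encoding, network size, and update rule are illustrative assumptions, not the model described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 8                                  # input channels (e.g. time-encoded hit positions)
delays = rng.uniform(0.0, 5.0, n_inputs)      # synaptic delays (hypothetical ms units)
lr = 0.1                                      # learning rate for the delay update

def delay_update(pre_times, post_time, delays, lr):
    """Nudge each delay so the delayed pre-spike lands closer to the post-spike.

    A generic delay-plasticity rule: the timing error between the delayed
    pre-synaptic spike and the post-synaptic spike drives the update.
    """
    arrival = pre_times + delays              # when each delayed spike reaches the neuron
    error = post_time - arrival               # positive -> spike arrived too early
    return np.clip(delays + lr * error, 0.0, None)

# Toy training loop on random spike patterns; the post-spike time here is just
# a placeholder (earliest arrival plus a fixed latency).
for _ in range(200):
    pre_times = rng.uniform(0.0, 10.0, n_inputs)
    post_time = np.min(pre_times + delays) + 1.0
    delays = delay_update(pre_times, post_time, delays, lr)

print(delays)
```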
Detector optimisation requires reconstruction paradigms to be adaptable to changing geometries during the optimisation process, as well as to be differentiable if they are to become part of a gradient-based optimisation pipeline. Reinforcement learning has recently demonstrated immense success in modelling complex physics-driven systems, providing end-to-end trainable solutions by interacting with...
The new fully software-based trigger of the LHCb experiment operates at a 30 MHz data rate and imposes tight constraints on GPU execution time. Track-reconstruction algorithms in this first-level trigger must efficiently select detector hits, group them, build tracklets, account for the LHCb magnetic field, extrapolate and fit trajectories, and select the best track candidates to make a...
Using tooling from the Scikit-HEP ecosystem, we implement differentiable analysis pipelines for representative HEP analysis use cases and provide complementary examples to the IRIS-HEP Analysis Grand Challenge. This presentation details the process and related development work and covers the example workflows that...
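As a rough illustration of what a differentiable analysis step can look like with Scikit-HEP tooling, the sketch below differentiates a pyhf likelihood through its JAX backend. The one-bin model, observed count, and parameter ordering are assumptions made for illustration and are not taken from the Analysis Grand Challenge examples.

```python
import jax
import jax.numpy as jnp
import pyhf

pyhf.set_backend("jax")  # make model evaluations JAX-traceable

# A one-bin counting model with an uncorrelated background uncertainty.
model = pyhf.simplemodels.uncorrelated_background(
    signal=[5.0], bkg=[50.0], bkg_uncertainty=[7.0]
)
data = jnp.asarray([53.0] + model.config.auxdata)

def nll(mu):
    # Negative log-likelihood as a function of the signal strength only; the
    # nuisance parameter is held at its nominal value (assumed parameter
    # ordering: [mu, gamma], as suggested by model.config.suggested_init()).
    pars = jnp.asarray([mu, 1.0])
    return -model.logpdf(pars, data)[0]

print(nll(1.0), jax.grad(nll)(1.0))
```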
We introduce a novel approach for end-to-end black-box optimization of high energy physics (HEP) detectors using local deep learning (DL) surrogates. These surrogates approximate a scalar objective function that encapsulates the complex interplay of particle-matter interactions and physics analysis goals. In addition to a standard reconstruction-based metric commonly used in the field, we...
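A toy sketch of the local-surrogate idea, under simplified assumptions: a small JAX MLP is repeatedly fitted to noisy black-box evaluations in a neighbourhood of the current design, and the design is then updated along the surrogate's gradient. The objective, network, and step sizes are made up; a real pipeline would replace `black_box` with the full simulation, reconstruction, and analysis chain.

```python
import jax
import jax.numpy as jnp

def black_box(design, key):
    # Stand-in for the expensive, noisy, non-differentiable objective.
    return jnp.sum((design - 1.5) ** 2) + 0.05 * jax.random.normal(key)

def init_mlp(key, dim, width=32):
    k1, k2 = jax.random.split(key)
    return {
        "w1": jax.random.normal(k1, (dim, width)) / jnp.sqrt(dim),
        "b1": jnp.zeros(width),
        "w2": jax.random.normal(k2, (width, 1)) / jnp.sqrt(width),
        "b2": jnp.zeros(1),
    }

def mlp(params, x):
    h = jnp.tanh(x @ params["w1"] + params["b1"])
    return (h @ params["w2"] + params["b2"]).squeeze(-1)

def fit_surrogate(params, xs, ys, steps=500, lr=1e-2):
    @jax.jit
    def step(p):
        loss = lambda q: jnp.mean((mlp(q, xs) - ys) ** 2)
        grads = jax.grad(loss)(p)
        return jax.tree_util.tree_map(lambda w, g: w - lr * g, p, grads)
    for _ in range(steps):
        params = step(params)
    return params

dim = 4
key = jax.random.PRNGKey(0)
design = jnp.zeros(dim)
params = init_mlp(key, dim)

for _ in range(10):
    # Evaluate the black box on designs sampled around the current point,
    # refit the local surrogate, then step along the surrogate's gradient.
    key, k_sample, k_eval = jax.random.split(key, 3)
    xs = design + 0.2 * jax.random.normal(k_sample, (64, dim))
    ys = jax.vmap(black_box)(xs, jax.random.split(k_eval, 64))
    params = fit_surrogate(params, xs, ys)
    design = design - 0.1 * jax.grad(lambda d: mlp(params, d[None, :])[0])(design)

print(design)  # should drift towards the (made-up) optimum at 1.5
```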
We present a case for the use of Reinforcement Learning (RL) for the design of physics instruments as an alternative to the gradient-based instrument-optimization methods of arXiv:2412.10237. As context, we first reflect on our previous work optimizing the Muon Shield following the experiment's approval, an effort successfully tackled using classical approaches such as Bayesian Optimization,...
Inverse problems such as magnetic resonance imaging, computed tomography, optical inverse rendering or muon tomography, amongst others, occur in a vast range of scientific, medical and security applications and are usually solved with highly specific algorithms depending on the task.
Approaching these problems from a physical perspective and reformulating them as a function of particle...
RooFit's integration with the Clad infrastructure has introduced automatic differentiation (AD), leading to significant speedups and driving major improvements in its minimization framework. In addition, the AD integration has inspired several optimizations and simplifications of key RooFit components in general. The AD framework in RooFit is designed to be extensible, providing all necessary...
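A minimal PyROOT sketch of requesting the code-generation evaluation backend, which uses Clad-derived gradients during minimization. This assumes a recent ROOT release in which the `EvalBackend` fit option and its keyword-argument pythonization are available; the exact spelling of the option may differ between versions.

```python
import ROOT

# Simple Gaussian model and a toy dataset.
x = ROOT.RooRealVar("x", "x", -10, 10)
mean = ROOT.RooRealVar("mean", "mean", 0.5, -10, 10)
sigma = ROOT.RooRealVar("sigma", "sigma", 2.0, 0.1, 10)
gauss = ROOT.RooGaussian("gauss", "gauss", x, mean, sigma)
data = gauss.generate(ROOT.RooArgSet(x), 10000)

# Fit with the code-generation backend (assumption: available in recent ROOT),
# which compiles the likelihood and its Clad-generated gradient for the minimizer.
result = gauss.fitTo(data, EvalBackend="codegen", Save=True, PrintLevel=-1)
result.Print()
```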
GPUs have become increasingly popular for their ability to perform parallel operations efficiently, driving interest in General-Purpose GPU Programming. Scientific computing, in particular, stands to benefit greatly from these capabilities. However, parallel programming systems such as CUDA introduce challenges for code transformation tools due to their reliance on low-level hardware...
Deep generative models have become powerful tools for alleviating the computational burden of traditional Monte Carlo generators in producing high-dimensional synthetic data. However, validating these models remains challenging, especially in scientific domains requiring high precision, such as particle physics. Two-sample hypothesis testing offers a principled framework to address this task....
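As one concrete instance of such a test, the sketch below runs a permutation two-sample test using the maximum mean discrepancy between reference and generated samples. This is a generic illustration on made-up Gaussian data, not the specific test statistics studied in the contribution.

```python
import numpy as np

def mmd2(x, y, bandwidth=1.0):
    """Biased estimate of the squared maximum mean discrepancy with an RBF kernel."""
    def kernel(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2 * bandwidth**2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

def permutation_pvalue(x, y, n_perm=200, seed=0):
    """p-value of the observed MMD under the null that x and y share a distribution."""
    rng = np.random.default_rng(seed)
    observed = mmd2(x, y)
    pooled = np.concatenate([x, y])
    n = len(x)
    null = []
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        null.append(mmd2(pooled[perm[:n]], pooled[perm[n:]]))
    return (np.sum(np.asarray(null) >= observed) + 1) / (n_perm + 1)

# Toy check: "reference" vs slightly shifted "generated" samples in 3 dimensions.
rng = np.random.default_rng(1)
reference = rng.normal(size=(300, 3))
generated = rng.normal(loc=0.15, size=(300, 3))
print(permutation_pvalue(reference, generated))
```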
This presentation will describe a method to discover the governing equations in physical systems with multiple regimes and lengthscales, using minimum entropy criteria to optimize results. The historically challenging problem of turbulent flow is used as an example, infamous for its half-ordered, half-chaotic behavior across several orders of magnitude. Exact solutions to the Navier-Stokes...
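For orientation, equation discovery is often cast as sparse regression over a library of candidate terms; the sketch below uses sequential thresholded least squares on toy one-dimensional data. The minimum-entropy selection criterion of this work is not implemented here, and the dynamics and candidate library are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from a known 1D dynamics: du/dt = -0.5*u + 2.0*u**3 (ground truth).
u = rng.uniform(-1.0, 1.0, 500)
dudt = -0.5 * u + 2.0 * u**3

# Candidate-term library: [1, u, u^2, u^3, sin(u)].
library = np.column_stack([np.ones_like(u), u, u**2, u**3, np.sin(u)])
names = ["1", "u", "u^2", "u^3", "sin(u)"]

# Sequential thresholded least squares: fit, zero out small coefficients, refit.
coef, *_ = np.linalg.lstsq(library, dudt, rcond=None)
for _ in range(10):
    small = np.abs(coef) < 0.1
    coef[small] = 0.0
    active = ~small
    if active.any():
        coef[active], *_ = np.linalg.lstsq(library[:, active], dudt, rcond=None)

print({n: c for n, c in zip(names, coef) if c != 0.0})
```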
Modern scientific computing often involves nested and variable-length data structures, which pose challenges for automatic differentiation (AD). Awkward Array is a library for manipulating irregular data and its integration with JAX enables forward and reverse mode AD on irregular data. Several Python libraries, such as PyTorch, TensorFlow, and Zarr, offer variations of ragged data structures,...
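A minimal sketch of the Awkward Array and JAX integration: after registering the JAX backend, a scalar built from a jagged array can be differentiated with `jax.grad`, and the gradient keeps the same jagged layout. The event content is made up and the reduction is deliberately simple.

```python
import awkward as ak
import jax

ak.jax.register_and_check()  # enable JAX tracing/differentiation for Awkward Arrays

# Jagged input: a different number of particle pT values per event.
pts = ak.Array([[10.0, 20.0], [30.0], [5.0, 15.0, 25.0]], backend="jax")

def scalar_summary(arr):
    # A toy scalar built from the ragged data: sum over all particles of pT^2.
    return ak.sum(arr * arr)

value = scalar_summary(pts)
gradient = jax.grad(scalar_summary)(pts)   # gradient has the same jagged layout
print(value)
print(gradient.tolist())
```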
Muon tomography is a powerful imaging technique that leverages cosmic-ray muons to probe the internal structure of large-scale objects. However, traditional reconstruction methods, such as the Point of Closest Approach (POCA), introduce significant bias, leading to suboptimal image quality and inaccurate material characterization. To address this issue, we propose an approach based on...
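For reference, the POCA baseline that this work contrasts against can be written in a few lines: given the incoming and outgoing muon track segments, each described by a point and a direction, the scattering vertex is estimated as the midpoint of the closest-approach segment between the two lines. The track parameterization below is illustrative.

```python
import numpy as np

def poca(p_in, d_in, p_out, d_out):
    """Point of closest approach between two 3D lines.

    Each line is given by a point `p` and a (not necessarily unit) direction `d`;
    the scattering vertex is the midpoint of the closest-approach segment.
    """
    d_in, d_out = np.asarray(d_in, float), np.asarray(d_out, float)
    w0 = np.asarray(p_in, float) - np.asarray(p_out, float)
    a, b, c = d_in @ d_in, d_in @ d_out, d_out @ d_out
    d, e = d_in @ w0, d_out @ w0
    denom = a * c - b * b
    if np.isclose(denom, 0.0):           # (nearly) parallel tracks: no unique POCA
        return None
    s = (b * e - c * d) / denom           # parameter along the incoming line
    t = (a * e - b * d) / denom           # parameter along the outgoing line
    closest_in = np.asarray(p_in, float) + s * d_in
    closest_out = np.asarray(p_out, float) + t * d_out
    return 0.5 * (closest_in + closest_out)

# Toy example: tracks that cross at (0, 1, 0).
print(poca([0, 0, -10], [0.0, 0.1, 1.0], [0, 4, 10], [0.0, 0.3, 1.0]))
```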
P-ONE is a planned cubic-kilometer-scale neutrino detector in the Pacific Ocean. It will measure high-energy astrophysical neutrinos to help characterize the nature of astrophysical accelerators. Using existing deep-sea infrastructure provided by Ocean Networks Canada (ONC), P-ONE will instrument the ocean with optical modules - which host PMTs as well as readout electronics - deployed on...
In-ice radio detection of neutrinos is a rapidly growing field and a promising technique for discovering the predicted but yet unobserved ultra-high-energy astrophysical neutrino flux. With the ongoing construction of the Radio Neutrino Observatory in Greenland (RNO-G) and the planned radio extension of IceCube-Gen2, we have a unique opportunity to improve the detector design now and...
The current optimization of ground-based Cherenkov telescope arrays relies on brute-force approaches based on large simulation campaigns, which require both large amounts of storage and long computation times. Exploring the full phase space of telescope positions for a given array would require even more simulations. To optimize any array layout, we explore the possibility of developing a differentiable program...
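A toy JAX sketch of the underlying idea, with an invented figure of merit: telescope positions are free parameters, each telescope has a smooth Gaussian acceptance footprint, and the layout is updated by gradient ascent on a soft stereo-coverage metric. The real differentiable program would supply the actual array response in place of this stand-in.

```python
import jax
import jax.numpy as jnp

n_tel = 9
positions = jax.random.uniform(jax.random.PRNGKey(0), (n_tel, 2),
                               minval=-500.0, maxval=500.0)        # metres

# Sampled shower core positions on the ground (stand-in for simulated events).
cores = jax.random.uniform(jax.random.PRNGKey(1), (2000, 2),
                           minval=-600.0, maxval=600.0)

def figure_of_merit(pos):
    # Smooth stand-in: each telescope "sees" a core with a Gaussian footprint;
    # an event counts if roughly two or more telescopes see it (soft stereo cut).
    d2 = jnp.sum((cores[:, None, :] - pos[None, :, :]) ** 2, axis=-1)
    response = jnp.exp(-d2 / (2 * 150.0**2))          # per-telescope acceptance
    multiplicity = jnp.sum(response, axis=1)
    return jnp.mean(jax.nn.sigmoid(4.0 * (multiplicity - 2.0)))

grad_fom = jax.grad(figure_of_merit)
print(figure_of_merit(positions))                     # before optimization
for _ in range(200):
    positions = positions + 3.0e3 * grad_fom(positions)  # step size tuned for this toy
print(figure_of_merit(positions))                     # coverage typically improves
```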
In modern particle detectors, calorimeters provide critical energy measurements of particles produced in high-energy collisions. Meeting the demanding requirements of next-generation collider experiments would benefit from a systematic approach to the optimization of calorimeter designs. The performance of calorimeters is primarily characterized by their energy resolution, parameterized by a...
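For context, calorimeter energy resolution is conventionally parameterized by stochastic, noise, and constant terms added in quadrature; the sketch below fits these coefficients with SciPy to invented example points.

```python
import numpy as np
from scipy.optimize import curve_fit

def resolution(E, a, b, c):
    """sigma_E / E = a/sqrt(E) (+) b/E (+) c, added in quadrature (E in GeV)."""
    return np.sqrt((a / np.sqrt(E)) ** 2 + (b / E) ** 2 + c**2)

# Invented example points: relative resolution at a few beam energies.
E = np.array([5.0, 10.0, 20.0, 50.0, 100.0, 200.0])
sigma_over_E = np.array([0.055, 0.040, 0.030, 0.021, 0.017, 0.014])

(a, b, c), _ = curve_fit(resolution, E, sigma_over_E,
                         p0=(0.10, 0.1, 0.01), bounds=(0, np.inf))
print(f"stochastic {a:.3f}/sqrt(E), noise {b:.3f}/E, constant {c:.3f}")
```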
Differentiability in detector simulation can enable efficient and effective detector optimisation. We are developing an AD-enabled detector simulation of a liquid argon time projection chamber to facilitate simultaneous detector calibration through gradient-based optimisation. This approach allows us to account for the correlations of the detector modeling parameters comprehensively and avoid...
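A toy JAX sketch of the general principle, not the actual LArTPC simulation: a differentiable forward model of charge attenuation during drift, exp(-t_drift / tau), whose electron-lifetime parameter is calibrated by gradient descent against observed charges. All values are made up.

```python
import jax
import jax.numpy as jnp

# Toy hits: unit deposited charge with uniformly distributed drift times (ms).
t_drift = jax.random.uniform(jax.random.PRNGKey(0), (1000,), minval=0.0, maxval=2.0)

true_lifetime = 3.0                                  # ms (made-up value)
q_obs = jnp.exp(-t_drift / true_lifetime)            # observed (attenuated) charge

def forward(lifetime):
    # Differentiable detector response: attenuation of drifting charge.
    return jnp.exp(-t_drift / lifetime)

def loss(lifetime):
    return jnp.mean((forward(lifetime) - q_obs) ** 2)

lifetime = 2.0                                       # initial guess
for _ in range(300):
    lifetime -= 20.0 * jax.grad(loss)(lifetime)      # step size tuned for this toy

print(lifetime)   # approaches the true value of 3.0 ms
```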
The increasing importance of high-granularity calorimetry in particle physics stems from its ability to enhance event reconstruction and jet substructure analysis. In particular, the identification of hadronic decays within boosted jets and the application of particle flow techniques have demonstrated the advantages of fine spatial resolution in calorimeters. In this study, we investigate...
In this work we consider the problem of determining the identity of hadrons at high energies based on the topology of their energy depositions in dense matter, along with the time of the interactions. Using GEANT4 simulations of a homogeneous lead tungstate calorimeter with high transverse and longitudinal segmentation, we investigated the discrimination of protons, positive pions, and...
Objective:
Proton therapy is an emerging approach in cancer treatment. A key challenge is improving the accuracy of Bragg-peak position calculations, which requires more precise relative stopping power (RSP) measurements. Proton computed tomography (pCT) is a promising technique, as it enables imaging under conditions identical to treatment by using the same irradiation device and hadron...
In this work we simulate hadrons impinging on a homogeneous lead-tungstate (PbWO4) calorimeter to investigate how the resulting light yield and its temporal structure, as detected by an array of light-sensitive sensors, can be processed by a neuromorphic computing system. Our model encodes temporal photon distributions in the form of spike trains and employs a fully connected spiking neural...
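A small NumPy sketch of the encoding step only, under assumed parameters: the photon arrival-time distribution seen by one sensor is binned in time and converted into a binary spike train. The actual encoding and the spiking network of the study are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy photon arrival times at one sensor (ns): a prompt peak plus a slow tail.
arrival_times = np.concatenate([
    rng.normal(loc=5.0, scale=1.0, size=300),
    rng.exponential(scale=20.0, size=200) + 5.0,
])

def to_spike_train(times, t_max=100.0, dt=1.0, threshold=5):
    """Binary spike train: a spike in every time bin with at least `threshold` photons."""
    counts, _ = np.histogram(times, bins=int(t_max / dt), range=(0.0, t_max))
    return (counts >= threshold).astype(int)

spikes = to_spike_train(arrival_times)
print(spikes[:30])
```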
Setup design is a critical aspect of experiment development, particularly in high-energy physics, where decisions influence research trajectories for decades. Within the MODE Collaboration, we aim to generalize Machine Learning methodologies to construct a fully differentiable pipeline for optimizing the geometry of the Muon Collider Electromagnetic Calorimeter.
Our approach leverages...
The point spread function (PSF) of an imaging system is the system's response to a point source. To encode additional information in microscopy images, we employ PSF engineering – namely, a physical modification of the standard PSF of the microscope by additional optical elements that perform wavefront shaping. In this talk I will describe how this method enables unprecedented capabilities in...
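As a minimal reminder of the forward model, image formation in an incoherent system can be sketched as a convolution of the scene with the PSF; the elongated Gaussian below is only a stand-in for an engineered PSF.

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy scene: two point emitters on a 64x64 grid.
scene = np.zeros((64, 64))
scene[20, 20] = 1.0
scene[40, 44] = 0.7

# Toy "engineered" PSF: an elongated Gaussian standing in for a shaped wavefront
# (real engineered PSFs encode extra information, e.g. depth, in their shape).
yy, xx = np.mgrid[-15:16, -15:16]
psf = np.exp(-(xx**2 / (2 * 2.0**2) + yy**2 / (2 * 6.0**2)))
psf /= psf.sum()

# Image formation for an incoherent system: convolution of the scene with the PSF.
image = fftconvolve(scene, psf, mode="same")
print(image.shape, image.max())
```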
The design of calorimeters presents a complex challenge due to the large number of design parameters and the stochastic nature of physical processes involved. In high-dimensional optimization, gradient information is essential for efficient design. While first-principles simulations like GEANT4 are widely used, their stochastic nature makes them non-differentiable, posing challenges in...
The energy calibration of calorimeters at collider experiments, such as the ones at the CERN Large Hadron Collider, is crucial for achieving the experiment’s physics objectives. Standard calibration approaches have limitations which become more pronounced as detector granularity increases. In this paper we propose a novel calibration procedure to simultaneously calibrate individual detector...
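A toy sketch of simultaneous per-cell calibration under strong simplifying assumptions: with a linear detector model, the cell-level constants that make the calibrated energy sum match a known reference energy over many events follow from a single linear least-squares solve. The detector model and numbers are invented; the procedure proposed in the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

n_events, n_cells = 5000, 20
true_response = rng.uniform(0.8, 1.2, n_cells)   # unknown per-cell response

# Raw per-cell energies: random sharing of a 100 GeV reference energy per event,
# scaled by the per-cell response (made-up detector model for illustration).
fractions = rng.dirichlet(np.ones(n_cells), size=n_events)
raw = 100.0 * fractions * true_response
e_ref = np.full(n_events, 100.0)

# The per-event calibrated sum is linear in the calibration constants c:
#   sum_i c_i * raw[event, i]  ~  e_ref[event]
# so the constants follow from one linear least-squares solve.
calib, *_ = np.linalg.lstsq(raw, e_ref, rcond=None)

print(np.max(np.abs(calib * true_response - 1.0)))   # ~0: inverse response recovered
```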
This work highlights the experimental framework employed to implement and validate Deep Deterministic Policy Gradient (DDPG) for controlling a Fabry-Perot (FP) optical cavity, a key component in interferometric gravitational-wave detectors. An initial focus is placed on the real-world setup characterisation, where high finesse values and mirror velocities introduce significant...
The integration of artificial intelligence (AI) into scientific research is reshaping discovery across disciplines—from protein folding and materials design to theorem proving. These advances mark AI’s evolution from a computational tool to an active participant in scientific exploration.
Quantum physics represents a particularly promising frontier for AI-driven discovery. As we push deeper...
The Meadusa (Multiple Readout Ultra-High Segmentation) Detector Concept is an innovative approach to addressing the unique challenges and opportunities presented by future lepton colliders and beyond. The Meadusa concept prioritizes ultra-high segmentation and multi-modal data acquisition to achieve ultra-high spatial, timing and event-structure precision in particle detection. By combining a...
Medical imaging—including X-rays and MRI scans—is crucial for diagnostics and research. However, the development and training of AI diagnostic models are hindered by limited access to large, high-quality datasets due to privacy concerns, high costs, and data scarcity. Synthetic image generation via differentiable programming has emerged as an effective strategy to augment real datasets with...