Conveners
Track 3: Computations in Theoretical Physics: Techniques and Methods
- chair: Chiara Signorile; co-chair: Aishik Ghosh
- chair: Nina Elmer; co-chair: Chiara Signorile
- chair: Tianji Cai; co-chair: Theo Heimel
- chair: Chiara Signorile; co-chair: Ramon Winterhalder
- chair: Joshua Davis; co-chair: Tianji Cai
- chair: Chiara Signorile; co-chair: Enrico Bothmann
- chair: Timo Janßen; co-chair: Chiara Signorile
Accurate and efficient predictions of scattering amplitudes are essential for precision studies in high-energy physics, particularly for multi-jet processes at collider experiments. In this work, we introduce a novel neural network architecture designed to predict amplitudes for multi-jet events. The model leverages the Catani–Seymour factorization scheme and uses MadGraph to compute...
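For orientation, factorisation-aware amplitude networks of this kind are often built around an ansatz of the following form (this is a paraphrase of the general approach, not necessarily the exact model presented here):

    |M_{n+1}(p)|^2 \approx \sum_{\{ijk\}} C_{ijk}(p)\, D_{ijk}(p)

where the $D_{ijk}$ are the known Catani–Seymour dipole functions carrying the soft and collinear singular structure, and the $C_{ijk}(p)$ are smooth coefficient functions predicted by the network.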
Direct simulation of multi-parton QCD processes at full-color accuracy is computationally expensive, making it often impractical for large-scale LHC studies. A two-step approach has recently been proposed to address this: events are first generated using a fast leading-color approximation and reweighted to full-color accuracy. We build upon this strategy by introducing a machine-learning...
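A minimal sketch of the underlying reweighting step (the function names below are placeholders, not the interface of any actual generator): each leading-colour event is rescaled by the ratio of full-colour to leading-colour squared matrix elements; in the machine-learning variant, the expensive full-colour evaluation is replaced by a surrogate trained on a subset of events.

    import numpy as np

    def full_colour_reweight(events, me_lc, me_fc):
        """Rescale leading-colour event weights to full-colour accuracy.

        events : list of (momenta, weight) pairs from the LC generator
        me_lc, me_fc : callables returning |M|^2 at leading / full colour
                       (placeholders for the real matrix-element code)
        """
        reweighted = []
        for momenta, w_lc in events:
            r = me_fc(momenta) / me_lc(momenta)   # exact correction factor
            reweighted.append((momenta, w_lc * r))
        return reweighted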
One of the central tools in hadron spectroscopy is amplitude analysis (partial-wave analysis) to interpret the experimental data. Amplitude models are fitted to data with large statistics to extract information about resonances and branching fractions. In amplitude analysis, we require flexibility to implement models with different decay hypotheses, spin formalisms, and resonance...
I will present work on "Tropical sampling from Feynman measures":
We introduce an algorithm that samples a set of loop momenta distributed as a given Feynman integrand. The algorithm uses the tropical sampling method and can be applied to evaluate phase-space-type integrals efficiently. We provide an implementation, momtrop, and apply it to a series of relevant integrals from the...
For several decades, the FORM computer algebra system has been a crucial software package for the large-scale symbolic manipulations required by computations in theoretical high-energy physics. In this talk I will present version 5, which includes an updated built-in diagram generator, greatly improved polynomial arithmetic performance through an interface to FLINT, and enhanced capabilities...
We present the program package ${\tt ggxy}$, which in its first version can be used to calculate partonic and hadronic cross sections for Higgs boson pair production at NLO QCD. The 2-loop virtual amplitudes are implemented using analytical approximations in different kinematic regions, while all other parts of the calculation are exact. This implementation allows one to freely modify the masses of...
Significant computing resources are used for parton-level event generation for the Large Hadron Collider (LHC). The resource requirements of this part of the simulation toolchain are expected to grow further in the High-Luminosity (HL-LHC) era. At the same time, the rapid deployment of computing hardware different from the traditional CPU+RAM model in data centers around the world mandates a...
Physics programs at future colliders cover a wide range of diverse topics and set high demands for precise event reconstruction. Recent analyses have stressed the importance of accurate jet clustering in events with low boost and high jet multiplicity. This contribution presents how machine learning can be applied to jet clustering while taking desired properties such as infrared and collinear...
The process of neutrino model building using flavor symmetries requires a physicist to select a group, determine field content, assign representations, construct the Lagrangian, calculate the mass matrices, and perform statistical fits of the resulting free parameters. This process is constrained by the physicist's time and their intuition regarding mathematically complex groups,...
Fast and precise evaluation of scattering amplitudes, even in the case of precision calculations, is essential for event generation tools at the HL-LHC. We explore the scaling behavior of the achievable precision of neural networks in this regression problem for multiple architectures, including a Lorentz-symmetry-aware multilayer perceptron and the L-GATr architecture. L-GATr is...
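As a baseline illustration of the regression task (a sketch only; the architectures studied, including L-GATr, are considerably more elaborate, and the choice of five external particles and the network widths below are assumptions), one can regress the logarithm of the squared amplitude on Lorentz-invariant inputs:

    import torch, torch.nn as nn

    def invariants(p):
        """Pairwise invariants s_ij = (p_i + p_j)^2 as Lorentz-invariant inputs.
        p: (batch, n, 4) four-momenta with components (E, px, py, pz)."""
        metric = torch.tensor([1.0, -1.0, -1.0, -1.0])
        psum = p.unsqueeze(1) + p.unsqueeze(2)              # (batch, n, n, 4)
        s = (psum.pow(2) * metric).sum(-1)                  # (batch, n, n)
        iu = torch.triu_indices(p.shape[1], p.shape[1], offset=1)
        return s[:, iu[0], iu[1]]                           # (batch, n(n-1)/2)

    # plain MLP baseline for n = 5 external particles (10 invariants)
    model = nn.Sequential(nn.Linear(10, 128), nn.GELU(),
                          nn.Linear(128, 128), nn.GELU(),
                          nn.Linear(128, 1))

    def loss_fn(p, log_me2):                                # regress log|M|^2
        return nn.functional.mse_loss(model(invariants(p)).squeeze(-1), log_me2)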
Modern ML-based taggers have become the gold standard at the LHC, outperforming classical algorithms. Beyond pure efficiency, we also seek controllable and interpretable algorithms. We explore how we can move beyond black-box performance and toward physically meaningful understanding of modern taggers. Using explainable AI methods, we can connect tagger outputs with well-known physics...
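As one concrete example of such an explainability probe (a generic sketch; the attribution methods actually used in the talk may differ), input-gradient saliency ranks which constituent features drive the tagger score:

    import torch

    def saliency(tagger, features):
        """Gradient of the tagger score with respect to the input features.

        tagger   : network mapping (batch, n_features) -> tagging score
        features : input tensor, e.g. jet-constituent kinematics
        """
        features = features.clone().requires_grad_(True)
        score = tagger(features).sum()
        score.backward()
        return features.grad.abs()       # large values = influential inputs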
AI for fundamental physics is now a burgeoning field, with numerous efforts pushing the boundaries of experimental and theoretical physics, as well as machine learning research itself. In this talk, I will introduce a recent innovative application of Natural Language Processing to the state-of-the-art precision calculations in high energy particle physics. Specifically, we use Transformers to...
ATLAS explores modern neural networks for a multi-dimensional calibration of its calorimeter signal defined by clusters of topologically connected cells (topo-clusters). The Bayesian neural network (BNN) approach yields a continuous and smooth calibration function, including uncertainties on the calibrated energy per topo-cluster. In this talk the performance of this BNN-derived calibration is...
Neural networks for LHC physics must be accurate, reliable, and well-controlled. This requires them to provide both precise predictions and reliable quantification of uncertainties, including those arising from the network itself or from the training data. Bayesian networks or (repulsive) ensembles provide frameworks that enable learning systematic and statistical uncertainties. We investigate...
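A minimal sketch of how an ensemble separates the two components (this is the generic heteroscedastic-ensemble recipe, not necessarily the exact setup of the study): each member predicts a mean and a variance, the average of the member variances gives the statistical part, and the spread of the member means gives the systematic part.

    import torch

    def ensemble_predict(members, x):
        """members: list of networks, each mapping x -> (mean, log_variance)."""
        means, variances = [], []
        for net in members:
            mu, log_var = net(x).unbind(-1)
            means.append(mu)
            variances.append(log_var.exp())
        means = torch.stack(means)        # (n_members, batch)
        variances = torch.stack(variances)
        mean = means.mean(0)
        statistical = variances.mean(0)   # learned noise term
        systematic = means.var(0)         # spread between ensemble members
        return mean, statistical, systematic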
One of the main goals of theoretical nuclear physics is to provide a first-principles description of the atomic nucleus, starting from interactions between nucleons (protons and neutrons). Although exciting progress has been made in recent years thanks to the development of many-body methods and nucleon-nucleon interactions derived from chiral effective field theory, performing accurate...
Significant efforts are currently underway to improve the description of hadronization using Machine Learning. While modern generative architectures can undoubtedly emulate observations, it remains a key challenge to integrate these networks within principled fragmentation models in a consistent manner. This talk presents developments in the HOMER method for extracting Lund fragmentation...
We present a quantum generative model that extends Quantum Born Machines (QBMs) by incorporating a parametric Polynomial Chaos Expansion (PCE) to encode classical data distributions. Unlike standard QBMs, which rely on fixed heuristic data-loading strategies, our approach employs a trainable Hermite polynomial basis to amplitude-encode classical data into quantum states. These states are...
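One plausible, purely illustrative reading of such an encoding (the actual parameterisation of the model may differ): a density written as a truncated Hermite series with trainable coefficients is evaluated on a grid of 2^n points, and the square roots of the (clipped, normalised) values serve as the amplitudes of an n-qubit state.

    import numpy as np
    from numpy.polynomial.hermite_e import hermeval

    def pce_amplitudes(coeffs, n_qubits, x_range=(-4.0, 4.0)):
        """Illustrative amplitude encoding of a Hermite-PCE density.

        coeffs : trainable coefficients of a probabilists' Hermite series
        Returns a normalised state vector of length 2**n_qubits whose squared
        amplitudes follow the (clipped) PCE density on a uniform grid.
        """
        x = np.linspace(*x_range, 2**n_qubits)
        density = np.clip(hermeval(x, coeffs) * np.exp(-x**2 / 2), 0.0, None)
        amps = np.sqrt(density)
        return amps / np.linalg.norm(amps)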
We apply the Flow Matching method for the first time to the problem of phase-space sampling for event generation in high-energy collider physics. By training the model to remap the random numbers used to generate the momenta and helicities of the collision matrix elements, as implemented in the portable partonic event generator Pepper, we find substantial efficiency improvements in the studied...
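To make concrete what remapping the generator's random numbers does (a toy illustration, not the Pepper or flow-matching implementation itself): a map G on the unit hypercube turns the flat estimate of the integral of f into an average of f(G(u)) |det dG/du|, and a good map makes this per-event weight nearly constant, which directly improves unweighting efficiency.

    import numpy as np

    rng = np.random.default_rng(1)
    f = lambda x: 3.0 * x**2           # toy "integrand" on [0, 1], integral = 1

    # Flat sampling: the weights fluctuate strongly
    u = rng.random(100_000)
    w_flat = f(u)

    # Remapped sampling with G(u) = u**(1/3), the inverse CDF of f:
    # weight = f(G(u)) * |dG/du| is exactly constant in this toy case
    G = lambda u: u ** (1.0 / 3.0)
    dG = lambda u: (1.0 / 3.0) * u ** (-2.0 / 3.0)
    w_map = f(G(u)) * dG(u)

    print(w_flat.std() / w_flat.mean())   # ~0.9: poor unweighting efficiency
    print(w_map.std() / w_map.mean())     # ~0: (near-)unit weights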
MadEvent7 is a new modular phase-space generation library written in C++ and CUDA, running on both GPUs and CPUs. It features a variety of different phase-space mappings, including the classic MadGraph multi-channel phase space and an optimized implementation of normalizing flows for neural importance sampling, as well as their corresponding inverse mappings. The full functionality is...
Modern approaches to phase-space integration combine well-established Monte Carlo methods with machine learning techniques for importance sampling. Recent progress in generative models in the form of continuous normalizing flows, trained using conditional flow matching, offers the potential to improve the phase-space sampling efficiency significantly.
We present a multi-jet inclusive...
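For orientation, the conditional flow-matching objective used to train such continuous flows can be sketched as follows (the generic recipe with a linear interpolation path; the conditioning, phase-space parameterisation, and network sizes here are placeholders):

    import torch, torch.nn as nn

    dim = 8                                    # phase-space coordinates per event
    v_net = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(),
                          nn.Linear(128, 128), nn.SiLU(),
                          nn.Linear(128, dim))  # learned velocity field v(x, t)

    def cfm_loss(x1):
        """Conditional flow matching with a linear path x_t = (1-t) x0 + t x1."""
        x0 = torch.randn_like(x1)              # base (noise) samples
        t = torch.rand(x1.shape[0], 1)         # uniform times in [0, 1]
        xt = (1 - t) * x0 + t * x1
        target_v = x1 - x0                     # velocity of the linear path
        pred_v = v_net(torch.cat([xt, t], dim=-1))
        return (pred_v - target_v).pow(2).mean()

    # training step sketch: x1 = batch of phase-space points from the target
    # optimiser.zero_grad(); cfm_loss(x1).backward(); optimiser.step()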
One primary goal of the LHC is the search for physics beyond the Standard Model, which has led to the development of many different methods to look for new-physics effects. In this context, we employ Machine Learning methods, in particular Simulation-Based Inference (SBI), to learn otherwise intractable likelihoods and fully exploit the available information, compared...
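One standard SBI building block (a generic sketch, not necessarily the estimator used in this work; the five input features and the two hypothesis labels are placeholders) trains a classifier to separate events simulated under two hypotheses and reads the likelihood ratio off its output:

    import torch, torch.nn as nn

    clf = nn.Sequential(nn.Linear(5, 64), nn.ReLU(),
                        nn.Linear(64, 64), nn.ReLU(),
                        nn.Linear(64, 1))       # logit of s(x)

    def train_step(x_bsm, x_sm, opt):
        """x_bsm, x_sm: event features simulated under the two hypotheses."""
        x = torch.cat([x_bsm, x_sm])
        y = torch.cat([torch.ones(len(x_bsm)), torch.zeros(len(x_sm))])
        loss = nn.functional.binary_cross_entropy_with_logits(
            clf(x).squeeze(-1), y)
        opt.zero_grad(); loss.backward(); opt.step()
        return loss

    def log_likelihood_ratio(x):
        # With balanced training, the optimal classifier satisfies
        # s(x) = p_bsm(x) / (p_bsm(x) + p_sm(x)), so logit(s) = log r(x).
        return clf(x).squeeze(-1)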
We discuss recent developments in performance improvements for Monte Carlo integration and event sampling. (1) Massive parallelization of matrix element evaluations based on a new back end for the matrix element generator O'Mega targeting GPUs. This has already been integrated in a development version of the Monte Carlo event generator Whizard for realistic testing and profiling. (2) A...
To characterize the structures and properties of samples in the analysis of experimental Small-Angle Neutron Scattering (SANS) data, a physical model corresponding to each sample must be selected for iterative fitting. However, the conventional method of model selection relies primarily on manual experience, which sets a high barrier to analysis and suffers from low accuracy. Furthermore, the...
Foundation models are a very successful approach to linguistic tasks. Naturally, there is the desire to develop foundation models for physics data. Currently, existing networks are much smaller than publicly available Large Language Models (LLMs), the latter typically having billions of parameters. By applying pretrained LLMs in an unconventional way, we introduce large networks for...
The increasing complexity of modern neural network architectures demands fast and memory-efficient implementations to mitigate computational bottlenecks. In this work, we evaluate the recently proposed BitNet architecture in HEP applications, assessing its performance in classification, regression, and generative modeling tasks. Specifically, we investigate its suitability for quark-gluon...
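A minimal sketch of the weight quantisation at the core of BitNet-style layers (following the published b1.58 recipe in spirit; activation quantisation and normalisation details are omitted, and this is not the exact layer used in the study): weights are scaled by their mean absolute value, rounded to {-1, 0, +1}, and trained with a straight-through estimator.

    import torch, torch.nn as nn

    class BitLinear(nn.Linear):
        """Linear layer with ternary (1.58-bit) weights and straight-through gradients."""
        def forward(self, x):
            w = self.weight
            scale = w.abs().mean().clamp(min=1e-5)
            w_q = (w / scale).round().clamp(-1, 1) * scale   # ternary weights
            w_ste = w + (w_q - w).detach()                   # straight-through estimator
            return nn.functional.linear(x, w_ste, self.bias)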
We present FeynGraph, a high-performance Feynman diagram generator designed to integrate seamlessly with modern computational workflows for calculating scattering amplitudes. FeynGraph is implemented as a Rust library with easy-to-use Python bindings, allowing it to be readily used in other tools. With additional features like arbitrary custom diagram selection filters and...
Non-perturbative QED is used in calculations of Schwinger pair creation, in precision QED tests with ultra-intense lasers, and to predict beam backgrounds at the interaction point of colliders. In order to predict these phenomena, custom-built Monte Carlo event generators based on a suitable non-perturbative theory have to be developed. One such suitable theory uses the Furry Interaction...
The determination of the hot QCD pressure has a long history and, due to its phenomenological relevance in cosmology, astrophysics, and heavy-ion collisions, has spawned a number of important theoretical advances in perturbative thermal field theory applicable to equilibrium thermodynamics.
We present major progress towards the determination of the last missing piece for the pressure of...
Random matrix theory has a long history of applications in the study of eigenvalue distributions arising in diverse real-world ensembles of matrix data. Matrix models also play a central role in theoretical particle physics, providing tractable mathematical models of gauge-string duality, and allowing the computation of correlators of invariant observables in physically interesting sectors of...
Gravitational Wave (GW) Physics has entered a new era of Multi-Messenger Astronomy (MMA), characterized by an increasing number of detections from GW observatories operated by the LIGO, Virgo, and KAGRA collaborations. This presentation will introduce the KAGRA experiment, outlining the current workflow from data collection to physics interpretation, and demonstrate the transformative role of machine learning...