25–29 May 2026
Chulalongkorn University
Asia/Bangkok timezone

Uncovering Hidden Systematics in Neural Network Models for HEP

Poster Presentation · Track 9: Analysis software and workflows

Speaker

Matthias Schott (CERN / University of Mainz)

Description

Neural networks (NNs) are inherently multidimensional classifiers that learn complex, non-linear relationships among input observables. While their flexibility enables unprecedented performance in high-energy physics (HEP) analyses, it also makes them sensitive to small variations in their inputs. Consequently, the propagation and estimation of systematic uncertainties in NN-based models remain an open challenge.
It is well known that uncertainties derived in control regions, or from nominal variations of input features, often underestimate the true model uncertainty, leaving potential biases unaccounted for. Inspired by insights from adversarial-attack studies in machine learning, we explore how subtle perturbations, fully consistent with the experimental uncertainties on the input observables, can lead to substantial changes in NN outputs while keeping the one-dimensional and correlated input distributions nearly unchanged.

Using a set of representative HEP tasks, including event classification and object identification, and testing across a variety of network architectures, we demonstrate that networks can be systematically “fooled” at significant rates within the allowed uncertainty envelopes. Building on this observation, we introduce a quantitative framework to probe and measure the hidden sensitivity of neural networks to realistic experimental variations, providing a practical path to evaluate and control their systematic uncertainty in physics analyses.

Author

Matthias Schott (CERN / University of Mainz)
