4–8 Nov 2024
LPNHE, Paris, France
Europe/Paris timezone

Bridging the Generative Unfolding Gap

7 Nov 2024, 14:10
20m
Amphi Charpak

Speaker

Sascha Diefenbacher (Lawrence Berkeley National Lab. (US))

Description

Machine learning-based unfolding has started to establish itself as the go-to approach for precise, high-dimensional unfolding tasks. The current state-of-the-art unfolding methods can be divided into reweighting-based and generation-based methods. The latter comprises conditional generative models, which generate new truth-level events from random noise conditioned on detector-level inputs, and bridge-based models, which map events directly from detector level to truth level.

Bridge-based models have always had the advantage of starting from a physically motivated distribution, rather than from random noise, placing their starting points inherently closer to the desired result. However, the mappings learned by these bridges were often closer to an optimal-transport map between the detector-level and truth-level distributions than to the stochastic mapping prescribed by the detector.
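This distinction can be illustrated with a toy Gaussian smearing model (not from the talk; the forward model, smearing width, and all numbers below are illustrative assumptions). A deterministic, optimal-transport-style map reproduces the truth-level marginal perfectly, yet per event it collapses to a point mass, whereas the mapping prescribed by the detector is the stochastic posterior p(truth | detector), which retains a nonzero spread:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5  # assumed detector smearing width (toy value)

# Toy forward model: truth x ~ N(0, 1), detector-level y = x + Gaussian noise
x = rng.normal(0.0, 1.0, 100_000)
y = x + rng.normal(0.0, sigma, x.shape)

# OT-style deterministic map: matches the marginals N(0, 1+sigma^2) -> N(0, 1)
x_ot = y / np.sqrt(1.0 + sigma**2)

# Mapping prescribed by the detector: sample the exact Gaussian posterior
# p(x | y) = N(y / (1+sigma^2), sigma^2 / (1+sigma^2))
post_mean = y / (1.0 + sigma**2)
post_std = sigma / np.sqrt(1.0 + sigma**2)
x_post = post_mean + post_std * rng.normal(0.0, 1.0, y.shape)

# Both reproduce the truth-level marginal (std close to 1.0)...
print(x_ot.std(), x_post.std())

# ...but conditioned on a narrow detector-level slice, the OT map has
# essentially zero spread, while the posterior keeps its physical width
sel = np.abs(y - 1.0) < 0.05
print(x_ot[sel].std(), x_post[sel].std())
```

Both unfoldings pass a marginal closure test, but only the posterior sampler is event-wise faithful to the detector response, which is the shortcoming in bridge models that the talk addresses.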

We show recent developments in addressing this shortcoming and present a set of improved bridge models that learn the exact detector mapping, just as conditional generative models do, without sacrificing the inherent advantages of starting from a physically motivated distribution. We demonstrate the efficacy of these new bridges on a synthetic example set and on a Z+jets dataset.

Track Unfolding

Authors

Anja Butter (Centre National de la Recherche Scientifique (FR))
Ben Nachman (Lawrence Berkeley National Lab. (US))
Nathan Huetsch (Heidelberg University, ITP Heidelberg)
Sascha Diefenbacher (Lawrence Berkeley National Lab. (US))
Sofia Palacios Schweitzer (ITP, University Heidelberg)
Tilman Plehn
Vinicius Massami Mikuni (Lawrence Berkeley National Lab. (US))

Presentation materials