The unfolding of detector effects impacting experimental measurements is crucial for the comparison of data to theory predictions. While traditional methods were limited to low-dimensional data, machine learning has enabled new techniques to unfold high-dimensional data. Generative networks such as conditional Invertible Neural Networks (cINNs) enable a probabilistic unfolding that maps individual events to their corresponding unfolded probability distributions. The precision of this method is, however, limited by the similarity between the simulated training data and the measurement we want to unfold. We therefore introduce an improved version of the cINN unfolding by combining it with an iterative reweighting that adjusts for deviations between simulation and data. We validate the performance on toy data and an EFT-dependent example.
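
To illustrate the structure of such an iterative reweighting, the minimal sketch below runs the loop on a toy one-dimensional observable with Gaussian smearing. It is not the method presented in the talk: the cINN is replaced by a simple weighted Gaussian fit as a stand-in for the generative unfolding step, the classifier-based reweighting uses a generic boosted-decision-tree classifier, and all function and variable names are illustrative assumptions.

    # Toy sketch: iterative classifier reweighting combined with a generative
    # unfolding step. The cINN is replaced by a weighted Gaussian fit so the
    # loop runs end to end; names and numbers are illustrative only.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)

    def detector(gen):
        """Toy detector response: Gaussian smearing of the gen-level observable."""
        return gen + rng.normal(0.0, 0.5, size=gen.shape)

    # "Simulation" used for training and "data" with a slightly shifted truth
    gen_sim = rng.normal(0.0, 1.0, size=(20_000, 1))
    reco_sim = detector(gen_sim)
    gen_data = rng.normal(0.3, 1.1, size=(20_000, 1))  # unknown in a real analysis
    reco_data = detector(gen_data)

    weights = np.ones(len(gen_sim))  # per-event simulation weights, start uniform

    for iteration in range(3):
        # Stand-in for (re)training the cINN on the reweighted simulation:
        # fit a weighted Gaussian to the gen-level sample as a crude unfolded model.
        mu = np.average(gen_sim[:, 0], weights=weights)
        sigma = np.sqrt(np.average((gen_sim[:, 0] - mu) ** 2, weights=weights))
        print(f"iteration {iteration}: unfolded mean = {mu:.3f}, std = {sigma:.3f}")

        # Classifier reweighting at detector level: train a classifier to separate
        # the (currently reweighted) simulation from data and turn its output into
        # a likelihood-ratio correction w(x) ~ p_data(x) / p_sim(x).
        X = np.vstack([reco_sim, reco_data])
        y = np.concatenate([np.zeros(len(reco_sim)), np.ones(len(reco_data))])
        clf = GradientBoostingClassifier(max_depth=3, n_estimators=100)
        clf.fit(X, y, sample_weight=np.concatenate([weights, np.ones(len(reco_data))]))

        p = clf.predict_proba(reco_sim)[:, 1]
        weights *= p / (1.0 - p)                 # multiplicative weight update
        weights *= len(weights) / weights.sum()  # keep the average weight at one

In this sketch the printed mean and width of the weighted simulation drift from the original simulation towards the data truth over the iterations, which is the role the reweighting plays for the cINN: the network is retrained on simulation whose gen-level distribution has been pulled towards the measurement.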