Monte Carlo simulations are a vital part of modern particle physics. However, classical approaches to these simulations require vast computational resources. Generative machine-learning models offer a way to reduce this strain by generating simulated data at significantly greater speed. The applicability of such generative models has been demonstrated for many problems in particle physics, ranging from event generation to fast calorimeter simulation and beyond.
However, one question that needs to be addressed before we can fully utilise generative models is whether a generative model can achieve a more precise description of the underlying distribution than the data it was trained on. We explore this question with a simple toy example and show that a generative model can indeed be used to amplify a data set.
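The amplification idea can be illustrated with a minimal sketch. All specifics below are illustrative assumptions, not the setup used in this work: a kernel density estimate stands in for the generative model, the true distribution is a 1D standard Gaussian, and the sample sizes are arbitrary. The point is only the workflow of fitting a model to a small training set and then drawing a much larger generated sample from it.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Assumed toy setup: 100 training points from a standard Gaussian.
n_train = 100
train = rng.normal(size=n_train)

# A KDE plays the role of the generative model (an assumption for
# illustration; any trainable density model could take its place).
model = gaussian_kde(train)

# "Amplify" the data set: draw 10x more samples from the fitted model.
generated = model.resample(10 * n_train, seed=1).ravel()

# Sanity check: the generated sample should roughly track the truth.
print(f"train:     mean={train.mean():+.3f}, std={train.std():.3f}")
print(f"generated: mean={generated.mean():+.3f}, std={generated.std():.3f}")
```

Whether such a generated sample carries more information about the underlying distribution than the training points alone is exactly the question the toy study addresses; the sketch only sets up the comparison, it does not answer it.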