Generative machine learning models have been successfully applied to many problems in particle physics, ranging from event generation to fast calorimeter simulation and beyond. This suggests that generative models have the potential to become a mainstay of many simulation chains. One question that still remains, however, is whether a generative model can achieve higher statistical precision than the data it was trained on, that is, whether one can meaningfully draw more samples from a generative model than it was trained with. We explore this question using three examples and demonstrate that generative models indeed have the capability to amplify data sets.
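The amplification question can be illustrated with a toy sketch (this is not the setup of the paper): fit a simple density estimate, standing in for a trained generative model, on a small data set, then draw many more samples from it than were used for training. The choice of a Gaussian KDE, the truth distribution, and all sample sizes below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Small "training" data set drawn from a known truth distribution.
truth_mean, truth_std = 0.0, 1.0
train = rng.normal(truth_mean, truth_std, size=100)

# A kernel density estimate stands in for a trained generative model.
model = gaussian_kde(train)

# "Amplify": draw far more samples from the model than it was trained on.
generated = model.resample(100_000, seed=1)[0]

# Compare how well each sample estimates a property of the truth.
err_train = abs(np.std(train, ddof=1) - truth_std)
err_gen = abs(np.std(generated, ddof=1) - truth_std)
print(f"train error: {err_train:.3f}, generated error: {err_gen:.3f}")
```

Whether the generated sample actually beats the training sample depends on how faithfully the model captures the truth distribution; quantifying exactly when and by how much this happens is the subject of the paper.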
Based on arxiv.org/abs/2008.06545