Description
In recent years, deep generative models (DGMs) have become essential for various steps in the LHC simulation and analysis chain. While there are many types of DGMs, no Swiss-army-knife architecture exists that can effectively handle speed, precision, and control simultaneously. In this talk, I will explore different DGMs, outline their strengths and weaknesses, and illustrate typical applications in high-energy physics. Moreover, I will introduce several methods to quantify and enhance model quality, including Bayesian neural networks for uncertainty estimation, the classifier test to identify failure modes, and reweighting using the DCTR approach. Lastly, to spark further discussion, I will turn to amplification methods, which address the question, "How many more events can a DGM generate?"
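
For readers who want a concrete picture of the classifier test and DCTR-style reweighting mentioned above, the following is a minimal toy sketch, not part of the talk material: the one-dimensional Gaussian "truth" and "generated" samples, the variable names, and the choice of a scikit-learn classifier are all illustrative assumptions.

```python
# Hedged toy sketch: classifier test and DCTR-style reweighting on 1-D data.
# The distributions and classifier choice are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for real events ("truth") and slightly mis-modelled DGM output.
truth = rng.normal(loc=0.0, scale=1.0, size=(50_000, 1))
generated = rng.normal(loc=0.1, scale=1.1, size=(50_000, 1))

X = np.vstack([truth, generated])
y = np.concatenate([np.ones(len(truth)), np.zeros(len(generated))])
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Classifier test: if the generated sample matched the truth sample perfectly,
# the best achievable AUC would be 0.5; deviations expose failure modes.
clf = GradientBoostingClassifier(random_state=0)
clf.fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"classifier-test AUC: {auc:.3f}")

# DCTR-style reweighting: a well-calibrated classifier output D(x) yields
# per-event weights w(x) = D(x) / (1 - D(x)) ~ p_truth(x) / p_generated(x),
# which pull the generated sample towards the truth distribution.
d = clf.predict_proba(generated)[:, 1]
weights = d / np.clip(1.0 - d, 1e-6, None)
print(f"mean weight on generated sample: {weights.mean():.3f}")
```

In this toy setup the same trained classifier serves both purposes: its AUC quantifies how distinguishable the generated events are from the truth, and its calibrated output provides the likelihood-ratio weights used for reweighting.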