Description
Fast event and detector simulation in high-energy physics using generative models provides a viable way to generate sufficient statistics within a constrained computational budget, particularly in preparation for the High-Luminosity LHC. However, many of these applications suffer from a quality/speed tradeoff. Diffusion models offer some of the best sampling quality, but generation is slow because it requires many sampling steps. In our study, we replaced the traditional neural-network backbone with a GBDT-based backbone tailored to unstructured tabular data. This speeds up training and inference for most high-level simulation tasks by orders of magnitude. The approach also extends to low-level feature simulation and conditional generation with competitive performance. In addition, we conducted a comprehensive scan of the most widely used samplers for standard score-matching diffusion, achieving an O(10) speedup with training-free methods. We introduce new signal-to-noise-ratio weighting and step-aware scheduler fine-tuning methods that enable most ODE samplers to perform well with around 10 evaluation steps.
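To make the few-step ODE sampling concrete, the sketch below shows a generic deterministic DDIM-style sampler run on a coarse grid of roughly 10 evaluation steps, the kind of training-free speedup the abstract refers to. It is only an illustration under assumed choices: the linear variance schedule, the function names, and the placeholder noise predictor (standing in for the talk's GBDT-based backbone) are not taken from the contribution, and the specific SNR weighting and step-aware scheduler fine-tuning introduced in the talk are not reproduced here.

```python
# Minimal sketch of few-step deterministic (DDIM-style) sampling with a
# pre-trained diffusion model. All names and schedule choices are assumptions.
import numpy as np

T = 1000                              # training timesteps of the base diffusion model
betas = np.linspace(1e-4, 2e-2, T)    # assumed linear variance schedule
alpha_bar = np.cumprod(1.0 - betas)   # cumulative signal-retention factor

def eps_model(x_t, t):
    """Placeholder noise predictor; in the talk this role is played by a GBDT surrogate."""
    return np.zeros_like(x_t)

def ddim_sample(shape, n_steps=10, seed=0):
    """Deterministic DDIM update (eta = 0) on a coarse grid of n_steps timesteps."""
    rng = np.random.default_rng(seed)
    steps = np.linspace(T - 1, 0, n_steps).astype(int)   # descending time grid
    x = rng.standard_normal(shape)                       # start from pure noise
    for i, t in enumerate(steps):
        eps = eps_model(x, t)
        # Predict the clean sample implied by the current noise estimate.
        x0 = (x - np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])
        if i + 1 < len(steps):
            t_prev = steps[i + 1]
            # Deterministic jump to the previous timestep on the coarse grid.
            x = np.sqrt(alpha_bar[t_prev]) * x0 + np.sqrt(1.0 - alpha_bar[t_prev]) * eps
        else:
            x = x0
    return x

samples = ddim_sample((4, 8), n_steps=10)   # e.g. 4 events with 8 high-level features
```

The point of the sketch is that the sampling grid is decoupled from the 1000 training timesteps, so a trained score model can be reused with an order of magnitude fewer function evaluations; how well this works at ~10 steps depends on the weighting and scheduler choices discussed in the talk.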
Track
Detector simulation & event generation