Description
HEP experiments rely heavily on producing and storing large datasets of simulated events. At the LHC, simulation workflows consume about half of the available computing resources of a typical experiment. With the foreseen High Luminosity LHC upgrade, data volume and complexity will grow faster than the expected improvements in computing infrastructure, so speeding up the simulation workflow is of crucial importance. Deep generative models could make simulation-on-demand a possibility, reducing computing and storage needs. In this context, we study the use of Deep Variational Autoencoders (VAE) for fast simulation of jets at the LHC. Different Variational Autoencoder paradigms are investigated, and application-specific choices of data representation and loss function are employed to better fit the nature of jet physics data. Each jet is represented as a list of particles characterized by their momenta, and a customized permutation-invariant nearest-neighbor distance is tested as the reconstruction loss function to improve accuracy.
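The abstract does not spell out the exact form of the loss; a common choice of permutation-invariant nearest-neighbor reconstruction term for point-cloud-like data is the Chamfer distance, sketched below with NumPy. The function name and the per-particle feature layout (e.g. rows of pt, eta, phi) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def chamfer_distance(jet_a, jet_b):
    """Permutation-invariant nearest-neighbor distance between two jets.

    Each jet is an (n_particles, n_features) array. For every particle
    in one jet, take the squared Euclidean distance to its nearest
    neighbor in the other jet; summing both directed terms makes the
    result symmetric and independent of particle ordering.
    """
    # Pairwise squared distances: d2[i, j] = ||a_i - b_j||^2
    diff = jet_a[:, None, :] - jet_b[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    return d2.min(axis=1).sum() + d2.min(axis=0).sum()

# Reordering the particles of a jet leaves the distance unchanged:
jet = np.array([[1.0, 0.1, 0.2], [0.5, -0.3, 1.0], [0.2, 0.0, -1.5]])
shuffled = jet[[2, 0, 1]]
print(chamfer_distance(jet, shuffled))  # → 0.0
```

Because the matching is done by nearest neighbor rather than by index, such a loss does not penalize the network for emitting particles in a different order than the input list, which suits the unordered nature of jet constituents.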
Speaker time zone: Compatible with Europe