Speaker
Description
Data compression plays a major role in the field of Machine Learning, and recent works based on generative models such as Generative Adversarial Networks (GANs) have shown that deep-learning-based compression can outperform state-of-the-art classical compression methodologies. Such techniques can be adapted and applied to various areas of high energy physics, in particular to the study of Parton Distribution Functions (PDFs), in which large samples of Monte Carlo replicas are required to obtain accurate results. In this talk, we present a compression algorithm for parton densities in which the statistics of a given input PDF set are enhanced by generating synthetic replicas with a GAN prior to compression. This results in a compression methodology that provides a compressed set with a smaller number of replicas and a more accurate representation of the original probability distribution.
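To make the two-stage idea concrete (augment the prior with synthetic replicas, then select a small subset that reproduces its statistics), the following is a minimal, self-contained Python sketch rather than the actual implementation presented in the talk. All names (`toy_prior`, `augment_with_synthetic_replicas`, `erf_distance`, `compress`) are hypothetical; the GAN is replaced by a Gaussian resampling placeholder, the error function uses only low-order estimators, and the subset search is a simple random search instead of the optimization strategy used in practice.

```python
# Minimal sketch of GAN-enhanced PDF-set compression (illustrative only).
import numpy as np

rng = np.random.default_rng(0)


def toy_prior(n_replicas: int = 1000, n_points: int = 50) -> np.ndarray:
    """Toy stand-in for a Monte Carlo PDF set: n_replicas x n_points grid values."""
    central = np.sin(np.linspace(0.0, np.pi, n_points))
    return central + 0.1 * rng.standard_normal((n_replicas, n_points))


def augment_with_synthetic_replicas(prior: np.ndarray, n_synthetic: int) -> np.ndarray:
    """Placeholder for the GAN step: synthetic replicas are drawn here from a
    Gaussian fit to the prior, purely to keep the sketch self-contained."""
    mean, cov = prior.mean(axis=0), np.cov(prior, rowvar=False)
    synthetic = rng.multivariate_normal(mean, cov, size=n_synthetic)
    return np.vstack([prior, synthetic])


def erf_distance(subset: np.ndarray, prior: np.ndarray) -> float:
    """Simplified error function: distance between low-order statistical
    estimators of a candidate compressed set and of the prior."""
    d_mean = np.mean((subset.mean(axis=0) - prior.mean(axis=0)) ** 2)
    d_std = np.mean((subset.std(axis=0) - prior.std(axis=0)) ** 2)
    return d_mean + d_std


def compress(enhanced: np.ndarray, prior: np.ndarray, n_compressed: int,
             n_trials: int = 2000) -> np.ndarray:
    """Random-search stand-in for the subset selection: keep the candidate
    subset of replicas that best reproduces the prior statistics."""
    best_idx, best_erf = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(len(enhanced), size=n_compressed, replace=False)
        erf = erf_distance(enhanced[idx], prior)
        if erf < best_erf:
            best_idx, best_erf = idx, erf
    return enhanced[best_idx]


if __name__ == "__main__":
    prior = toy_prior()
    enhanced = augment_with_synthetic_replicas(prior, n_synthetic=1000)
    compressed = compress(enhanced, prior, n_compressed=50)
    print("ERF of compressed set:", erf_distance(compressed, prior))
```

In this toy setup the compressed set of 50 replicas is chosen from the enhanced pool (original plus synthetic replicas), which is what allows the final set to represent the prior distribution better than a subset of the original replicas alone.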
| Affiliation   | University of Milan |
| ------------- | ------------------- |
| Academic Rank | PhD Student         |