Full statistical models encapsulate the complete information of an experimental result, including the likelihood function given the observed data. Their proper publication is of vital importance for the long-lasting legacy of HEP experiments. However, statistical models are often complex, high-dimensional functions that are not straightforward to parametrize: in the context of LHC results, the full likelihoods depend on parameters of interest and nuisance parameters whose number can easily reach the hundreds. We therefore proposed to describe them with Normalizing Flows (NFs), a modern class of generative networks that explicitly learn the underlying probability density. As a proof of concept, we focused on two likelihoods from global fits to SM observables and on the likelihood of an NP-like search, obtaining accurate descriptions of all of them. Moreover, to verify that NFs can be used systematically for likelihood learning, we performed a general study in which we tested several types of flows against distributions of increasing complexity and dimensionality. The study showed that, in particular, the so-called neural spline flows can efficiently describe even the most complex probability density functions we implemented. Finally, we hope that our proposal will prove useful not only for publishing likelihoods from LHC analyses, but also for those arising from phenomenological studies or from other types of experiments.
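To illustrate the kind of workflow summarized above, the following is a minimal sketch, not the authors' actual code: it assumes the open-source `nflows` library (with PyTorch) and fits a neural spline flow to samples from a toy two-dimensional density standing in for draws from a full statistical model. Once trained, the flow provides an explicit, differentiable log-density that can be published and re-evaluated.

```python
# Minimal sketch, assuming the `nflows` library and PyTorch; the toy
# correlated Gaussian below is a hypothetical stand-in for samples
# drawn from a real experimental likelihood.
import torch
from torch import optim
from nflows import distributions, flows, transforms

dim, num_layers = 2, 4  # toy setting; real likelihoods may involve hundreds of parameters

# Neural spline flow: autoregressive rational-quadratic spline transforms
# interleaved with permutations, on a standard-normal base distribution.
layers = []
for _ in range(num_layers):
    layers.append(transforms.ReversePermutation(features=dim))
    layers.append(
        transforms.MaskedPiecewiseRationalQuadraticAutoregressiveTransform(
            features=dim, hidden_features=64, num_bins=8,
            tails="linear", tail_bound=5.0,
        )
    )
flow = flows.Flow(transforms.CompositeTransform(layers),
                  distributions.StandardNormal([dim]))

# Toy "likelihood" samples: a correlated Gaussian.
target = torch.distributions.MultivariateNormal(
    torch.zeros(dim), torch.tensor([[1.0, 0.8], [0.8, 1.0]])
)
data = target.sample((10000,))

# Train by maximizing the log-likelihood of the samples under the flow.
optimizer = optim.Adam(flow.parameters(), lr=1e-3)
for step in range(2000):
    optimizer.zero_grad()
    batch = data[torch.randint(len(data), (256,))]
    loss = -flow.log_prob(inputs=batch).mean()
    loss.backward()
    optimizer.step()

# The trained flow is an explicit density estimate: log p(x) on demand.
print(flow.log_prob(data[:5]))
```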