22–26 Aug 2022
Rio de Janeiro
America/Sao_Paulo timezone

CONNECT - A neural network based framework for emulating cosmological observables and cosmological parameter inference

Not scheduled
20m
Vice-Governador Rúbens Berardo street, 100 - Gávea Rio de Janeiro - 22451-070

Speaker

Andreas Nygaard (Aarhus University)

Description

As the numerical complexity of cosmological models has increased in recent years, so too have the resource demands of computing solutions to the Einstein-Boltzmann equations with codes such as \textsc{class} and \textsc{camb}. One answer to this demand is, of course, more computational power through ever better and faster hardware, but another, more sustainable approach is to emulate the Einstein-Boltzmann solver codes with a neural network. Doing so drastically decreases the time per model evaluation, and a whole new world of parameter inference beyond Markov chain Monte Carlo opens up.
In this talk I will present the new code \textsc{connect}, introduced in Nygaard et al. (arXiv: 2205.15726), which is a framework for sampling training data and training a neural network of custom architecture to emulate the outputs of \textsc{class}. We found that the naïve approach of using a Latin hypercube as training data leads to erroneous results in certain cases of complex likelihood shapes, and it often requires a huge number of data points, i.e. individual \textsc{class} computations, of order $10^5$ to $10^6$. We therefore propose an alternative, iterative method for sampling training data: starting from a rough Latin hypercube, we use the network trained on it to perform a high-temperature MCMC sampling, and the resulting points in parameter space are added to the training data for the next iteration. This process builds a representative training set and halts when the data reach convergence. We can thus reduce the number of \textsc{class} computations by one to two orders of magnitude, and since the network no longer has to accommodate regions of vanishing likelihood, it is trained to be accurate only in the region of interest.
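The iterative sampling loop described above can be sketched in a few lines. This is an illustrative toy, not the actual \textsc{connect} implementation: the Latin hypercube, Metropolis sampler, and the stand-in `log_like` (a 2D Gaussian replacing a full \textsc{class} + likelihood evaluation) are all simplified assumptions, and the "emulator training" step is left as a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, bounds):
    """Rough Latin hypercube over the parameter box (iteration 0)."""
    samples = np.empty((n, len(bounds)))
    for j, (lo, hi) in enumerate(bounds):
        edges = np.linspace(0.0, 1.0, n + 1)
        u = rng.uniform(edges[:-1], edges[1:])  # one point per stratum
        rng.shuffle(u)                          # decorrelate the dimensions
        samples[:, j] = lo + u * (hi - lo)
    return samples

def log_like(theta):
    """Toy stand-in for a full Einstein-Boltzmann + likelihood evaluation."""
    return -0.5 * np.sum(theta**2)

def mcmc(log_f, start, n_steps, temperature, step=0.5):
    """Metropolis sampler at temperature T: dividing log L by T flattens
    the posterior, so the chain also explores the tails."""
    chain, cur = [start], start
    cur_lf = log_f(cur) / temperature
    for _ in range(n_steps):
        prop = cur + step * rng.normal(size=cur.shape)
        lf = log_f(prop) / temperature
        if np.log(rng.uniform()) < lf - cur_lf:
            cur, cur_lf = prop, lf
        chain.append(cur)
    return np.array(chain)

bounds = [(-5.0, 5.0), (-5.0, 5.0)]
train = latin_hypercube(200, bounds)  # iteration 0: rough Latin hypercube

for it in range(3):
    # (placeholder) train the emulator on `train`; here we simply reuse
    # the true likelihood, so the loop stays runnable
    emulator = log_like
    # high-temperature MCMC on the emulator yields points concentrated
    # in (a broadened version of) the region of interest
    chain = mcmc(emulator, np.zeros(2), n_steps=2000, temperature=5.0)
    # thin the chain and append the new points to the training set;
    # in practice one would check convergence of the data here and stop
    train = np.vstack([train, chain[::20]])
```

In the real setting, each point added to `train` costs one \textsc{class} evaluation, which is why concentrating the points where the likelihood is non-negligible pays off.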

Author

Andreas Nygaard (Aarhus University)

Co-authors

Steen Hannestad (Aarhus University), Emil Brinch Holm, Thomas Tram (Aarhus University)

Presentation materials

There are no materials yet.