Description
Data from the next generation of cosmological surveys, such as Euclid and LSST, have the statistical power to distinguish modified gravity or dark energy models from LCDM, and to determine neutrino masses. But with excellent data come formidable challenges in analysis. In this talk I report on recent advances that attempt to ensure that any evidence for new physics is well grounded. The natural Bayesian end-point of an experiment to determine the parameters of a theoretical model is the posterior: the probability density of the parameters given the data. It encompasses everything we know after collecting the data. It is not usually a straightforward quantity to compute, and often the only viable route is to build a hierarchical model of the data, in which all sources of variability, from populations to measurement errors, are included. Building a reasonably complete model of the data involves introducing very large numbers of unmeasured ‘latent’ variables, such as the true values underlying noisy measurements. Sampling the posterior then means sampling a very high-dimensional (typically million-dimensional) parameter space. For cosmic shear, most of these parameters are the true values of the lensing distortion in pixels on the sky. With efficient Hamiltonian Monte Carlo (HMC) samplers, this sampling is feasible, simultaneously recovering the cosmological parameters and samples of the mass maps on the sky.
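The hierarchical-model-plus-HMC idea is concrete enough to sketch in miniature. Below is a toy illustration, not the speaker's actual analysis: latent true values x_i behind noisy observations y_i, population hyperparameters (mu, sigma) standing in for the cosmological parameters, and a hand-written HMC sampler exploring the joint space of hyperparameters and latents. The specific model, priors, and all variable names are illustrative assumptions.

```python
# Minimal sketch: joint HMC sampling of hyperparameters and latent true values
# in a toy hierarchical model (assumed setup, not the speaker's pipeline).
import numpy as np

rng = np.random.default_rng(0)

# --- Toy data: noisy observations y_i of latent true values x_i ---
N = 50                       # number of latent variables (millions in practice)
NOISE_SIGMA = 0.5            # known measurement noise
true_mu, true_sigma = 1.0, 2.0
x_true = rng.normal(true_mu, true_sigma, N)
y = x_true + rng.normal(0.0, NOISE_SIGMA, N)

# Parameter vector theta = [mu, log_sigma, x_1 ... x_N]
def log_post_and_grad(theta):
    """Log posterior and its gradient for the joint (hyperparameter, latent) space."""
    mu, log_sigma, x = theta[0], theta[1], theta[2:]
    sigma2 = np.exp(2.0 * log_sigma)
    r_data = y - x                        # likelihood residuals
    r_pop = x - mu                        # population residuals
    logp = (-0.5 * np.sum(r_data**2) / NOISE_SIGMA**2
            - 0.5 * np.sum(r_pop**2) / sigma2 - N * log_sigma
            - 0.5 * mu**2 / 100.0         # broad prior: mu ~ N(0, 10)
            - 0.5 * log_sigma**2)         # prior: log_sigma ~ N(0, 1)
    grad = np.empty_like(theta)
    grad[0] = np.sum(r_pop) / sigma2 - mu / 100.0
    grad[1] = np.sum(r_pop**2) / sigma2 - N - log_sigma
    grad[2:] = r_data / NOISE_SIGMA**2 - r_pop / sigma2
    return logp, grad

def hmc_step(theta, eps=0.05, n_leapfrog=20):
    """One HMC transition: leapfrog integration plus a Metropolis accept/reject."""
    p = rng.normal(size=theta.size)       # fresh Gaussian momentum
    logp, grad = log_post_and_grad(theta)
    H0 = -logp + 0.5 * p @ p              # initial Hamiltonian
    th = theta.copy()
    p = p + 0.5 * eps * grad              # initial half-step in momentum
    for i in range(n_leapfrog):
        th = th + eps * p                 # full step in position
        logp, grad = log_post_and_grad(th)
        if i < n_leapfrog - 1:
            p = p + eps * grad            # full step in momentum
    p = p + 0.5 * eps * grad              # final half-step in momentum
    H1 = -logp + 0.5 * p @ p
    if np.log(rng.uniform()) < H0 - H1:   # Metropolis acceptance
        return th
    return theta

theta = np.concatenate([[0.0, 0.0], y])   # initialise latents at the noisy data
samples = []
for step in range(2000):
    theta = hmc_step(theta)
    if step >= 500:                       # discard burn-in
        samples.append(theta[:2].copy())
samples = np.array(samples)
print("posterior mean of mu   :", samples[:, 0].mean())
print("posterior mean of sigma:", np.exp(samples[:, 1]).mean())
```

The design choice that matters here is the same one highlighted in the abstract: because HMC uses gradients to make coherent proposals across all N + 2 dimensions at once, its cost grows gently with dimension, which is what makes the jump from this toy problem to the million-dimensional pixel-level cosmic shear posterior plausible.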