Speaker
Description
Type Ia supernovae (SNae Ia), standardisable candles that allow tracing the expansion history of the Universe, are instrumental in constraining cosmological parameters, particularly dark energy. State-of-the-art likelihood-based analyses scale poorly to future large data sets, are limited to simplified probabilistic descriptions, and must explicitly sample a high-dimensional latent posterior to infer the few parameters of interest, which makes them inefficient.

In contrast, truncated marginal neural ratio estimation (TMNRE), an inference technique based on forward simulations, can fully account for complicated redshift uncertainties, contamination from non-SN Ia sources, selection effects, and a realistic instrumental model, while implicitly marginalising latent and population-level parameters to directly derive posteriors for the cosmological parameters of interest. We present an application of TMNRE to supernova cosmology in the context of BAHAMAS, a Bayesian hierarchical model for SALT parameters. We verify that TMNRE produces unbiased and precise posteriors for cosmological parameters from up to 100 000 SNae Ia. With minimal additional effort, we train a neural network to simultaneously infer the O(100 000) latent parameters of the supernovae (e.g. absolute brightnesses).

Lastly, we present recent improvements to the simulator that allow it to realistically model light curves based on a probabilistic spectral energy distribution model (BayeSN), tailoring its output to current and near-future surveys. Analysing these much more complicated data requires the adoption of modern set-based neural network architectures and an extension of the truncation methodology to hierarchies of parameters, which we also discuss.
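To illustrate the core idea behind ratio-based inference from forward simulations, the following is a minimal toy sketch (not the BAHAMAS/TMNRE pipeline, and using a made-up one-parameter Gaussian model): a classifier trained to distinguish joint parameter–data pairs from shuffled (marginal) pairs has an optimal logit equal to the log likelihood-to-evidence ratio, which yields the posterior without an explicit likelihood or any latent-space sampling. A logistic regression on hand-picked features stands in for the neural network.

```python
import numpy as np

# Toy forward model (illustrative only): theta ~ N(0, 1), x ~ N(theta, 1).
rng = np.random.default_rng(0)
N = 20_000
theta = rng.normal(size=N)          # draw parameters from the prior
x = theta + rng.normal(size=N)      # simulate data for each parameter

def feats(t, y):
    """Quadratic features; sufficient to express the Gaussian log ratio."""
    return np.stack([t, y, t * y, t**2, y**2], axis=-1)

# Label 1: matched (joint) pairs; label 0: shuffled (marginal) pairs.
X = np.vstack([feats(theta, x), feats(rng.permutation(theta), x)])
lab = np.concatenate([np.ones(N), np.zeros(N)])

# Full-batch gradient descent on the binary cross-entropy loss; the
# trained logit approximates log r(x, theta) = log p(x|theta) - log p(x).
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - lab
    w -= 0.1 * X.T @ g / len(lab)
    b -= 0.1 * g.mean()

def log_ratio(t, y):
    """Classifier logit = estimated log likelihood-to-evidence ratio."""
    return feats(np.atleast_1d(t), np.atleast_1d(y)) @ w + b

# Posterior for an observation x0: p(theta | x0) ∝ r(x0, theta) p(theta),
# evaluated on a grid -- no explicit likelihood, no latent sampling.
x0 = 1.5
grid = np.linspace(-3.0, 3.0, 301)
post = np.exp(log_ratio(grid, np.full_like(grid, x0)) - grid**2 / 2.0)
theta_map = grid[np.argmax(post)]   # analytic posterior mode is x0/2
```

In TMNRE the classifier is a neural network, the ratio is estimated directly for the marginal posterior of each parameter of interest (the simulator integrates out the latent parameters automatically), and the prior is iteratively truncated to the region of non-negligible posterior mass to focus simulations where they matter.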