High dimensional sampling meeting

Europe/Zurich
Other Institutes

We had two talks on techniques that were highlighted in the last meeting:

1) BAMBI (Will Handley). Will told us that BAMBI is now seriously out of date: the neural network implementation from SkyNet has long been superseded by better tools, and MultiNest has moved on considerably from the version that BAMBI was built against. There is nevertheless much interest in reviving it. Will suggests rewriting BAMBI in pure Python, and Ben, Judita, Joaquin, Melissa, Eduardo, Roberto and Nathan all expressed an interest (other volunteers are welcome). Will is able to coordinate the effort, and the suggestion is that this work will form a section of the paper that emerges from the comparison with other techniques. There is potential for some dedicated coding time early in the new year (Martin will be in Cambridge for 1-2 weeks).

Pat asked how exactly to assess the validity of the neural network's estimate of the likelihood, in particular whether there should be some locality check (rather than a check based on the likelihood value alone). This needs further discussion.
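One possible form such a locality check could take (this is purely an illustrative sketch, not something agreed at the meeting; the function name, the k-nearest-neighbour criterion and the thresholds are all assumptions) is to trust a surrogate prediction only where the query point sits inside the region covered by the surrogate's training set:

```python
import numpy as np

def nn_prediction_is_local(x_query, X_train, k=5, factor=2.0):
    """Heuristic locality check: trust a surrogate's prediction at
    x_query only if x_query lies within the region covered by the
    training set. Compares the distance from x_query to its nearest
    training point against the typical k-th-nearest-neighbour spacing
    inside the training set. All names and thresholds here are
    illustrative choices, not from the meeting."""
    # Distance from the query point to every training point
    d_query = np.linalg.norm(X_train - x_query, axis=1)

    # Typical spacing: for each training point, the distance to its
    # k-th neighbour (column 0 of the sorted matrix is the self-distance 0)
    pairwise = np.linalg.norm(X_train[:, None, :] - X_train[None, :, :], axis=-1)
    pairwise.sort(axis=1)
    typical_spacing = np.median(pairwise[:, k])

    return d_query.min() <= factor * typical_spacing

# A query inside a unit-square training cloud passes; a distant one fails
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))
print(nn_prediction_is_local(np.array([0.5, 0.5]), X))    # True
print(nn_prediction_is_local(np.array([10.0, 10.0]), X))  # False
```

The point of the heuristic is that it uses only the geometry of the training sample, not the likelihood values themselves, which is the distinction Pat raised.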

2) Eduardo talked about Bayesian optimization, which is useful when likelihood evaluations are expensive. Replacing the Gaussian Process surrogate with a random forest may work better for functions that are not smooth, but a random forest does not extrapolate well. A deep neural network is another option. Joaquin and Martin resolved to read some of the literature on this and report back.
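To make the idea concrete, here is a minimal Bayesian-optimization loop in pure numpy: a Gaussian Process surrogate is fitted to the points evaluated so far, and the next point is chosen by an upper-confidence-bound acquisition. This is a generic sketch, not Eduardo's implementation; the kernel length scale, the UCB acquisition and the toy "expensive" likelihood are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, length=0.3):
    # Squared-exponential kernel between two 1-D point sets
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / length**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and standard deviation at test points Xs,
    given training data (X, y). Standard Cholesky-based formulas."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(Xs, Xs)) - np.sum(v**2, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expensive_loglike(x):
    # Stand-in for an expensive likelihood: one call = one scan point
    return -(x - 0.7) ** 2

# BO loop: fit surrogate, pick the maximiser of mean + 2*sigma, evaluate, repeat
grid = np.linspace(0.0, 1.0, 200)
X = np.array([0.1, 0.5, 0.9])          # initial design
y = expensive_loglike(X)
for _ in range(10):
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(mu + 2.0 * sigma)]
    X = np.append(X, x_next)
    y = np.append(y, expensive_loglike(x_next))

print(X[np.argmax(y)])  # best sampled point, near the true maximum at 0.7
```

Only 13 likelihood calls are made in total, which is the whole appeal when each call is expensive; swapping the GP for a random forest or a neural network changes only the surrogate, not the loop.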

There were suggestions for these talks at the next meeting:

a) A talk on Diver (could ask Pat Scott)?
b) A talk on active learning (Sascha to follow up?)
c) Ben to present demo of python interface to GAMBIT scanners.

Roberto and Sascha recommend using Tom Heske's sampling framework (Python); they will follow up and present an update at the next meeting.

Judita and Csaba to look into metrics for comparing the efficacy of scanners. Csaba has found previous literature:
- Mary Kathryn Cowles and Bradley P. Carlin, "Markov Chain Monte Carlo Convergence Diagnostics: A Comparative Review", https://www.jstor.org/stable/2291683, http://people.ee.duke.edu/~lcarin/cowles96markov.pdf
- Stephen P. Brooks and Andrew Gelman, "General Methods for Monitoring Convergence of Iterative Simulations", https://www.tandfonline.com/doi/abs/10.1080/10618600.1998.10474787, http://www.stat.columbia.edu/~gelman/research/published/brooksgelman2.pdf
- Eric B. Ford, "Convergence Diagnostics For Markov chain Monte Carlo", http://astrostatistics.psu.edu/RLectures/diagnosticsMCMC.pdf
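The Brooks & Gelman reference above concerns the potential scale reduction factor, which is one candidate metric for the scanner comparison. A minimal numpy version of the basic (univariate, single-parameter) statistic, as a sketch of what such a metric looks like:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for one parameter.

    chains: array of shape (m, n) -- m chains of n samples each.
    Values close to 1 indicate the chains have mixed; a common
    rule of thumb is to require R-hat below roughly 1.1."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)          # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()    # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(1)
mixed = rng.normal(0.0, 1.0, size=(4, 1000))   # 4 chains, same target
stuck = mixed + np.arange(4)[:, None]          # chains offset from one another
print(gelman_rubin(mixed))  # close to 1
print(gelman_rubin(stuck))  # well above 1
```

This is the basic form; the cited papers discuss refinements (multivariate versions, corrected degrees of freedom) that a real comparison would want.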

Also: the GAMBIT ScannerBit paper. There was also a suggestion to run a very high resolution scan and then compute KL divergences from lower resolution scans (and in toy example cases we actually know the posterior).
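A simple histogram-based estimate of that KL divergence, for one parameter, might look like the following. This is an illustrative sketch of the suggestion, not an agreed procedure; the binning, the regularisation and the toy "scans" are assumptions.

```python
import numpy as np

def kl_from_samples(samples_p, samples_q, bins=30, eps=1e-12):
    """Estimate D_KL(P || Q) from two sample sets by histogramming on a
    common binning. P plays the role of the high-resolution reference
    scan, Q the lower-resolution scan under test."""
    lo = min(samples_p.min(), samples_q.min())
    hi = max(samples_p.max(), samples_q.max())
    p, edges = np.histogram(samples_p, bins=bins, range=(lo, hi))
    q, _ = np.histogram(samples_q, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    # eps regularises empty bins so the logarithm stays finite
    return np.sum(p * np.log((p + eps) / (q + eps)))

rng = np.random.default_rng(2)
reference = rng.normal(0.0, 1.0, size=100_000)  # stand-in for a high-res scan
good = rng.normal(0.0, 1.0, size=5_000)         # same posterior, fewer samples
biased = rng.normal(0.5, 1.0, size=5_000)       # mis-centred posterior
print(kl_from_samples(reference, good))    # small
print(kl_from_samples(reference, biased))  # noticeably larger
```

In the toy-example case where the true posterior is known analytically, the reference histogram could be replaced by the exact density evaluated on the same bins.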

Functions: take the test functions in the MultiNest paper as a starting point.
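One of the MultiNest paper's toy problems is the Gaussian shell likelihood, where the probability mass concentrates on a thin ring. A sketch of the two-shell version follows; the centres, radius and width here are the values I recall from that paper's example and should be checked against it before use.

```python
import numpy as np

def log_gaussian_shell(theta, c, r=2.0, w=0.1):
    """Log-likelihood of a single Gaussian shell centred at c with
    radius r and width w: the density peaks on the ring |theta - c| = r."""
    d = np.linalg.norm(np.atleast_2d(theta) - c, axis=1)
    return -0.5 * np.log(2 * np.pi * w**2) - (d - r) ** 2 / (2 * w**2)

def log_two_shells(theta, c1=(-3.5, 0.0), c2=(3.5, 0.0)):
    # Sum of two well-separated shells -> a multimodal, curved posterior,
    # a deliberately hard target for samplers
    return np.logaddexp(log_gaussian_shell(theta, np.array(c1)),
                        log_gaussian_shell(theta, np.array(c2)))

# A point on the first ring is far more likely than the midpoint between shells
on_ring = np.array([[-1.5, 0.0]])   # distance 2.0 from c1
off_ring = np.array([[0.0, 0.0]])
print(log_two_shells(on_ring) > log_two_shells(off_ring))  # [ True]
```

Functions like this are useful precisely because the posterior is known exactly, so scanner output can be compared against the truth rather than against another scan.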

The next meeting is proposed for three weeks from now.

 

    • 10:30 – 10:40  Introduction (10m)
      Speaker: Martin John White (University of Adelaide (AU))
    • 10:40 – 11:00  BAMBI (20m)
      Speaker: Will Handley (University of Cambridge)
    • 11:00 – 11:20  Bayesian Optimization and Gaussian Processes (20m)
      Speaker: Eduardo Garrido Merchan