High dimensional sampling meeting
Bob presented interesting material on active learning, which raised several ideas for comparison with existing techniques. Once we define interesting test functions, we should compare (for limit setting):
1) Existing global-fit approaches, which are not trivially parallelisable.
2) An approach based on targeted sampling with active learning, wherein each batch of evaluations of the expensive cost function is trivially parallelisable (with some work between batches to choose where to sample the next N points).
For different problems, and for different assumed evaluation times of the cost function, which of these approaches yields the quickest estimate of, e.g., likelihood contours?
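As a concrete illustration of option 2, here is a toy sketch of batched targeted sampling. Everything in it is an assumption for illustration: a 1-D domain, a placeholder cost function, and a naive gap-filling rule standing in for a proper acquisition function. The point is only the structure: each batch of N expensive evaluations is independent (so trivially parallelisable), with cheap work between batches to choose the next N points.

```python
# Hypothetical sketch, not an agreed method: batched sampling of an
# expensive cost function. Each batch of N evaluations could be farmed
# out to N workers; between batches we pick the next N points where the
# current sample set is sparsest (a stand-in for a real acquisition
# function such as expected information gain).
import random

def expensive_cost(x):
    # placeholder for the real, slow cost/likelihood evaluation
    return (x - 0.3) ** 2

def next_batch(evaluated_xs, n_points, lo=0.0, hi=1.0):
    """Choose n_points new points in the widest gaps between samples."""
    xs = sorted(evaluated_xs)
    new_points = []
    for _ in range(n_points):
        # find the widest gap, including the domain edges
        edges = [lo] + xs + [hi]
        gaps = [(edges[i + 1] - edges[i], i) for i in range(len(edges) - 1)]
        width, i = max(gaps)
        midpoint = edges[i] + width / 2.0
        new_points.append(midpoint)
        xs = sorted(xs + [midpoint])
    return new_points

random.seed(0)
# a few initial random evaluations
samples = {x: expensive_cost(x) for x in (random.random() for _ in range(3))}
for _ in range(4):  # four batches
    batch = next_batch(list(samples), n_points=5)
    # in practice: evaluate these N points in parallel on N workers
    for x in batch:
        samples[x] = expensive_cost(x)
print(len(samples))  # 3 initial + 4 batches of 5 = 23 evaluations
```

In a real comparison the gap-filling rule would be replaced by an acquisition function driven by a surrogate model (as in Bob's active-learning material), and the timing of interest is wall-clock time to stable contours, not number of evaluations.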
The week of February 4th to 8th will see some concentrated effort on pyBAMBI in Cambridge. Martin will email the list near the start of the week to canvass interest in a quick meeting, and will also keep the list abreast of developments. We may also use this time to define particular test cases for samplers.
Ben Farmer volunteered to understand MCMC with synthetic likelihoods (the link sent round by Joaquin).
Sascha or a colleague will present arXiv:1901.00875 in the next meeting.
A potential basis for a software steering framework exists here: http://ccb.nki.nl/software/bcm/. People are encouraged to play with this before the next meeting to see whether it is suitable for our purposes. We need a volunteer to create and manage a GitHub repo for our steering framework.
The next meeting will be arranged via a Doodle poll, to be distributed after the week of February 4th to 8th.