Dark Machines sampling meeting
Bob updated us on the status of the Python framework for standardising sampler comparisons. A GitHub package is taking shape here:
https://github.com/DarkMachines/high-dimensional-sampling
So far, there is a directory structure for the package, and everything required except the package itself! Bob expects to complete this within a month, and will contact Joer to discuss how to proceed.
Volunteer names were collected for the different algorithms discussed at the Trieste meeting:
https://docs.google.com/document/d/16uofNDeEHCkpfJC2CqVhhhSShCO2suoPL2vRk5C7CiE/edit#
For each algorithm, the relevant volunteer needs to:
1) Write a short description of the technique that is comprehensible to a wide audience.
2) Interface the technique with the Python framework when it exists.
3) Find the optimum parameters of the technique for each given test function.
Almost all of the algorithms have volunteers, but involvement still needs to be requested for a couple of them (Martin will do this).
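As a rough illustration of step 3 above, finding optimum parameters could amount to a simple grid search over a sampler's tunable settings on a known test function. The sketch below is purely illustrative: the toy random-search "sampler", its scale parameter, and the use of the Rosenbrock function are assumptions, not part of the actual framework.

```python
import random
from itertools import product

def rosenbrock(x, y):
    """Standard Rosenbrock test function; known minimum of 0 at (1, 1)."""
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

def toy_sampler(func, n_samples, scale, seed=0):
    """Toy stand-in for a real sampling technique: random search over
    a square of half-width `scale`, returning the best value found."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(n_samples):
        x = rng.uniform(-scale, scale)
        y = rng.uniform(-scale, scale)
        best = min(best, func(x, y))
    return best

# Grid search over the sampler's tunable parameters for this test function.
grid = {"n_samples": [100, 1000], "scale": [2.0, 5.0]}
results = {}
for n, s in product(grid["n_samples"], grid["scale"]):
    results[(n, s)] = toy_sampler(rosenbrock, n, s)

# The parameter combination achieving the lowest function value.
best_params = min(results, key=results.get)
```

A real study would swap in the actual technique and the framework's test functions, but the loop structure (parameter grid, repeated runs, pick the best) would look much the same.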
Judita will look into metrics for comparing scanners. We need to review the literature for useful metrics for the Bayesian case (the Frequentist case is probably covered by the approach in the ScannerBit paper).
Csaba has ideas, which he can develop further, on a quantitative measure of convergence inspired by Monte Carlo integration. Martin suggests calculating the expectation value of each parameter with respect to the probability density function, for test functions where the expectation values are known analytically. Pat: each run needs to be repeated to obtain the uncertainty on the expectation values.
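The check Martin and Pat describe might look like the sketch below: estimate a parameter's expectation value from samples, repeat the run with different seeds to get an uncertainty, and compare against the known answer. The Gaussian "posterior", the chosen true mean of 1.5, and all function names are illustrative assumptions.

```python
import random
import statistics

def run_scan(n_samples, seed, mu=1.5, sigma=0.5):
    """Toy 'scan': draw samples from a Gaussian test density whose
    expectation value mu is known exactly. In practice these would be
    posterior samples produced by the scanner under study."""
    rng = random.Random(seed)
    samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
    return sum(samples) / len(samples)  # estimated E[x]

# Repeat the run with different seeds to get the uncertainty on E[x].
estimates = [run_scan(2000, seed) for seed in range(10)]
mean_est = statistics.mean(estimates)
std_est = statistics.stdev(estimates)

# Convergence check: the bias relative to the known mu = 1.5 should be
# consistent with the run-to-run scatter std_est.
bias = abs(mean_est - 1.5)
```

For a real comparison one would do this per parameter and per test function, and a scanner whose bias greatly exceeds its run-to-run scatter has not converged.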
Martin will prepare a paper skeleton so that people can start drafting explanations of each technique.