Description
Markov Chain Monte Carlo (MCMC) allows efficient estimation of observables in many lattice theories. However, as a critical point in parameter space is approached, typical MCMC algorithms suffer from critical slowing down: autocorrelation lengths in the chain diverge for all observables, demanding ever greater computational cost to achieve the same statistical power. In lattice QCD, for example, critical slowing down presents a significant barrier to approaching the continuum limit and the physical point. We present a new class of MCMC algorithms in which autocorrelation lengths can be systematically improved by optimizing (training) a variational model. Specifically, a machine-learned normalizing flow is used to propose lattice configurations according to an approximate distribution, which is made exact by a Metropolis accept/reject step. In the resulting Markov chain, the autocorrelation time is identical for all observables, and we prove a bound on this autocorrelation time in terms of the Kullback-Leibler (KL) divergence between the machine-learned and true distributions. In a $\phi^4$ scalar field theory, we show that observables computed with the proposed method agree with standard results, and we demonstrate control over the autocorrelation time in regions of parameter space where standard MCMC methods suffer from critical slowing down.
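As an illustrative sketch (not code from the talk), the accept/reject step described above is an independence Metropolis update: the flow proposes a configuration $\phi'$ with model density $q(\phi')$, which is accepted with probability $\min\!\left(1,\; \frac{p(\phi')\,q(\phi)}{p(\phi)\,q(\phi')}\right)$, where $p(\phi)\propto e^{-S(\phi)}$. The Python below substitutes a hypothetical Gaussian placeholder for a trained flow; the class name `GaussianModel`, the lattice size, and the couplings are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained normalizing flow: like a flow, it can
# sample a configuration and report the exact model log-density log q(phi).
# Here it is simply i.i.d. Gaussian on an 8x8 lattice.
class GaussianModel:
    def __init__(self, sigma=1.0, shape=(8, 8)):
        self.sigma, self.shape = sigma, shape

    def sample(self):
        phi = rng.normal(0.0, self.sigma, size=self.shape)
        logq = (-0.5 * np.sum(phi**2) / self.sigma**2
                - phi.size * np.log(self.sigma * np.sqrt(2.0 * np.pi)))
        return phi, logq

def action(phi, m2=1.0, lam=0.5):
    """Euclidean phi^4 lattice action, periodic boundaries (couplings assumed)."""
    kinetic = sum(0.5 * np.sum((np.roll(phi, -1, axis=mu) - phi) ** 2)
                  for mu in range(phi.ndim))
    return kinetic + np.sum(0.5 * m2 * phi**2 + lam * phi**4)

def metropolis_step(model, phi, logq, logp):
    """Independence Metropolis: accept with min(1, p(phi')q(phi)/(p(phi)q(phi')))."""
    phi_new, logq_new = model.sample()   # proposal is independent of the current state
    logp_new = -action(phi_new)          # log p up to an (irrelevant) normalization
    if np.log(rng.uniform()) < (logp_new - logq_new) - (logp - logq):
        return phi_new, logq_new, logp_new
    return phi, logq, logp               # on rejection the chain repeats the state

model = GaussianModel(sigma=0.7)
phi, logq = model.sample()
logp = -action(phi)
accepted = 0
for _ in range(10_000):
    phi_new, logq_new, logp_new = metropolis_step(model, phi, logq, logp)
    accepted += phi_new is not phi
    phi, logq, logp = phi_new, logq_new, logp_new
print("acceptance rate:", accepted / 10_000)
```

Even this toy makes two features of the abstract visible: each proposal is independent of the current state, so accepted configurations decorrelate immediately, and the closer the model density $q$ is to the target $p$, the higher the acceptance rate and hence the shorter the autocorrelation time.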