Fast Mixing MCMC without Gradients

Presenter
March 5, 2026
Abstract
In this talk, I will revisit Multilevel Delayed Acceptance (MLDA), an efficient Markov chain Monte Carlo (MCMC) method for high-dimensional Bayesian inverse problems that does not require gradient information of the underlying posterior distribution. Instead, it uses a hierarchy of surrogates, e.g., coarser approximations of the likelihood, to accelerate convergence. In previous works, we have demonstrated the fast convergence and high effective sample size of MLDA, as well as multilevel variance reduction. Recent theoretical works have shown that including gradient information improves the upper bound on the total-variation mixing time of MCMC from O(𝑑𝜅^2) for Random Walk Metropolis to O(𝑑𝜅) for MALA, or O(𝑑^{11/12}𝜅) for HMC. Here, 𝜅 = 𝐿/𝑚 is the condition number of an 𝑚-strongly convex and 𝐿-smooth posterior distribution. We prove that, for a suitable hierarchy of surrogates, the same speedup as for MALA can be achieved by MLDA without explicit gradient evaluations. This requires two modifications of the surrogate densities in MLDA, akin to Tikhonov regularisation and tempering. Beyond these theoretical results, our numerical experiments indicate that successive states of the chain exhibit substantially reduced autocorrelation, even compared to the original MLDA. Numerical evidence also suggests that incorporating tempering into the surrogate densities improves robustness with respect to multimodality of the target density.
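To make the delayed-acceptance mechanism concrete, here is a minimal Python sketch of a single two-level step, the simplest instance of the MLDA hierarchy: a cheap surrogate density screens each proposal before the expensive posterior is evaluated, and a second stage corrects for the surrogate's bias so that detailed balance with respect to the fine target holds. The toy surrogate below (a tempered copy of the target with exponent beta) and all names and parameter values are illustrative assumptions, not the construction analysed in the talk.

```python
import numpy as np

def delayed_acceptance_step(x, log_post, log_surrogate, step_size, rng):
    """One two-level delayed-acceptance step with a symmetric
    random-walk proposal. The cheap surrogate screens proposals,
    so the expensive posterior is evaluated only for promising ones."""
    x_prop = x + step_size * rng.standard_normal(x.shape)

    # Stage 1: screen the proposal with the cheap surrogate density.
    log_a1 = log_surrogate(x_prop) - log_surrogate(x)
    if np.log(rng.uniform()) >= log_a1:
        return x, False  # coarse rejection: the fine posterior is never evaluated

    # Stage 2: correct with the exact posterior so that detailed
    # balance with respect to the fine target is preserved.
    log_a2 = (log_post(x_prop) - log_post(x)) - log_a1
    if np.log(rng.uniform()) < log_a2:
        return x_prop, True
    return x, False

# Toy demo: an ill-conditioned 2-d Gaussian posterior, with a tempered
# version of itself as surrogate (beta < 1 flattens the density).
rng = np.random.default_rng(0)
precisions = np.array([1.0, 100.0])
log_post = lambda x: -0.5 * np.sum(precisions * x**2)
beta = 0.5  # tempering exponent (illustrative value only)
log_surrogate = lambda x: beta * log_post(x)

x, accepted, samples = np.zeros(2), 0, []
for _ in range(20000):
    x, ok = delayed_acceptance_step(x, log_post, log_surrogate, 0.15, rng)
    accepted += ok
    samples.append(x.copy())
print("acceptance rate:", accepted / len(samples))
print("sample variances:", np.var(np.array(samples), axis=0))
```

In MLDA proper, the first stage is not a single screen but a short subchain run on the surrogate, whose endpoint serves as the fine-level proposal; applying this recursively over several surrogate levels yields the full multilevel hierarchy.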
Supplementary Materials