Computationally efficient inference for sparsity-promoting hierarchical Bayesian models

Presenter
March 4, 2026
Abstract
Hierarchical sparsity-promoting priors play a central role in Bayesian inverse problems, enabling adaptive regularization and uncertainty quantification for problems with sparse or piecewise-smooth unknowns. In particular, scale-mixture-of-normals models combine conditionally Gaussian priors with heavy-tailed hyperpriors, yielding flexible hierarchical formulations, but also posterior distributions that are high-dimensional, strongly correlated, and often multimodal, which makes efficient MCMC sampling challenging. In this talk, I will present an approach to accelerate MCMC inference in hierarchical sparsity-promoting models via hierarchical prior normalization. The idea is to construct analytic transport maps that transform the full joint sparsity-promoting prior into a standard normal reference prior. I will further demonstrate how sampling the resulting prior-normalized posterior enables the use of efficient, structure-exploiting MCMC methods, such as elliptical slice sampling, and leads to improved mixing and robustness compared to conventional sampling from the original posterior, across a range of linear and nonlinear inverse problems. This talk is based on joint work with Youssef Marzouk (MIT) and Jonathan Lindbloom (Dartmouth College).
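To illustrate the kind of sampler the normalized formulation unlocks, here is a minimal sketch of elliptical slice sampling in the prior-normalized coordinates, where the reference prior is standard normal N(0, I). The `log_likelihood` used below is a hypothetical Gaussian stand-in, not the models from the talk; the transport map itself is assumed to have already been applied.

```python
import numpy as np

def elliptical_slice_sample(x, log_likelihood, rng):
    """One elliptical slice sampling update (Murray, Adams & MacKay style)
    for a posterior whose prior is standard normal N(0, I) -- the setting
    reached after prior normalization."""
    nu = rng.standard_normal(x.shape)                   # auxiliary draw from N(0, I)
    log_y = log_likelihood(x) + np.log(rng.uniform())   # slice threshold
    theta = rng.uniform(0.0, 2.0 * np.pi)
    theta_min, theta_max = theta - 2.0 * np.pi, theta
    while True:
        # proposal on the ellipse through x and nu
        x_prop = x * np.cos(theta) + nu * np.sin(theta)
        if log_likelihood(x_prop) > log_y:
            return x_prop
        # shrink the angle bracket toward the current state and retry
        if theta < 0.0:
            theta_min = theta
        else:
            theta_max = theta
        theta = rng.uniform(theta_min, theta_max)

# Toy usage with a hypothetical Gaussian log-likelihood
rng = np.random.default_rng(0)
log_lik = lambda z: -0.5 * np.sum((z - 1.0) ** 2)
x = rng.standard_normal(3)
for _ in range(100):
    x = elliptical_slice_sample(x, log_lik, rng)
```

Note that the update has no step-size parameter and always terminates, since the proposal collapses onto the current state as the angle bracket shrinks; this robustness is part of what makes the method attractive once the prior has been normalized.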
Supplementary Materials