Nearly perfect Bayesian inversion with physics-informed deep generative models: Applications to computational imaging

Presenter
March 2, 2026
Abstract
This talk introduces a novel mathematical and computational framework for constructing high-dimensional Bayesian inversion methods that leverage state-of-the-art generative denoising diffusion models as highly informative priors. A central innovation is the use of Langevin diffusion processes and Markov chain Monte Carlo sampling techniques to construct stochastic neural network architectures that perform nearly perfect posterior sampling. The resulting networks are modular and composed of interpretable layers that correspond directly to statistical image priors and data likelihoods. The layers encoding the data likelihood are designed for flexibility, enabling observation model parameters to be specified at inference time and seamlessly combined with pre-trained foundational generative priors. To achieve high computational efficiency, we employ adversarial model distillation, which yields excellent sampling performance with as few as four Markov chain Monte Carlo steps, even in problems exceeding one million dimensions. Our approach is validated through non-asymptotic convergence analysis and extensive numerical experiments in computational image and video restoration. The talk is based on recent work in physics-informed generative AI for Bayesian imaging: https://arxiv.org/abs/2503.12615, which uses a distilled latent Stable Diffusion XL model trained on five billion clean images as a zero-shot prior; and https://arxiv.org/pdf/2507.02686, which integrates pixel-based diffusion models with deep unfolding and diffusion distillation.
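To give a flavour of the kind of sampler the abstract describes, the sketch below implements a plug-and-play unadjusted Langevin algorithm on a small toy problem: the gradient of a Gaussian data likelihood for a linear observation model y = Ax + noise is combined with a prior score obtained from a denoiser via Tweedie's identity, score(x) ≈ (D(x) − x)/ε. This is a minimal illustration under stated assumptions, not the distilled architecture from the papers; all names (`pnp_ula`, the toy denoiser, step sizes) are hypothetical, and the denoiser here is the exact MMSE denoiser of a standard Gaussian prior so that the target posterior is known in closed form.

```python
import numpy as np


def pnp_ula(y, A, sigma_noise, denoiser, eps, step, n_steps, rng):
    """Plug-and-play unadjusted Langevin sampler (illustrative sketch).

    Iterates  x <- x + step * (lik_grad + prior_score) + sqrt(2*step) * xi,
    where the prior score is approximated from a denoiser D via Tweedie's
    identity: score(x) ~ (D(x) - x) / eps.
    """
    x = np.zeros(A.shape[1])
    samples = np.empty((n_steps, A.shape[1]))
    for k in range(n_steps):
        lik_grad = A.T @ (y - A @ x) / sigma_noise**2   # grad of Gaussian log-likelihood
        prior_score = (denoiser(x) - x) / eps           # Tweedie approximation of prior score
        x = x + step * (lik_grad + prior_score) \
            + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
        samples[k] = x
    return samples


# Toy demo: the N(0, I) prior's exact MMSE denoiser stands in for a
# learned diffusion denoiser, so the posterior is Gaussian and checkable.
rng = np.random.default_rng(0)
d = 8
A = rng.standard_normal((16, d)) / 4.0
x_true = rng.standard_normal(d)
sigma = 0.1
y = A @ x_true + sigma * rng.standard_normal(16)
eps = 0.05
denoiser = lambda x: x / (1.0 + eps)   # MMSE denoiser for a N(0, I) prior at noise level eps

samples = pnp_ula(y, A, sigma, denoiser, eps, step=1e-3, n_steps=5000, rng=rng)
post_mean = samples[2000:].mean(axis=0)  # discard burn-in, average the rest
```

In the real method the hand-written `denoiser` is replaced by a pre-trained diffusion denoiser, and the likelihood layer keeps `A` and `sigma` as inference-time inputs; the distillation step discussed in the talk then compresses thousands of such Langevin iterations into a handful of network layers.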