Sparsity Promoting Scale Mixture Priors Fit Large Imaging Datasets

Presenter
March 4, 2026
Abstract
Sparsity-promoting priors have been widely proposed as the Bayesian extension of standard regularizers used for sparse inference. We study a family of hierarchical priors, generalized-gamma scale mixtures of normals, developed for sparse recovery in medical imaging. This family has been proposed as an attractive alternative to standard sparsity-promoting priors because it offers separate control over tail and peak behavior, allowing independent modeling of very small and very large entries, while requiring only two shape parameters and one scale parameter. It includes all of the standard $\ell_p$ priors, as well as the Gaussian, Laplacian, and $t$ priors, as special cases, though it does not include priors that enforce an elastic-net or horseshoe penalty. When applied to signals, it can represent Gaussian process (GP) and $\alpha$-stable processes as special cases. Its hierarchical structure allows scalable inference and uncertainty quantification via coordinate blocking procedures. We provide the first large-scale empirical evidence that this prior family can fit a wide variety of image and audio datasets that vary by source, scale, and representation. We show that the prior provides accurate fits to speech recordings, remote sensing images of cities and agriculture, ice-bed topography, sea-floor magnetic anomalies, surface air temperature maps, simulated brain MRI scans, and two large natural image datasets (COCO and SegmentAnything) under seven different representations (Fourier, Gabor, Haar, AlexNet, short-time Fourier, continuous wavelet, Erblet). The standard priors (e.g., GP priors) included as special cases fail to fit the majority of the cases captured by the hierarchical prior. We contrast our results with a small set of images widely adopted as traditional benchmarks and explore signal features that cannot be described by the chosen prior.
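To make the hierarchy concrete, the sketch below samples from a generalized-gamma scale mixture of normals. The parametrization (variance $\theta$ with density proportional to $\theta^{r\beta-1}\exp(-(\theta/\vartheta)^{\beta})$, with shape parameters $r$, $\beta$ and scale $\vartheta$) and the helper name sample_gg_scale_mixture are illustrative assumptions, not the speakers' implementation; the abstract does not fix a parametrization.

import numpy as np

def sample_gg_scale_mixture(n, r, beta, vartheta, rng=None):
    """Draw x_i ~ N(0, theta_i) with theta_i generalized-gamma distributed.

    Assumed parametrization: theta has density proportional to
    theta**(r*beta - 1) * exp(-(theta / vartheta)**beta), so theta can be
    drawn as vartheta * G**(1/beta) with G ~ Gamma(shape=r, scale=1).
    """
    rng = np.random.default_rng() if rng is None else rng
    g = rng.gamma(shape=r, scale=1.0, size=n)  # standard gamma draws
    theta = vartheta * g ** (1.0 / beta)       # generalized-gamma variances
    return rng.normal(loc=0.0, scale=np.sqrt(theta))  # normal scale mixture

rng = np.random.default_rng(0)
x_laplace = sample_gg_scale_mixture(10_000, r=1.0, beta=1.0, vartheta=1.0, rng=rng)
x_student = sample_gg_scale_mixture(10_000, r=2.0, beta=-1.0, vartheta=1.0, rng=rng)

Under this parametrization, beta = 1 with r = 1 gives exponential mixing and hence Laplace marginals, while beta = -1 gives inverse-gamma mixing and hence Student-$t$ marginals, matching two of the special cases named in the abstract; other choices of (r, beta) trade off tail heaviness against the sharpness of the peak at zero.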
Supplementary Materials