Preconditioned and iterative flow matching
Presenter
March 3, 2026
Abstract
Flow matching and score-based diffusion train vector fields under intermediate distributions $p_t$, whose geometry can strongly affect their optimization. We show that the covariance $\Sigma_t$ of $p_t$ governs optimization bias: when
$\Sigma_t$ is ill-conditioned, gradient-based training rapidly fits high-variance directions while systematically under-optimizing low-variance modes, causing learning to plateau at suboptimal weights. We formalize this effect in analytically tractable settings and propose reversible, label-conditional \emph{preconditioning} maps that reshape the geometry of $p_t$ by improving the conditioning of $\Sigma_t$ without altering the underlying generative model. Rather than accelerating early convergence, preconditioning primarily mitigates optimization stagnation by enabling continued progress along previously suppressed directions. Across MNIST latent flow matching and \textcolor{red}{additional datasets}, we empirically track conditioning diagnostics, late-time errors, and distributional metrics, and show that preconditioning consistently yields better-trained models by avoiding suboptimal plateaus.
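The plateau mechanism described above can be illustrated on a toy quadratic whose Hessian plays the role of $\Sigma_t$. The following numpy sketch (an illustrative example, not the paper's actual training setup) runs gradient descent on $\tfrac12 (w - w^\star)^\top \Sigma (w - w^\star)$ with an ill-conditioned $\Sigma$, then repeats the run after a whitening reparametrization $w = P v$ with $P = \Sigma^{-1/2}$, which makes the effective Hessian the identity:

```python
import numpy as np

# Quadratic loss 0.5*(w - w*)^T Sigma (w - w*): its Hessian is Sigma,
# so eigenvalues 100 and 0.01 give condition number 10^4. The stable
# step size is set by the largest eigenvalue, so the low-variance
# direction barely moves -- the plateau effect.
Sigma = np.diag([100.0, 0.01])        # ill-conditioned "covariance"
w_star = np.array([1.0, 1.0])         # optimum
lr = 1.0 / 100.0                      # step limited by top eigenvalue

w = np.zeros(2)
for _ in range(1000):
    w -= lr * Sigma @ (w - w_star)    # gradient = Sigma (w - w*)
err_plain = np.abs(w - w_star)        # large residual in the 0.01-mode

# Whitening preconditioner P = Sigma^{-1/2}: in the v = P^{-1} w
# parametrization the effective Hessian is P Sigma P = I, so a unit
# step converges in every direction at once.
P = np.diag(1.0 / np.sqrt(np.diag(Sigma)))
v_star = np.linalg.solve(P, w_star)   # v* with P v* = w*
v = np.zeros(2)
for _ in range(1000):
    v -= 1.0 * (P @ Sigma @ P) @ (v - v_star)
err_precond = np.abs(P @ v - w_star)  # near zero in both directions

print(err_plain, err_precond)
```

After 1000 steps the unpreconditioned run has essentially converged along the eigenvalue-100 direction but retains roughly a $0.9999^{1000} \approx 0.9$ residual along the eigenvalue-$0.01$ direction, while the whitened run converges in both. This mirrors, in miniature, the claim that improving the conditioning of $\Sigma_t$ unlocks progress along previously suppressed directions.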