TY - JOUR
T1 - Adaptive dimension reduction to accelerate infinite-dimensional geometric Markov Chain Monte Carlo
AU - Lan, Shiwei
N1 - Funding Information:
SL was supported by the DARPA-funded program Enabling Quantification of Uncertainty in Physical Systems (EQUiPS), contract W911NF-15-2-0121, when the paper was written. We thank the EQUiPS team for sharing the FEniCS codes for solving the k-ε RANS equations (for both forward and adjoint problems), and especially Umberto Villa at the University of Texas at Austin for his extensive help. We also thank the anonymous reviewers for the constructive comments that helped to improve the manuscript.
Publisher Copyright:
© 2019 Elsevier Inc.
PY - 2019/9/1
Y1 - 2019/9/1
N2 - Bayesian inverse problems rely heavily on efficient and effective inference methods for uncertainty quantification (UQ). Infinite-dimensional MCMC algorithms, defined directly on function spaces, are robust under refinement (through discretization or spectral approximation) of physical models. Recent developments in this class of algorithms have started to incorporate the geometry of the posterior informed by data, so that they are capable of exploring the complex probability structures that frequently arise in UQ for PDE-constrained inverse problems. However, the required geometric quantities, such as the Gauss-Newton Hessian operator or the Fisher information metric, are usually expensive to obtain in high dimensions. On the other hand, most of the geometric information of the unknown parameter space in this setting is concentrated in an intrinsic finite-dimensional subspace. To mitigate the computational cost and scale up the applications of infinite-dimensional geometric MCMC (∞-GMC), we apply geometry-informed algorithms to the intrinsic subspace to probe its complex structure, and simpler methods such as preconditioned Crank-Nicolson (pCN) to its geometry-flat complementary subspace. In this work, we take advantage of dimension reduction techniques to accelerate the original ∞-GMC algorithms. More specifically, partial spectral decomposition (e.g. through randomized linear algebra) of the (prior or Gaussian-approximate posterior) covariance operator is used to identify a certain number of principal eigen-directions as a basis for the intrinsic subspace. The combination of dimension-independent algorithms, geometric information, and dimension reduction yields a more efficient implementation: (adaptive) dimension-reduced infinite-dimensional geometric MCMC. With a small amount of computational overhead, we achieve a speed-up of over 70 times compared to pCN on a simulated elliptic inverse problem and an inverse problem involving turbulent combustion, each with thousands of dimensions after discretization. A number of error bounds comparing various MCMC proposals are presented to predict the asymptotic behavior of the proposed dimension-reduced algorithms.
KW - Bayesian inverse problems
KW - Dimension reduction
KW - High-dimensional sampling
KW - Infinite-dimensional geometric Markov Chain Monte Carlo
KW - Uncertainty quantification
UR - http://www.scopus.com/inward/record.url?scp=85064973740&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85064973740&partnerID=8YFLogxK
U2 - 10.1016/j.jcp.2019.04.043
DO - 10.1016/j.jcp.2019.04.043
M3 - Article
AN - SCOPUS:85064973740
SN - 0021-9991
VL - 392
SP - 71
EP - 95
JO - Journal of Computational Physics
JF - Journal of Computational Physics
ER -