Volume 39, Issue 6
An Acceleration Strategy for Randomize-Then-Optimize Sampling via Deep Neural Networks

Liang Yan & Tao Zhou

J. Comp. Math., 39 (2021), pp. 848-864.

Published online: 2021-10

Export citation
  • Abstract

Randomize-then-optimize (RTO) is widely used for sampling from posterior distributions in Bayesian inverse problems. However, RTO can be computationally intensive for complex problems due to repeated evaluations of the expensive forward model and its gradient. In this work, we present a novel goal-oriented deep neural network (DNN) surrogate approach to substantially reduce the computational burden of RTO. In particular, we propose to draw the training points for the DNN surrogate from a locally approximated posterior distribution, yielding a flexible and efficient sampling algorithm that converges to the direct RTO approach. We demonstrate the accuracy and efficiency of the DNN-RTO approach on a Bayesian inverse problem governed by elliptic PDEs, where it significantly outperforms the traditional RTO.
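
To make the workflow described in the abstract concrete, below is a minimal, self-contained sketch in Python (NumPy/SciPy). It is not the paper's algorithm: the forward model is a throwaway toy, the surrogate is a tiny one-hidden-layer network trained by plain gradient descent, and the sampler uses the simpler perturbed-optimization ("randomized MAP") proposal without the Jacobian projection and Metropolis-type correction that the full RTO method employs. All names and parameters are illustrative assumptions.

# Minimal DNN-RTO-style sketch (assumptions: scalar parameter, Gaussian prior
# and noise, randomized-MAP proposals standing in for full weighted RTO).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# --- Toy forward model (stand-in for an expensive PDE solve) ---
def forward(theta):
    return np.array([np.exp(-theta[0]) + 0.5 * theta[0] ** 2])

sigma, lam = 0.05, 1.0                      # noise std, prior std
theta_true = np.array([1.0])
y = forward(theta_true) + sigma * rng.normal(size=1)   # synthetic data

# --- Step 1: crude local posterior approximation (MAP estimate) ---
def posterior_residual(theta, y_pert, mu_pert):
    # stacked residual whose squared norm is the (perturbed) negative log-posterior
    return np.concatenate([(forward(theta) - y_pert) / sigma,
                           (theta - mu_pert) / lam])

map_est = least_squares(posterior_residual, x0=np.zeros(1),
                        args=(y, np.zeros(1))).x

# --- Step 2: train a cheap surrogate on points near the MAP ---
# Training inputs are drawn from a Gaussian centred at the MAP estimate,
# i.e. a local approximation of the posterior, so the expensive model is
# only evaluated where the sampler is likely to visit.
X = map_est + 0.3 * rng.normal(size=(200, 1))
Y = np.array([forward(x) for x in X]).reshape(-1, 1)

W1 = rng.normal(size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1)) / np.sqrt(32); b2 = np.zeros(1)
lr = 1e-2
for _ in range(5000):                       # full-batch gradient descent on MSE
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    G = 2.0 * (P - Y) / len(X)
    gW2, gb2 = H.T @ G, G.sum(0)
    GH = (G @ W2.T) * (1.0 - H ** 2)
    gW1, gb1 = X.T @ GH, GH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

def surrogate(theta):
    return np.tanh(theta @ W1 + b1) @ W2 + b2

# --- Step 3: RTO-style sampling, with the surrogate replacing the model ---
def surrogate_residual(theta, y_pert, mu_pert):
    return np.concatenate([(surrogate(theta.reshape(1, -1)).ravel() - y_pert) / sigma,
                           (theta - mu_pert) / lam])

samples = []
for _ in range(500):
    y_pert = y + sigma * rng.normal(size=1)     # randomize the data
    mu_pert = lam * rng.normal(size=1)          # randomize the prior mean
    sol = least_squares(surrogate_residual, x0=map_est,
                        args=(y_pert, mu_pert)) # then optimize
    samples.append(sol.x)
samples = np.array(samples)
print("posterior mean ~", samples.mean(0), " std ~", samples.std(0))

The part mirrored from the paper's strategy is Step 2: the surrogate's training points come from a local approximation of the posterior rather than from the prior, so the expensive forward model is evaluated only in the region the sampler actually explores.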

  • AMS Subject Headings

35R30, 62F15, 65C60, 68T05

  • Copyright

COPYRIGHT: © Global Science Press

  • Email address

yanliang@seu.edu.cn (Liang Yan)

tzhou@lsec.cc.ac.cn (Tao Zhou)

  • BibTeX
  • RIS
  • TXT
@Article{JCM-39-848,
  author  = {Yan, Liang and Zhou, Tao},
  title   = {An Acceleration Strategy for Randomize-Then-Optimize Sampling via Deep Neural Networks},
  journal = {Journal of Computational Mathematics},
  year    = {2021},
  volume  = {39},
  number  = {6},
  pages   = {848--864},
  issn    = {1991-7139},
  doi     = {10.4208/jcm.2102-m2020-0339},
  url     = {http://global-sci.org/intro/article_detail/jcm/19914.html}
}
TY  - JOUR
T1  - An Acceleration Strategy for Randomize-Then-Optimize Sampling via Deep Neural Networks
AU  - Yan, Liang
AU  - Zhou, Tao
JO  - Journal of Computational Mathematics
VL  - 39
IS  - 6
SP  - 848
EP  - 864
PY  - 2021
DA  - 2021/10
SN  - 1991-7139
DO  - 10.4208/jcm.2102-m2020-0339
UR  - https://global-sci.org/intro/article_detail/jcm/19914.html
KW  - Bayesian inverse problems, Deep neural network, Markov chain Monte Carlo
ER  -

Yan, Liang and Zhou, Tao (2021). An Acceleration Strategy for Randomize-Then-Optimize Sampling via Deep Neural Networks. Journal of Computational Mathematics. 39 (6). 848-864. doi:10.4208/jcm.2102-m2020-0339