Commun. Comput. Phys., 27 (2020), pp. 33-69.
Published online: 2019-10
An efficient algorithm is proposed for Bayesian model calibration, which is commonly used to estimate the model parameters of non-linear, computationally expensive models using measurement data. The approach is based on Bayesian statistics: using a prior distribution and a likelihood, the posterior distribution is obtained through application of Bayes' law. Our novel algorithm to accurately determine this posterior requires significantly fewer discrete model evaluations than traditional Monte Carlo methods. The key idea is to replace the expensive model with an interpolating surrogate model and to construct the interpolating nodal set maximizing the accuracy of the posterior. To determine such a nodal set, an extension of weighted Leja nodes is introduced, based on a new weighting function. We prove that the convergence of the posterior has the same rate as the convergence of the model. If the convergence of the posterior is measured in the Kullback–Leibler divergence, the rate doubles. The algorithm and its theoretical properties are verified in three different test cases: analytical cases that confirm the correctness of the theoretical findings, Burgers' equation to show its applicability in implicit problems, and finally the calibration of the closure parameters of a turbulence model to show the effectiveness for computationally expensive problems.
ISSN: 1991-7120 · DOI: 10.4208/cicp.OA-2018-0218 · URL: http://global-sci.org/intro/article_detail/cicp/13313.html
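The weighted Leja construction mentioned in the abstract can be sketched as a greedy maximization: each new node maximizes the weight function times the product of distances to the nodes already chosen. The sketch below is a minimal illustration with a generic Gaussian weight; the paper's specific posterior-based weighting function is not reproduced here, and the candidate-grid search is only one simple way to perform the maximization.

```python
import numpy as np

def weighted_leja_nodes(weight, n_nodes, grid):
    """Greedily select weighted Leja nodes from a candidate grid.

    Each new node x maximizes  weight(x) * prod_j |x - x_j|
    over the grid, where x_j are the previously chosen nodes.
    """
    # First node: maximum of the weight function itself.
    nodes = [grid[np.argmax(weight(grid))]]
    for _ in range(n_nodes - 1):
        # Objective: weight times product of distances to existing nodes.
        obj = weight(grid)
        for xj in nodes:
            obj = obj * np.abs(grid - xj)
        nodes.append(grid[np.argmax(obj)])
    return np.array(nodes)

# Example with a Gaussian weight on [-5, 5] (a stand-in for the
# posterior-based weight introduced in the paper).
grid = np.linspace(-5.0, 5.0, 2001)
w = lambda x: np.exp(-0.5 * x**2)
nodes = weighted_leja_nodes(w, 7, grid)
```

Because the distance product vanishes at every existing node, the selected nodes are automatically distinct, and the weight concentrates them where the (here, Gaussian) weight places its mass.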