Numer. Math. Theor. Meth. Appl., 3 (2010), pp. 79-96.
Published online: 2010-03
The state equations of stochastic control problems, which are controlled stochastic differential equations, are discretized by the weak midpoint rule and by predictor-corrector methods within the Markov chain approximation approach. Local consistency of both methods is proved. Numerical tests on a simplified Merton portfolio model show that these two methods approximate the feedback control rules more accurately than the weak Euler-Maruyama discretization used by Krawczyk. This suggests a new way of improving the accuracy of approximating Markov chains for stochastic control problems.
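To make the difference between the baseline and the corrected discretization concrete, here is a minimal sketch (not the paper's exact schemes or parameters) that advances one path of a geometric Brownian motion, a state equation of the same type as in the Merton model, using (a) a weak Euler-Maruyama step and (b) a simple predictor-corrector step with an Euler predictor and a trapezoidal drift corrector. The names `mu`, `sigma`, `x0`, `T`, and `N` are illustrative assumptions.

```python
import math
import random

# Sketch only: dX = mu*X dt + sigma*X dW, discretized two ways.
# All parameter values below are illustrative, not from the paper.

def euler_maruyama(x, mu, sigma, dt, dw):
    """One weak Euler-Maruyama step."""
    return x + mu * x * dt + sigma * x * dw

def predictor_corrector(x, mu, sigma, dt, dw):
    """Euler predictor, then a trapezoidal average of the drift."""
    x_pred = x + mu * x * dt + sigma * x * dw      # predictor (Euler step)
    drift = 0.5 * (mu * x + mu * x_pred)           # corrected (averaged) drift
    return x + drift * dt + sigma * x * dw         # corrector step

def simulate(step, x0=1.0, mu=0.05, sigma=0.2, T=1.0, N=100, seed=42):
    """Drive one path with a fixed seed so both schemes see the same noise."""
    rng = random.Random(seed)
    dt = T / N
    x = x0
    for _ in range(N):
        # Weak schemes may replace the Gaussian increment by a simpler
        # two-point random variable; a Gaussian dW is used here for clarity.
        dw = rng.gauss(0.0, math.sqrt(dt))
        x = step(x, mu, sigma, dt, dw)
    return x

x_em = simulate(euler_maruyama)
x_pc = simulate(predictor_corrector)
print(x_em, x_pc)  # same driving noise, slightly different endpoints
```

Because both runs share the driving noise, any gap between the two endpoints comes purely from the drift correction, which mirrors how the paper compares discretizations on the same approximating chain.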
ISSN: 2079-7338 · DOI: https://doi.org/10.4208/nmtma.2009.m99006 · URL: http://global-sci.org/intro/article_detail/nmtma/5990.html