We treat infinite horizon optimal control problems by solving the associated stationary Bellman equation numerically to compute the value function and an optimal feedback law. The dynamical systems under consideration are spatial discretizations of nonlinear parabolic partial differential equations (PDEs), which means that the Bellman equation suffers from the curse of dimensionality. Its nonlinearity is handled by the Policy Iteration algorithm, which reduces the problem to a sequence of linear equations; these remain the computational bottleneck due to their high dimensions. We reformulate the linearized Bellman equations via the Koopman operator into an operator equation that is solved using a minimal residual method. Using the Koopman operator we identify a preconditioner for the operator equation, which proves essential in our numerical tests. To overcome computational infeasibility we use low-rank hierarchical tensor product approximations (tree-based tensor formats), in particular tensor trains (TT tensors) and multivariate polynomials, together with high-dimensional quadrature, e.g., Monte Carlo methods. Numerical evidence is given by controlling a destabilized version of the viscous Burgers equation and a diffusion equation with an unstable reaction term.
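To illustrate the Policy Iteration step, where each policy evaluation reduces to a linear equation, here is a minimal sketch in Python. It uses a small linear-quadratic surrogate problem (Kleinman's iteration) rather than the paper's nonlinear PDE setting; the system matrices below are hypothetical test data chosen for illustration only.

```python
# Policy iteration for an infinite-horizon LQR problem (Kleinman's algorithm).
# Each policy-evaluation step solves a *linear* (Lyapunov) equation, mirroring
# the linearized Bellman equations that form the bottleneck in high dimensions.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Hypothetical small test system: dx/dt = A x + B u, cost = \int x'Qx + u'Ru dt
A = np.array([[0.0, 1.0], [-1.0, 0.5]])   # unstable open loop (positive trace)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

K = np.array([[0.0, 2.0]])                # initial stabilizing feedback u = -K x
for it in range(20):
    A_cl = A - B @ K
    # Policy evaluation: linear equation  A_cl' P + P A_cl = -(Q + K'RK)
    P = solve_continuous_lyapunov(A_cl.T, -(Q + K.T @ R @ K))
    # Policy improvement
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-12:
        break
    K = K_new

P_are = solve_continuous_are(A, B, Q, R)   # reference: algebraic Riccati solution
print("iterations:", it + 1)
print("||P - P_are|| =", np.linalg.norm(P - P_are))
```

In the high-dimensional PDE setting of the paper, the per-iteration linear equation cannot be assembled and solved directly; this is where the Koopman-operator reformulation, the preconditioned minimal residual method, and the low-rank tensor formats come in.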
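The tensor-train (TT) format mentioned above stores a high-dimensional array through a chain of small three-dimensional cores, so that storage grows linearly rather than exponentially in the dimension. The following is a minimal TT-SVD sketch in plain NumPy, not the authors' implementation, shown on a separable example function where all TT ranks are one.

```python
# Minimal TT-SVD: compress a full tensor into tensor-train cores via
# sequential truncated SVDs, then contract the cores back for verification.
import numpy as np

def tt_svd(tensor, tol=1e-10):
    """Decompose a full tensor into TT cores of shape (r_{k-1}, n_k, r_k)."""
    dims = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        mat = mat.reshape(rank * dims[k], -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        new_rank = max(1, int(np.sum(s > tol * s[0])))   # rank truncation
        cores.append(u[:, :new_rank].reshape(rank, dims[k], new_rank))
        mat = s[:new_rank, None] * vt[:new_rank]
        rank = new_rank
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores into a full tensor (only feasible for small dimensions)."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))

# Separable test function exp(-(x1+x2+x3+x4)) on a 8^4 grid => all TT ranks are 1,
# so 4 cores with 8 entries each replace the 4096-entry full tensor.
grid = np.linspace(0.0, 1.0, 8)
x1, x2, x3, x4 = np.meshgrid(grid, grid, grid, grid, indexing="ij")
values = np.exp(-(x1 + x2 + x3 + x4))
cores = tt_svd(values)
print("core shapes:", [c.shape for c in cores])
print("max reconstruction error:", np.max(np.abs(values - tt_reconstruct(cores))))
```

In the paper's setting the value function is never formed as a full tensor; it is represented and manipulated directly in such a low-rank format, with high-dimensional quadrature (e.g., Monte Carlo) used to evaluate the required inner products.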