Commun. Comput. Phys., 37 (2025), pp. 575-602.
Published online: 2025-02
Dynamic mode decomposition (DMD), as a data-driven method, has been widely used to construct reduced-order models (ROMs) because of its good performance in time extrapolation. However, existing DMD-based ROMs suffer from high storage and computational costs for high-dimensional problems. To mitigate this problem, we develop a new DMD-based ROM, TDMD-GPR, by combining tensor train decomposition (TTD) and Gaussian process regression (GPR). TTD decomposes the high-dimensional snapshot tensor into multiple factors, including parameter-dependent and time-dependent factors. The parameter-dependent factor is fed into GPR to build the map between a parameter value and its factor vector. For any parameter value, multiplying the corresponding parameter-dependent factor vector by the time-dependent factor matrix yields a matrix that describes the temporal behavior of the spatial basis for that parameter value; this matrix is then used to train the DMD model. In addition, incremental singular value decomposition is adopted to select a collection of important instants, which further reduces the computational and storage costs of TDMD-GPR. A comparison between TDMD and standard DMD in terms of computational and storage complexity shows that TDMD is more advantageous. The performance of TDMD and TDMD-GPR is assessed on several test cases, and the numerical results confirm their effectiveness.
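The abstract describes a multi-step pipeline: TT decomposition of the snapshot tensor, GPR over the parameter-dependent factor, and DMD on the resulting temporal coefficients. The sketch below is a minimal illustration of that pipeline, not the authors' implementation: the tensor layout (space × parameter × time), the helper names, the rank choices, and the synthetic data are assumptions made for illustration, scikit-learn's `GaussianProcessRegressor` stands in for the paper's GPR, and the incremental-SVD instant-selection step is omitted.

```python
# Illustrative sketch of a TDMD-GPR-style pipeline (assumed layout and helpers).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def tt_svd_3d(T, r1, r2):
    """TT-SVD of a 3rd-order snapshot tensor T of shape (n_space, n_param, n_time)."""
    n, p, m = T.shape
    # Core 1: spatial basis, from the mode-1 unfolding (n x p*m).
    U1, S1, V1 = np.linalg.svd(T.reshape(n, p * m), full_matrices=False)
    G1 = U1[:, :r1]                                        # spatial factor, (n, r1)
    W = (np.diag(S1[:r1]) @ V1[:r1, :]).reshape(r1 * p, m)
    # Core 2: parameter-dependent core; core 3: time-dependent factor.
    U2, S2, V2 = np.linalg.svd(W, full_matrices=False)
    G2 = U2[:, :r2].reshape(r1, p, r2)                     # parameter-dependent core, (r1, p, r2)
    G3 = np.diag(S2[:r2]) @ V2[:r2, :]                     # time-dependent factor, (r2, m)
    return G1, G2, G3

def fit_dmd(C):
    """Exact DMD on a temporal-coefficient matrix C of shape (r1, n_time)."""
    X, Y = C[:, :-1], C[:, 1:]
    A = Y @ np.linalg.pinv(X)                              # best-fit linear map X_{k+1} ~ A X_k
    lam, Phi = np.linalg.eig(A)
    b = np.linalg.lstsq(Phi, C[:, 0], rcond=None)[0]       # mode amplitudes
    return lam, Phi, b

def dmd_predict(lam, Phi, b, steps):
    """Extrapolate the temporal coefficients `steps` time levels ahead."""
    powers = lam[:, None] ** np.arange(steps)              # (r1, steps)
    return np.real(Phi @ (powers * b[:, None]))

# --- Offline stage (synthetic data, shapes only) ------------------------------
n, p, m, r1, r2 = 200, 8, 50, 6, 4
params = np.linspace(0.1, 1.0, p)[:, None]                 # training parameter values
T = np.random.rand(n, p, m)                                # snapshot tensor (space, param, time)
G1, G2, G3 = tt_svd_3d(T, r1, r2)

# GPR maps a parameter value to its (vectorized) parameter-dependent factor.
gpr = GaussianProcessRegressor().fit(params, G2.transpose(1, 0, 2).reshape(p, -1))

# --- Online stage: a new parameter value --------------------------------------
mu_star = np.array([[0.37]])
G2_star = gpr.predict(mu_star).reshape(r1, r2)             # predicted parameter factor
C = G2_star @ G3                                           # temporal behavior of the spatial basis, (r1, m)
lam, Phi, b = fit_dmd(C)                                   # DMD trained on the coefficients
C_future = dmd_predict(lam, Phi, b, steps=m + 10)          # time extrapolation
field = G1 @ C_future                                      # reconstructed snapshots, (n, m + 10)
```

Because DMD is fit on an r1 × m coefficient matrix rather than on full n × m snapshot matrices for every parameter, the online cost scales with the TT ranks instead of the spatial dimension, which is the storage and computational advantage the abstract attributes to TDMD over standard DMD.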