Volume 16, Issue 3
Convergence Analysis of a Quasi-Monte Carlo-Based Deep Learning Algorithm for Solving Partial Differential Equations

Fengjiang Fu & Xiaoqun Wang

Numer. Math. Theor. Meth. Appl., 16 (2023), pp. 668-700.

Published online: 2023-08

  • Abstract

Deep learning has achieved great success in solving partial differential equations (PDEs), where the loss is often defined as an integral. The accuracy and efficiency of these algorithms depend greatly on the quadrature method. We propose to apply quasi-Monte Carlo (QMC) methods to the Deep Ritz Method (DRM) for solving Neumann problems for the Poisson equation and the static Schrödinger equation. For the error estimation, we decompose the error of using the deep learning algorithm to solve PDEs into the generalization error, the approximation error, and the training error. We establish upper bounds for each and prove that the QMC-based DRM achieves an asymptotically smaller error bound than the DRM. Numerical experiments show that the proposed method converges faster in all cases and that the variances of the gradient estimators of the randomized QMC-based DRM are much smaller than those of the DRM, which illustrates the superiority of QMC over MC in deep learning.
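To illustrate the core idea, the minimal PyTorch sketch below contrasts plain Monte Carlo nodes with scrambled Sobol' quasi-Monte Carlo nodes when estimating a Deep Ritz-type energy on the unit cube. This is not the authors' code: the network architecture, the source term f, and the sample size N are illustrative assumptions.

# Minimal sketch (illustrative assumptions, not the authors' implementation):
# MC vs. scrambled Sobol' QMC sampling for a Deep Ritz-type loss on [0,1]^d.
import torch

d, N = 2, 4096                      # dimension and number of quadrature nodes (assumed)
u_net = torch.nn.Sequential(        # small network playing the role of the trial function
    torch.nn.Linear(d, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

def f(x):                           # hypothetical source term of the Poisson problem
    return torch.ones(x.shape[0], 1)

def ritz_energy(x):
    # Empirical Ritz energy: (1/N) * sum_i [ 0.5*|grad u(x_i)|^2 - f(x_i) u(x_i) ]
    x = x.clone().requires_grad_(True)
    u = u_net(x)
    grad_u, = torch.autograd.grad(u.sum(), x, create_graph=True)
    return (0.5 * (grad_u ** 2).sum(dim=1, keepdim=True) - f(x) * u).mean()

# Plain Monte Carlo nodes: i.i.d. uniform points in [0,1]^d.
x_mc = torch.rand(N, d)

# Quasi-Monte Carlo nodes: a scrambled Sobol' sequence (randomized QMC).
sobol = torch.quasirandom.SobolEngine(dimension=d, scramble=True)
x_qmc = sobol.draw(N)

loss_mc, loss_qmc = ritz_energy(x_mc), ritz_energy(x_qmc)

In a randomized QMC-based training loop one would redraw scrambled Sobol' points at each step and back-propagate through loss_qmc in the usual way; the paper's analysis concerns the error bounds and the variance of such gradient estimators.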

  • AMS Subject Headings

35J20, 35Q68, 65D30, 65N15, 68T07

  • Copyright

COPYRIGHT: © Global Science Press

  • Keywords

Deep Ritz method, quasi-Monte Carlo, Poisson equation, static Schrödinger equation, error bound.

  • Citation

Fu, Fengjiang and Wang, Xiaoqun (2023). Convergence Analysis of a Quasi-Monte Carlo-Based Deep Learning Algorithm for Solving Partial Differential Equations. Numerical Mathematics: Theory, Methods and Applications, 16 (3), 668-700. doi:10.4208/nmtma.OA-2022-0166