Commun. Comput. Phys., 34 (2023), pp. 813-836.
Published online: 2023-10
DOI: https://doi.org/10.4208/cicp.OA-2023-0063
In this paper, we give the first rigorous error estimate for the Weak Adversarial Neural Networks (WAN) method applied to second-order parabolic PDEs. Decomposing the total error into an approximation error and a statistical error, we first show that the weak solution can be approximated by $ReLU^2$ networks to arbitrary accuracy, and then prove that the statistical error is bounded by the Rademacher complexity of the network class, which in turn is bounded by an integral involving the covering numbers and the pseudo-dimension of the $ReLU^2$ function space. Combining the two bounds, we prove that the error of the WAN method is well controlled provided the depth and width of the neural network and the numbers of samples are properly chosen. Our result also reveals a certain freedom in choosing the numbers of samples on $\partial\Omega$ and along the time axis.
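Schematically, and up to constants (the notation here is ours, not the paper's), the decomposition driving the analysis reads
$$\|u_{\hat\theta}-u^{*}\| \;\lesssim\; \underbrace{\inf_{u_\theta\in\mathcal{F}}\|u_\theta-u^{*}\|}_{\text{approximation error}} \;+\; \underbrace{\sup_{v\in\mathcal{F}}\bigl|\mathcal{L}(v)-\widehat{\mathcal{L}}_n(v)\bigr|}_{\text{statistical error}},$$
where $\mathcal{F}$ is the $ReLU^2$ network class, $\mathcal{L}$ is the continuous (population) loss, $\widehat{\mathcal{L}}_n$ is its Monte-Carlo estimate from $n$ samples, and $u_{\hat\theta}$ is the network minimizing $\widehat{\mathcal{L}}_n$; the statistical term is then controlled through the Rademacher complexity of $\mathcal{F}$.

To make the object of the analysis concrete, here is a minimal sketch of a WAN-style min-max loop for a toy heat equation $u_t - u_{xx} = f$ on $(0,1)^2$, written in PyTorch. The network sizes, the constant forcing term, and the omission of the initial/boundary penalty terms are illustrative assumptions, not the authors' implementation:

```python
import torch

torch.manual_seed(0)

class ReLU2Net(torch.nn.Module):
    """Fully connected network with ReLU^2 activations, the function class
    analyzed in the paper. Width and depth are illustrative choices, not
    the values dictated by the error analysis."""
    def __init__(self, d_in=2, width=40, depth=3):
        super().__init__()
        dims = [d_in] + [width] * depth + [1]
        self.layers = torch.nn.ModuleList(
            torch.nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:]))

    def forward(self, z):
        for lin in self.layers[:-1]:
            z = torch.relu(lin(z)) ** 2          # ReLU^2 activation
        return self.layers[-1](z)

u_net, phi_net = ReLU2Net(), ReLU2Net()          # trial solution, test function
opt_u   = torch.optim.Adam(u_net.parameters(),   lr=1e-3)
opt_phi = torch.optim.Adam(phi_net.parameters(), lr=1e-3)

def weak_residual(n=256):
    # Monte-Carlo estimate of the weak form of u_t - u_xx = f on (0,1)^2:
    #   <u_t, phi> + <u_x, phi_x> - <f, phi>,  normalized by ||phi||^2.
    z = torch.rand(n, 2, requires_grad=True)     # columns: (t, x)
    u, phi = u_net(z), phi_net(z)
    du   = torch.autograd.grad(u,   z, torch.ones_like(u),   create_graph=True)[0]
    dphi = torch.autograd.grad(phi, z, torch.ones_like(phi), create_graph=True)[0]
    f = torch.ones(n, 1)                         # toy constant forcing (assumption)
    res = (du[:, :1] * phi + du[:, 1:] * dphi[:, 1:] - f * phi).mean()
    return res ** 2 / ((phi ** 2).mean() + 1e-8)

for step in range(2000):
    # ascent step: the test network tries to expose the weak residual
    opt_phi.zero_grad(); (-weak_residual()).backward(); opt_phi.step()
    # descent step: the solution network reduces the residual (the
    # initial/boundary penalties of the full method are omitted here)
    opt_u.zero_grad(); weak_residual().backward(); opt_u.step()
```

In this sketch the Monte-Carlo sample count `n` plays the role of the "sample numbers" in the theorem: the statistical-error bound dictates how `n` must scale with the network's depth and width for the overall error to stay controlled.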