Commun. Comput. Phys., 32 (2022), pp. 1007-1038.
Published online: 2022-10
doi: https://doi.org/10.4208/cicp.OA-2021-0211
In this work, we study gradient-based regularization methods for neural networks. We mainly focus on two regularization methods: total variation and Tikhonov regularization. Adding the regularization term to the training loss is equivalent to using neural networks to solve certain variational problems, often in high dimensions in practical applications. We introduce a general framework to analyze the error between neural network solutions and true solutions of variational problems. The error consists of three parts: the approximation error of neural networks, the quadrature error of numerical integration, and the optimization error. We also apply the proposed framework to two-layer networks to derive an a priori error estimate when the true solution belongs to the so-called Barron space. Moreover, we conduct numerical experiments showing that neural networks can solve the corresponding variational problems sufficiently well. Networks with gradient-based regularization are also much more robust in image applications.
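As a concrete illustration of the kind of regularized training loss discussed in the abstract, the following is a minimal sketch (not the authors' code) in PyTorch of a data-fidelity term combined with a Tikhonov or total-variation penalty on the gradient of the network with respect to its input. The architecture, the quadratic fidelity term, and the weight `lambda_reg` are illustrative assumptions; the random batch of sample points plays the role of the numerical quadrature mentioned in the error decomposition.

```python
# Minimal sketch: gradient-based regularization of a neural network loss.
# Assumptions (not from the paper): a small fully connected model, a quadratic
# data-fidelity term, and Monte Carlo sampling as the quadrature rule.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
lambda_reg = 1e-3  # regularization weight (hypothetical value)

def regularized_loss(x, y_target, mode="tikhonov"):
    """Data-fidelity term plus a gradient-based penalty.

    mode="tikhonov": squared L2 norm of grad_x u (H^1-type seminorm).
    mode="tv":       L2 norm of grad_x u (total-variation-type penalty).
    """
    x = x.requires_grad_(True)
    u = model(x)
    fidelity = ((u - y_target) ** 2).mean()

    # grad_x u via autograd; create_graph=True keeps the penalty differentiable
    grad_u, = torch.autograd.grad(u.sum(), x, create_graph=True)
    if mode == "tikhonov":
        penalty = (grad_u ** 2).sum(dim=1).mean()
    else:  # total variation
        penalty = grad_u.norm(dim=1).mean()

    return fidelity + lambda_reg * penalty

# One optimization step on random sample points (acting as quadrature nodes)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(256, 2)
y = torch.sin(x.sum(dim=1, keepdim=True))  # toy target data
loss = regularized_loss(x, y)
opt.zero_grad()
loss.backward()
opt.step()
```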