Commun. Comput. Phys., 28 (2020), pp. 1707-1745.
Published online: 2020-11
We study a family of $H^m$-conforming piecewise polynomials based on artificial neural networks, referred to as the finite neuron method (FNM), for the numerical solution of $2m$-th-order partial differential equations in $\mathbb{R}^d$ for any $m,d\geq 1$, and we provide a convergence analysis for this method. Given a general domain $\Omega\subset\mathbb{R}^d$ and a partition $\mathcal{T}_h$ of $\Omega$, it is still an open problem in general how to construct a conforming finite element subspace of $H^m(\Omega)$ that has adequate approximation properties. Using techniques from artificial neural networks, we construct a family of $H^m$-conforming functions consisting of piecewise polynomials of degree $k$ for any $k\geq m$, and we obtain error estimates when they are applied to solve elliptic boundary value problems of any order in any dimension. For example, the error estimate $\|u-u_N\|_{H^m(\Omega)}=\mathcal{O}(N^{-\frac{1}{2}-\frac{1}{d}})$ is obtained for the error between the exact solution $u$ and the finite neuron approximation $u_N$. We also discuss the differences and relationship between the finite neuron method and finite element methods (FEM). For example, for the finite neuron method the underlying finite element grid is not given a priori, and the discrete solution can only be obtained by solving a non-linear and non-convex optimization problem. Despite the many desirable theoretical properties of the finite neuron method analyzed in this paper, its practical value requires further investigation, as the aforementioned non-linear and non-convex optimization problem can be expensive and challenging to solve. For completeness and the convenience of the reader, some basic known results and their proofs are included.
DOI: https://doi.org/10.4208/cicp.OA-2020-0191
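To make the kind of computation described in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of a finite-neuron-style Ritz method for the simplest setting $m=1$, $d=1$, $k=1$: the trial functions are shallow ReLU networks, which are $H^1$-conforming piecewise linear functions whose breakpoints are determined by the trained parameters rather than by a fixed grid, and the discrete solution is obtained by minimizing the energy functional with a general-purpose optimizer. The model problem, the manufactured solution, and all function names below are illustrative assumptions.

```python
# Sketch of a finite-neuron-style Ritz method for the model problem
#   -u'' + u = f  on (0,1),  natural (Neumann) boundary conditions,
# i.e. m = 1, d = 1, k = 1.  Trial functions:
#   u_N(x) = sum_i a_i * relu(w_i * x + b_i) + c,
# trained by minimizing the Ritz energy
#   J(v) = 1/2 * int (v'^2 + v^2) dx - int f v dx,
# which is the non-linear, non-convex optimization problem the abstract refers to.
import numpy as np
from scipy.optimize import minimize

N = 20                                        # number of neurons
xq, wq = np.polynomial.legendre.leggauss(200) # Gauss quadrature on [-1, 1]
xq = 0.5 * (xq + 1.0)                         # map nodes to (0, 1)
wq = 0.5 * wq                                 # rescale weights accordingly

def f(x):
    # Manufactured right-hand side so that u(x) = cos(pi x) is the exact
    # solution (it satisfies the homogeneous Neumann boundary conditions).
    return (1.0 + np.pi**2) * np.cos(np.pi * x)

def unpack(theta):
    w, b, a = np.split(theta[:-1], 3)         # inner weights, biases, outer weights
    return w, b, a, theta[-1]                 # last entry is the constant term

def u_and_du(theta, x):
    """Network value u_N and derivative u_N' at the points x."""
    w, b, a, c = unpack(theta)
    z = np.outer(x, w) + b                    # (n_quad, N) pre-activations
    u = np.maximum(z, 0.0) @ a + c
    du = (z > 0.0).astype(float) @ (a * w)    # d/dx relu(w x + b) = w * 1_{z>0}
    return u, du

def energy(theta):
    """Discrete Ritz energy J(u_N) evaluated by quadrature."""
    u, du = u_and_du(theta, xq)
    return np.sum(wq * (0.5 * (du**2 + u**2) - f(xq) * u))

rng = np.random.default_rng(0)
theta0 = rng.standard_normal(3 * N + 1)
result = minimize(energy, theta0, method="L-BFGS-B")  # non-convex in theta

u_h, _ = u_and_du(result.x, xq)
err = np.sqrt(np.sum(wq * (u_h - np.cos(np.pi * xq))**2))
print(f"approximate L2 error with N = {N} neurons: {err:.3e}")
```

Unlike a finite element method, nothing in this sketch fixes a mesh: the piecewise-linear structure of $u_N$ emerges from the optimized parameters, and the quality of the result depends on how well the non-convex minimization is solved, which is exactly the practical caveat raised in the abstract.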