Volume 1, Issue 4
Approximation of Functionals by Neural Network Without Curse of Dimensionality

Yahong Yang & Yang Xiang

J. Mach. Learn., 1 (2022), pp. 342-372.

Published online: 2022-12

Category: Theory

[An open-access article; the PDF is free to any online user.]

  • Abstract

In this paper, we establish a neural network to approximate functionals, which are maps from infinite-dimensional spaces to finite-dimensional spaces. The approximation error of the neural network is $\mathcal{O}(1/\sqrt{m})$, where $m$ is the size of the network. In other words, the error bound, expressed in terms of the number of nodes in the neural network, does not depend on the dimensionality. The key idea of the approximation is to define a Barron space of functionals.
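For orientation, a minimal sketch of the classical finite-dimensional Barron-type bound that this rate echoes (standard Barron-space theory; the class $\mathcal{F}_m$, the norm $\|F\|_{\mathcal{B}}$, and the constant $C$ below are not taken from the paper itself):

$$\inf_{f_m \in \mathcal{F}_m} \|F - f_m\|_{L^2(\mu)} \;\le\; \frac{C\,\|F\|_{\mathcal{B}}}{\sqrt{m}},$$

where $\mathcal{F}_m$ is the class of two-layer networks with $m$ neurons, $\mu$ is a probability measure on the (finite-dimensional) input domain, $\|F\|_{\mathcal{B}}$ is the Barron norm, and $C$ is an absolute constant independent of the input dimension. The paper's contribution is to define a Barron space directly on functionals so that an analogous dimension-free bound holds when the input space is infinite dimensional.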

  • General Summary

Learning functionals or operators by neural networks is nowadays widely used in computational and applied mathematics. Compared with learning functions by neural networks, an essential difference is that the input spaces of functionals or operators are infinite dimensional. Some recent works learn functionals or operators by first reducing the input space to a finite-dimensional one. However, the curse of dimensionality is inherent in methods of this type: to maintain the accuracy of an approximation, the number of sample points must grow exponentially with the dimension, as the estimate below illustrates.
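For a quick sense of this scaling (a standard back-of-the-envelope estimate, not specific to this paper): sampling a function on a uniform grid of mesh size $h$ in $d$ dimensions requires $N = (1/h)^d$ points, so reaching accuracy $\epsilon \sim h$ costs $N \sim \epsilon^{-d}$ samples, which is exponential in $d$.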

In this paper, we establish a new method for the approximation of functionals by neural networks without the curse of dimensionality. Functionals, such as linear functionals and energy functionals, have a wide range of important applications in science and engineering. We define Fourier series of functionals and the associated Barron spectral space of functionals, on which our new neural network approximation method is built. The parameters and the network structure in our method depend only on the functional. The approximation error of the neural network is $\mathcal{O}(1/\sqrt{m})$, where $m$ is the size of the network, and this rate does not depend on the dimensionality.
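As a toy numerical illustration (our own sketch: the energy functional $F[u]=\int_0^1 u(x)^2\,dx$, the sine-coefficient input parametrization, and the random-feature training below are illustrative assumptions, not the authors' construction), a two-layer network acting on Fourier coefficients can learn a simple functional:

    import numpy as np

    # Toy sketch (not the paper's construction): approximate the energy
    # functional F[u] = \int_0^1 u(x)^2 dx by a two-layer ReLU network whose
    # input is the first K sine coefficients of u.
    rng = np.random.default_rng(0)
    K = 8      # number of Fourier modes representing u (assumption)
    m = 128    # network width; the "size of the network" in the O(1/sqrt(m)) rate

    def sample_coeffs(n):
        # Random smooth inputs: decaying coefficients a_k ~ U(-1, 1) / k^2.
        k = np.arange(1, K + 1)
        return rng.uniform(-1, 1, size=(n, K)) / k**2

    def functional(a):
        # For u(x) = sum_k a_k sin(2 pi k x), Parseval gives
        # F[u] = (1/2) * sum_k a_k^2.
        return 0.5 * np.sum(a**2, axis=1)

    # Random-feature two-layer network: hidden weights frozen, outer layer
    # fit by least squares.
    W = rng.normal(size=(m, K))
    b = rng.normal(size=m)

    def features(a):
        return np.maximum(a @ W.T + b, 0.0)  # ReLU features, shape (n, m)

    a_train = sample_coeffs(2000)
    c, *_ = np.linalg.lstsq(features(a_train), functional(a_train), rcond=None)

    a_test = sample_coeffs(500)
    err = np.sqrt(np.mean((features(a_test) @ c - functional(a_test)) ** 2))
    print(f"test RMSE with m={m} hidden units: {err:.2e}")

In such experiments, doubling $m$ typically reduces the error by roughly a factor of $\sqrt{2}$, consistent with the $\mathcal{O}(1/\sqrt{m})$ rate; the input dimension here is the number of retained Fourier modes, mirroring the paper's use of Fourier series of functionals.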

  • Keywords

Functionals, Neural networks, Infinite dimensional spaces, Barron spectral space, Fourier series.

  • Copyright

COPYRIGHT: © Global Science Press

  • Citation

Yang, Yahong and Xiang, Yang. (2022). Approximation of Functionals by Neural Network Without Curse of Dimensionality. Journal of Machine Learning, 1(4), 342-372. doi:10.4208/jml.221018