Volume 53, Issue 2
PowerNet: Efficient Representations of Polynomials and Smooth Functions by Deep Neural Networks with Rectified Power Units

Bo Li, Shanshan Tang & Haijun Yu

J. Math. Study, 53 (2020), pp. 159-191.

Published online: 2020-05

  • Abstract

Deep neural networks with rectified linear units (ReLU) have become increasingly popular in recent years. However, the function represented by a ReLU network has discontinuous derivatives, which limits the use of ReLU networks to situations where smoothness is not required. In this paper, we construct deep neural networks with rectified power units (RePU), which give better approximations of smooth functions. We propose optimal algorithms that explicitly build sparsely connected RePU networks, which we call PowerNets, to represent polynomials with no approximation error. For a general smooth function, we first project it onto a polynomial approximation and then use the proposed algorithms to construct the corresponding PowerNet. Thus, the error of the best polynomial approximation provides an upper bound on the best RePU network approximation error. For smooth functions in higher-dimensional Sobolev spaces, we use fast spectral transforms on tensor-product grid and sparse grid discretizations to obtain polynomial approximations. Our constructive algorithms show a close connection between spectral methods and deep neural networks: a PowerNet with $n$ hidden layers can exactly represent polynomials up to degree $s^n$, where $s$ is the power of the RePU. The proposed PowerNets have potential applications in situations where high accuracy is desired or smoothness is required.
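As an illustration of why RePU activations can reproduce polynomials exactly (shown here for power $s=2$; these are standard identities, and the specific recurrences used to assemble PowerNets in the paper may differ), a single hidden layer of RePU units can represent squares and products without error:

$$ \sigma_2(x) := \big(\max(0,x)\big)^2, \qquad x^2 = \sigma_2(x) + \sigma_2(-x), \qquad xy = \tfrac{1}{4}\big[\sigma_2(x+y) + \sigma_2(-x-y) - \sigma_2(x-y) - \sigma_2(y-x)\big]. $$

Composing such layers multiplies the attainable polynomial degree by $s$ at each stage, consistent with the statement above that $n$ hidden layers suffice for exact representation of polynomials up to degree $s^n$.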

  • AMS Subject Headings

65M12, 65M15, 65P40

  • Copyright

COPYRIGHT: © Global Science Press

  • Email address

libo1171309228@lsec.cc.ac.cn (Bo Li)

tangshanshan@lsec.cc.ac.cn (Shanshan Tang)

hyu@lsec.cc.ac.cn (Haijun Yu)

  • Keywords

Deep neural network, rectified linear unit, rectified power unit, sparse grid, PowerNet.

  • DOI

https://doi.org/10.4208/jms.v53n2.20.03

Li, Bo, Tang, Shanshan and Yu, Haijun. (2020). PowerNet: Efficient Representations of Polynomials and Smooth Functions by Deep Neural Networks with Rectified Power Units. Journal of Mathematical Study. 53 (2). 159-191. doi:10.4208/jms.v53n2.20.03