Deep neural networks with rectified linear units (ReLU) have become increasingly
popular. However, the derivatives of the function represented by a ReLU network are
not continuous, which limits the use of ReLU networks to situations where smoothness
is not required. In this paper, we construct deep neural networks with rectified power
units (RePU), which can give better approximations of smooth functions. Optimal
algorithms are proposed to explicitly build neural networks with sparsely connected
RePUs, which we call PowerNets, that represent polynomials with no approximation
error. For a general smooth function, we first project it onto its polynomial
approximation and then use the proposed algorithms to construct the corresponding
PowerNet. Thus, the error of the best polynomial approximation provides an upper
bound on the best RePU network approximation error. For smooth functions in
higher-dimensional Sobolev spaces, we use fast spectral transforms on tensor-product
grids and sparse grids to obtain the polynomial approximations. Our constructive
algorithms reveal a close connection between spectral methods and deep neural
networks: a PowerNet with $n$ hidden layers can exactly represent polynomials of
degree up to $s^n$, where $s$ is the power of the RePU. The proposed PowerNets have
potential applications in situations where high accuracy is desired or smoothness is
required.
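
As a minimal illustration (not the paper's general construction), consider the case
$s=2$, where the RePU is the squared ReLU $\sigma_2(x)=\max(0,x)^2$; the notation
$\sigma_2$ is introduced here for convenience. A single hidden layer of such units
reproduces squares and products exactly,
\[
\sigma_2(x) + \sigma_2(-x) = x^2,
\qquad
xy = \tfrac{1}{4}\bigl[\sigma_2(x+y) + \sigma_2(-x-y) - \sigma_2(x-y) - \sigma_2(-x+y)\bigr],
\]
and composing $n$ such layers reproduces monomials of degree up to $2^n = s^n$ with
no approximation error, consistent with the degree bound stated above.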