CSIAM Trans. Appl. Math., 1 (2020), pp. 518-529.
Published online: 2020-09
In most convolutional neural networks (CNNs), hidden layers are downsampled to increase computational efficiency and enlarge the receptive field. This operation is commonly called pooling. Popular pooling methods include maximization and averaging over sliding windows (max/average pooling) and plain downsampling in the form of strided convolution. Since pooling is a lossy procedure, our work is motivated by the goal of designing a pooling approach that loses less information during dimensionality reduction. Inspired by the spectral pooling proposed by Rippel et al. [1], we present a Hartley-transform-based spectral pooling method. Compared with Fourier pooling, the proposed spectral pooling avoids the use of complex arithmetic in the frequency representation. The new approach also preserves more structural features, and hence more of the network's discriminability, than max and average pooling. We empirically show that Hartley pooling improves the convergence of training CNNs on the MNIST and CIFAR-10 datasets.
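The spectral-pooling idea described in the abstract can be sketched in a few lines: transform a feature map with the 2D discrete Hartley transform (which is real-valued for real input, so no complex arithmetic is needed), crop a centered low-frequency block, and transform back. The sketch below, assuming a single-channel NumPy array and using the fact that the 2D DHT can be computed as Re(FFT) − Im(FFT) and is self-inverse up to a scale factor, is an illustration of the technique rather than the authors' exact implementation; the function names are ours.

```python
import numpy as np

def dht2(x):
    # 2D discrete Hartley transform via the FFT: H = Re(F) - Im(F).
    # For real input the result is real, so no complex numbers propagate.
    F = np.fft.fft2(x)
    return F.real - F.imag

def hartley_pool(x, out_h, out_w):
    # Spectral pooling in the Hartley domain: keep only the centered
    # low-frequency block of the spectrum, then invert the transform.
    M, N = x.shape
    H = np.fft.fftshift(dht2(x))          # move DC to the center
    top = M // 2 - out_h // 2
    left = N // 2 - out_w // 2
    crop = H[top:top + out_h, left:left + out_w]
    # The DHT is self-inverse up to 1/(M*N); normalizing by the ORIGINAL
    # size preserves the amplitude of the retained low frequencies.
    return dht2(np.fft.ifftshift(crop)) / (M * N)
```

For example, pooling an 8×8 constant feature map down to 4×4 returns the same constant, since all of its energy sits in the retained DC coefficient; higher-frequency content is what gets discarded.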
ISSN: 2708-0579
DOI: https://doi.org/10.4208/csiam-am.2020-0018
URL: http://global-sci.org/intro/article_detail/csiam-am/18306.html