Volume 37, Issue 3
Quantization and Training of Low Bit-Width Convolutional Neural Networks for Object Detection

Penghang Yin, Shuai Zhang, Yingyong Qi & Jack Xin

J. Comp. Math., 37 (2019), pp. 349-359.

Published online: 2018-09

  • Abstract

We present LBW-Net, an efficient optimization-based method for quantization and training of low bit-width convolutional neural networks (CNNs). Specifically, we quantize the weights to zero or powers of 2 by minimizing the Euclidean distance between full-precision weights and quantized weights during backpropagation (weight learning). We characterize the combinatorial nature of the low bit-width quantization problem. For 2-bit (ternary) CNNs, the quantization of $N$ weights can be done by an exact formula in $O(N \log N)$ complexity. When the bit-width is 3 or above, we further propose a semi-analytical thresholding scheme with a single free parameter for quantization that is computationally inexpensive. The free parameter is then determined by network retraining and object detection tests. LBW-Net has several desirable advantages over full-precision CNNs, including considerable memory savings, energy efficiency, and faster deployment. Our experiments on the PASCAL VOC dataset [5] show that, compared with its 32-bit floating-point counterpart, the 6-bit LBW-Net is nearly lossless in object detection tasks and can even do better in real-world visual scenes, while empirically enjoying more than 4× faster deployment.
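
The ternary case reduces to a one-dimensional search after sorting the weight magnitudes, which is where the $O(N \log N)$ cost comes from. Below is a minimal NumPy sketch of the standard Euclidean projection onto $\{-\alpha, 0, +\alpha\}$ with the scale $\alpha$ chosen jointly with the ternary pattern; it is illustrative only and is not taken from the paper, whose exact formula (and any power-of-2 constraint on the scale) may differ in detail.

import numpy as np

def ternary_quantize(weights):
    """Euclidean projection of full-precision weights onto {-alpha, 0, +alpha},
    choosing the scale alpha jointly with the ternary support.

    Sketch only, not the paper's exact formula. Cost is O(N log N),
    dominated by the sort of the weight magnitudes.
    """
    w = np.asarray(weights, dtype=np.float64)
    order = np.sort(np.abs(w).ravel())[::-1]   # magnitudes in descending order
    csum = np.cumsum(order)                    # prefix sums of sorted magnitudes
    k = np.arange(1, order.size + 1)
    # For a support of size k, the optimal scale is the mean of the top-k
    # magnitudes, and minimizing the Euclidean error is equivalent to
    # maximizing csum[k-1]^2 / k over k.
    k_star = int(np.argmax(csum**2 / k)) + 1
    alpha = csum[k_star - 1] / k_star
    thresh = order[k_star - 1]
    # keep the k_star largest-magnitude weights at +/- alpha, zero out the rest
    q = np.where(np.abs(w) >= thresh, np.sign(w) * alpha, 0.0)
    return q, alpha

In a training loop, such a projection would typically be applied per layer during backpropagation while a full-precision copy of the weights is kept for the gradient updates, as is common in quantized-network training; the higher bit-width thresholding scheme with its single free parameter is described in the paper itself.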

  • AMS Subject Headings

90C26, 90C10, 90C90.

  • Copyright

© Global Science Press

  • Email address

yph@ucla.edu (Penghang Yin)

szhang3@uci.edu (Shuai Zhang)

yqi@uci.edu (Yingyong Qi)

jxin@math.uci.edu (Jack Xin)

  • BibTeX
  • RIS
  • TXT
@Article{JCM-37-349,
  author  = {Yin, Penghang and Zhang, Shuai and Qi, Yingyong and Xin, Jack},
  title   = {Quantization and Training of Low Bit-Width Convolutional Neural Networks for Object Detection},
  journal = {Journal of Computational Mathematics},
  year    = {2018},
  volume  = {37},
  number  = {3},
  pages   = {349--359},
  issn    = {1991-7139},
  doi     = {10.4208/jcm.1803-m2017-0301},
  url     = {http://global-sci.org/intro/article_detail/jcm/12726.html}
}
TY  - JOUR
T1  - Quantization and Training of Low Bit-Width Convolutional Neural Networks for Object Detection
AU  - Yin, Penghang
AU  - Zhang, Shuai
AU  - Qi, Yingyong
AU  - Xin, Jack
JO  - Journal of Computational Mathematics
VL  - 37
IS  - 3
SP  - 349
EP  - 359
PY  - 2018
DA  - 2018/09
SN  - 1991-7139
DO  - 10.4208/jcm.1803-m2017-0301
UR  - https://global-sci.org/intro/article_detail/jcm/12726.html
KW  - Quantization, Low bit-width deep neural networks, Exact and approximate analytical formulas, Network training, Object detection
ER  -
Yin, Penghang, Zhang, Shuai, Qi, Yingyong and Xin, Jack. (2018). Quantization and Training of Low Bit-Width Convolutional Neural Networks for Object Detection. Journal of Computational Mathematics. 37 (3). 349-359. doi:10.4208/jcm.1803-m2017-0301