Year: 2019
Authors: Penghang Yin, Shuai Zhang, Yingyong Qi, Jack Xin
Journal of Computational Mathematics, Vol. 37 (2019), Iss. 3 : pp. 349–359
Abstract
We present LBW-Net, an efficient optimization-based method for the quantization and training of low bit-width convolutional neural networks (CNNs). Specifically, we quantize the weights to zero or powers of 2 by minimizing the Euclidean distance between the full-precision weights and the quantized weights during backpropagation (weight learning). We characterize the combinatorial nature of the low bit-width quantization problem. For 2-bit (ternary) CNNs, the quantization of $N$ weights can be done by an exact formula in $O(N \log N)$ complexity. When the bit-width is 3 or above, we further propose a semi-analytical thresholding scheme with a single free parameter for quantization that is computationally inexpensive. The free parameter is determined by network retraining and object detection tests. LBW-Net has several desirable advantages over full-precision CNNs, including considerable memory savings, energy efficiency, and faster deployment. Our experiments on the PASCAL VOC dataset [5] show that, compared with its 32-bit floating-point counterpart, the performance of the 6-bit LBW-Net is nearly lossless in object detection tasks, and can even be better in real-world visual scenes, while empirically enjoying more than 4× faster deployment.
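To make the 2-bit case concrete, below is a minimal NumPy sketch of exact ternary quantization as described in the abstract: minimize $\|w - \alpha q\|_2$ over a scale $\alpha \ge 0$ and $q \in \{-1, 0, +1\}^N$, with sorting giving the stated $O(N \log N)$ cost. This is an illustrative reconstruction under those assumptions, not the authors' released code; the function name and tie-breaking are hypothetical, and the paper further restricts quantized values to zero or powers of 2, whereas $\alpha$ is left as a free scale here for simplicity.

```python
import numpy as np

def ternary_quantize(w):
    """Sketch of exact ternary quantization (assumed formulation):
    minimize ||w - alpha * q||_2 over alpha >= 0, q in {-1, 0, +1}^N.
    For a fixed support of k nonzeros, the optimum keeps the k largest
    magnitudes with alpha equal to their mean, so it suffices to maximize
    (sum of top-k |w_i|)^2 / k over k. Sorting dominates: O(N log N)."""
    a = np.sort(np.abs(w.ravel()))[::-1]    # magnitudes in descending order
    csum = np.cumsum(a)                     # prefix sums of sorted magnitudes
    k = np.arange(1, a.size + 1)
    score = csum**2 / k                     # objective reduction for each support size k
    k_star = int(np.argmax(score)) + 1      # optimal number of nonzero quantized weights
    t = a[k_star - 1]                       # threshold (ties broken inclusively here)
    alpha = csum[k_star - 1] / k_star       # optimal scale = mean of the kept magnitudes
    q = np.where(np.abs(w) >= t, np.sign(w), 0.0)
    return alpha, q

# Usage: quantize a random conv-layer-shaped tensor and report the relative fit
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 3, 3, 3))
alpha, q = ternary_quantize(w)
print(alpha, np.linalg.norm(w - alpha * q) / np.linalg.norm(w))
```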
Journal Article Details
Publisher Name: Global Science Press
Language: English
DOI: https://doi.org/10.4208/jcm.1803-m2017-0301
Published online: 2019-01
Copyright: © Global Science Press
Pages: 11
Keywords: Quantization, Low bit-width deep neural networks, Exact and approximate analytical formulas, Network training, Object detection.