Quantization and Training of Low Bit-Width Convolutional Neural Networks for Object Detection

Year:    2019

Authors:    Penghang Yin, Shuai Zhang, Yingyong Qi, Jack Xin

Abstract

We present LBW-Net, an efficient optimization-based method for quantization and training of low bit-width convolutional neural networks (CNNs). Specifically, we quantize the weights to zero or powers of 2 by minimizing the Euclidean distance between the full-precision weights and the quantized weights during backpropagation (weight learning). We characterize the combinatorial nature of the low bit-width quantization problem. For 2-bit (ternary) CNNs, the quantization of $N$ weights can be done by an exact formula in $O(N \log N)$ complexity. When the bit-width is 3 or above, we further propose a semi-analytical thresholding scheme with a single free parameter for quantization that is computationally inexpensive. The free parameter is then determined by network retraining and object detection tests. LBW-Net has several desirable advantages over full-precision CNNs, including considerable memory savings, energy efficiency, and faster deployment. Our experiments on the PASCAL VOC dataset [5] show that, compared with its 32-bit floating-point counterpart, the 6-bit LBW-Net is nearly lossless in object detection tasks, and can even do better in real-world visual scenes, while empirically enjoying more than 4× faster deployment.

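For concreteness, the 2-bit (ternary) case described above admits a short implementation: each weight tensor is projected onto the set {-α, 0, +α} in the Euclidean sense, and the optimal support is found by a single scan over the sorted magnitudes, which is where the $O(N \log N)$ cost comes from. The NumPy sketch below is an illustration consistent with the abstract, not the authors' code; the function name ternarize_exact and the prefix-sum formulation are our assumptions.

    import numpy as np

    def ternarize_exact(w):
        """Project full-precision weights w onto {-alpha, 0, +alpha} in the
        Euclidean sense; returns the optimal scale alpha and the ternary tensor t."""
        w = np.asarray(w, dtype=np.float64)
        mags = np.sort(np.abs(w).ravel())[::-1]   # magnitudes, descending: O(N log N)
        csum = np.cumsum(mags)                    # prefix sums of the k largest magnitudes
        k = np.arange(1, mags.size + 1)
        # Keeping the k largest magnitudes with scale alpha_k = csum[k-1] / k lowers the
        # squared error by csum[k-1]**2 / k, so the exact minimizer maximizes this gain.
        gain = csum ** 2 / k
        k_star = int(np.argmax(gain)) + 1
        alpha = csum[k_star - 1] / k_star
        threshold = mags[k_star - 1]              # smallest magnitude kept nonzero
        t = np.sign(w) * (np.abs(w) >= threshold)
        return alpha, t

    # Hypothetical usage: conv_weights stands in for any full-precision weight tensor.
    # alpha, t = ternarize_exact(conv_weights)
    # w_quantized = alpha * t                     # 2-bit weights in {-alpha, 0, +alpha}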

Journal Article Details

Publisher Name:    Global Science Press

Language:    English

DOI:    https://doi.org/10.4208/jcm.1803-m2017-0301

Journal of Computational Mathematics, Vol. 37 (2019), Iss. 3 : pp. 349–359

Published online:    2019-01

Copyright:    © Global Science Press

Pages:    11

Keywords:    Quantization; low bit-width deep neural networks; exact and approximate analytical formulas; network training; object detection

Author Details

Penghang Yin

Shuai Zhang

Yingyong Qi

Jack Xin
