Chinese Conference on Pattern Recognition and Computer Vision

Training Low Bitwidth Model with Weight Normalization for Convolutional Neural Networks



Abstract

Convolutional Neural Networks (CNNs) are now widely utilized in computer vision applications, including image classification, object detection, and segmentation. However, their high memory footprint and computational intensity have limited deployment on low-power embedded devices. We propose a method to train convolutional neural networks at low bitwidth by performing weight normalization. Normalization narrows the distribution of the weights, which enables the low-bitwidth network to achieve a good trade-off between range and precision. Moreover, adding a scaling factor to the weights solves the problem of inadequate expressiveness at low bitwidths, further improving classification performance. Experiments on various datasets show that our method achieves prediction accuracy comparable to that of full-precision models. Notably, the proposed scheme can quantize AlexNet to 3-bit fixed point on ImageNet with a top-1 accuracy drop of only 1%.
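The general idea described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' exact algorithm: weights are first normalized so their distribution is narrowed into [-1, 1], then mapped onto a uniform k-bit fixed-point grid, and a floating-point scaling factor restores the original magnitude to compensate for the limited expressiveness at low bitwidth. The function name and the uniform rounding scheme are assumptions for illustration.

```python
def quantize_weights(weights, k=3):
    """Illustrative k-bit weight quantization with normalization.

    A minimal sketch, assuming max-absolute-value normalization and
    uniform rounding; the paper's actual normalization and quantizer
    may differ.
    """
    # Normalize by the largest magnitude -> all values land in [-1, 1],
    # narrowing the weight distribution before quantization.
    max_abs = max(abs(w) for w in weights)
    if max_abs == 0.0:
        return list(weights), 1.0
    normalized = [w / max_abs for w in weights]

    # Uniform k-bit fixed point: 2^(k-1) - 1 levels on each side of zero
    # (for k=3 the grid is {-1, -2/3, -1/3, 0, 1/3, 2/3, 1}).
    levels = 2 ** (k - 1) - 1
    quantized = [round(w * levels) / levels for w in normalized]

    # The scaling factor carries the magnitude lost during normalization;
    # at inference the effective weight is scale * quantized value.
    scale = max_abs
    return quantized, scale


q, s = quantize_weights([0.7, -0.2, 0.05, -0.9], k=3)
# Each original weight is approximated by s * q_i from the 3-bit grid.
```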
