Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology

A CNN channel pruning low-bit framework using weight quantization with sparse group lasso regularization



Abstract

The deployment of large-scale Convolutional Neural Networks (CNNs) on power-limited devices is hindered by their high computation and storage costs. In this paper, we propose a novel framework for CNNs that simultaneously achieves channel pruning and low-bit quantization by combining weight quantization with Sparse Group Lasso (SGL) regularization. We model this framework as a discretely constrained problem and solve it with the Alternating Direction Method of Multipliers (ADMM). Unlike previous approaches, the proposed method reduces not only model size but also computational operations. In the experimental section, we evaluate the proposed framework on the CIFAR datasets with several popular models, such as VGG-7/16/19 and ResNet-18/34/50; the results demonstrate that the proposed method obtains low-bit networks and dramatically reduces redundant channels with only a slight loss of inference accuracy. Furthermore, we visualize and analyze the weight tensors, which reveals their compact group-sparsity structure.
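The "discretely constrained problem" solved by ADMM typically has the following shape. This is a sketch of the standard scaled-form ADMM splitting for quantization-constrained training with an SGL penalty, not necessarily the paper's exact notation: f is the task loss, R_SGL the Sparse Group Lasso penalty over weight groups g of size p_g, and C the set of admissible low-bit weight values.

```latex
\min_{W}\; f(W) + \lambda\, R_{\mathrm{SGL}}(W)
\quad \text{s.t.}\quad W \in \mathcal{C},
\qquad
R_{\mathrm{SGL}}(W) = (1-\alpha)\sum_{g}\sqrt{p_g}\,\lVert W_g\rVert_2
  + \alpha\,\lVert W\rVert_1 .

% Introduce an auxiliary variable G = W with G \in \mathcal{C};
% the scaled-form ADMM updates then alternate:
W^{k+1} = \arg\min_{W}\; f(W) + \lambda\, R_{\mathrm{SGL}}(W)
  + \tfrac{\rho}{2}\lVert W - G^{k} + U^{k}\rVert_2^2 ,
\qquad
G^{k+1} = \Pi_{\mathcal{C}}\!\left(W^{k+1} + U^{k}\right),
\qquad
U^{k+1} = U^{k} + W^{k+1} - G^{k+1},
```

where Π_C denotes elementwise projection onto the quantized value set. The SGL term drives whole groups of weights to zero (enabling channel pruning) while the constraint keeps the surviving weights low-bit.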
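As a concrete illustration of the regularizer, here is a minimal PyTorch-style sketch of a channel-wise SGL penalty. The grouping (one group per output channel, so whole channels can be pruned), the name sparse_group_lasso, and the coefficients lam_group / lam_l1 are illustrative assumptions, not values from the paper.

```python
import torch

def sparse_group_lasso(weight: torch.Tensor,
                       lam_group: float = 1e-4,
                       lam_l1: float = 1e-5) -> torch.Tensor:
    """Channel-wise Sparse Group Lasso penalty for a conv weight tensor.

    The group (L2,1) term drives entire output channels toward zero,
    enabling channel pruning; the elementwise L1 term additionally
    sparsifies the weights that survive.
    """
    out_channels = weight.shape[0]
    groups = weight.reshape(out_channels, -1)    # one row per output channel
    group_term = groups.norm(dim=1).sum()        # sum of per-channel L2 norms
    l1_term = weight.abs().sum()                 # plain elementwise L1
    return lam_group * group_term + lam_l1 * l1_term


# Hypothetical usage: add the penalty over all conv layers to the task loss.
# `model`, `criterion`, `x`, and `y` are placeholders for a training setup.
def regularized_loss(model, criterion, x, y):
    loss = criterion(model(x), y)
    for p in model.parameters():
        if p.dim() == 4:                         # conv weight tensors only
            loss = loss + sparse_group_lasso(p)
    return loss
```

After training, channels whose group norm falls below a threshold can be removed outright, which is what reduces computational operations rather than just model size, as the abstract notes.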
