
Learning to Slim Deep Networks with Bandit Channel Pruning



Abstract

In recent years, deep neural networks have achieved great success in machine vision, natural language processing, and reinforcement learning. However, deploying these models on embedded devices and large clusters faces the challenges of high energy consumption and low efficiency. In this paper, we propose an effective approach named Bandit Channel Pruning (BCP) to accelerate neural networks by channel-level pruning. Inspired by AutoML, we use a Multi-Armed Bandit (MAB) method to explore and exploit the impact of each channel on model performance. Specifically, we use the loss value of the model's output as a penalty term to find the set of redundant channels. In addition, we prove that the change in this loss value can be used as a criterion of channel redundancy. We analyze the complexity of BCP and give an upper bound on the number of searches. Our approach is validated with several deep neural networks, including VGGNet, ResNet56, and ResNet110, on different image classification datasets. Extensive experiments on these models and datasets demonstrate that the performance of this method is better than that of state-of-the-art channel pruning methods.
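To make the bandit formulation concrete, here is a minimal sketch, not the authors' implementation, of treating each channel as an arm of a multi-armed bandit: pulling an arm temporarily masks that channel, and the observed change in loss serves as the redundancy signal the abstract describes. The UCB1 arm-selection rule, the toy linear "network", and all names (`batch_loss`, `counts`, `values`) are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: UCB1 bandit over channels; the paper's exact bandit
# and reward design may differ. A toy linear model stands in for the network.
rng = np.random.default_rng(0)
n_channels, n_samples = 16, 256
X = rng.normal(size=(n_samples, n_channels))
w_true = np.zeros(n_channels)
w_true[:4] = rng.normal(size=4)                  # only 4 channels carry signal
y = X @ w_true + 0.01 * rng.normal(size=n_samples)
w = w_true + 0.05 * rng.normal(size=n_channels)  # stand-in "trained" weights

def batch_loss(mask, idx):
    """MSE on a mini-batch with the given channel mask applied."""
    return np.mean((X[idx] @ (w * mask) - y[idx]) ** 2)

counts = np.zeros(n_channels)                    # pulls per arm (channel)
values = np.zeros(n_channels)                    # running mean loss increase

for t in range(1, 2001):
    # UCB1 score: prefer channels whose removal has shown a small loss
    # increase (likely redundant), plus an exploration bonus for rare arms.
    bonus = np.sqrt(2.0 * np.log(t) / np.maximum(counts, 1e-9))
    ucb = -values + bonus
    ucb[counts == 0] = np.inf                    # pull every arm at least once
    arm = int(np.argmax(ucb))

    mask = np.ones(n_channels)
    mask[arm] = 0.0                              # temporarily prune this channel
    idx = rng.choice(n_samples, size=32, replace=False)
    reward = batch_loss(mask, idx) - batch_loss(np.ones(n_channels), idx)

    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

# Channels whose removal barely changes the loss are declared redundant.
redundant = np.argsort(values)[: n_channels - 4]
print("estimated redundant channels:", sorted(redundant.tolist()))
```

In the actual BCP setting, the reward would come from forward passes of the pruned network on validation batches, and the set of redundant channels would be chosen by the loss-change criterion the paper proves, rather than by the fixed count used in this sketch.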
