
Fast ConvNets Using Group-Wise Brain Damage


Abstract

We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speed up convolutional layers in ConvNets. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion. After such pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to a speedup. We investigate different ways to add group-wise pruning to the learning process, and show that severalfold speedups of convolutional layers can be attained using group-sparsity regularizers. Our approach can adjust the shapes of the receptive fields in the convolutional layers, and even prune excessive feature maps from ConvNets, all in a data-driven way.
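The pipeline the abstract describes — lowering convolution to a matrix multiplication, penalizing groups of kernel coefficients with a group-sparsity regularizer, then pruning whole groups so that the multiplication involves thinned dense matrices — can be illustrated with a minimal NumPy sketch. The tensor shapes, the regularization weight `lam`, and the pruning threshold `tau` below are illustrative assumptions, not values from the paper; the grouping (all output filters' coefficients at one input channel and kernel position) is one choice consistent with pruning columns of the im2col weight matrix.

```python
import numpy as np

# Hypothetical shapes for illustration (not from the paper's experiments).
C_out, C_in, KH, KW = 16, 8, 3, 3
W = np.random.randn(C_out, C_in, KH, KW)

# Group definition used here: one group per (input channel, kernel position),
# spanning all output filters, so pruning a group deletes an entire column
# of the im2col-style weight matrix.
groups = W.reshape(C_out, C_in * KH * KW)      # (C_out, G) with G groups
group_norms = np.linalg.norm(groups, axis=0)   # L2 norm of each group

# Group-lasso (2,1-norm) penalty a trainer would add to the loss:
# penalty = lam * sum_g ||W[:, g]||_2
lam = 1e-3
penalty = lam * group_norms.sum()

# After training with the penalty, prune groups whose norm fell below a
# threshold (tau is an assumed value for the sketch).
tau = 0.1
keep = group_norms > tau                       # boolean mask over groups

# Convolution as a thinned dense matrix multiplication: "patches" stands in
# for the im2col matrix, shape (G, num_spatial_positions). Keeping only the
# rows of surviving groups and the matching weight columns shrinks the GEMM.
num_pos = 25                                   # e.g. a 5x5 output grid
patches = np.random.randn(C_in * KH * KW, num_pos)
out = groups[:, keep] @ patches[keep, :]       # (C_out, num_pos), fewer FLOPs
print(f"kept {keep.sum()}/{keep.size} groups; output shape {out.shape}")
```

Because entire columns of the weight matrix (and the matching rows of the patch matrix) are removed, the remaining operands stay dense, which is why the speedup materializes on standard GEMM routines rather than requiring sparse kernels.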
