IEEE Conference on Computer Vision and Pattern Recognition

Deep Roots: Improving CNN Efficiency with Hierarchical Filter Groups



Abstract

We propose a new method for creating computationally efficient and compact convolutional neural networks (CNNs) using a novel sparse connection structure that resembles a tree root. This allows a significant reduction in computational cost and number of parameters compared to state-of-the-art deep CNNs, without compromising accuracy, by exploiting the sparsity of inter-layer filter dependencies. We validate our approach by using it to train more efficient variants of state-of-the-art CNN architectures, evaluated on the CIFAR10 and ILSVRC datasets. Our results show similar or higher accuracy than the baseline architectures with much less computation, as measured by CPU and GPU timings. For example, for ResNet 50, our model has 40% fewer parameters, 45% fewer floating point operations, and is 31% (12%) faster on a CPU (GPU). For the deeper ResNet 200 our model has 48% fewer parameters and 27% fewer floating point operations, while maintaining state-of-the-art accuracy. For GoogLeNet, our model has 7% fewer parameters and is 21% (16%) faster on a CPU (GPU).
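The "tree root" structure the abstract describes can be pictured with grouped convolutions. Below is a minimal sketch, assuming PyTorch; it is not the authors' code, and the RootModule name and layer sizes are illustrative. A grouped 3x3 convolution restricts each filter to a subset of input channels (the sparse inter-layer dependencies the abstract exploits), and a following 1x1 convolution mixes information across groups; the parameter count drops accordingly.

```python
import torch.nn as nn

class RootModule(nn.Module):
    """Hypothetical sketch of a root-style block: a grouped 3x3 convolution,
    where each filter sees only in_channels/groups input channels, followed
    by a 1x1 convolution that mixes channels across the groups."""
    def __init__(self, in_channels, out_channels, groups):
        super().__init__()
        self.grouped = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                                 padding=1, groups=groups, bias=False)
        self.mix = nn.Conv2d(out_channels, out_channels, kernel_size=1,
                             bias=False)

    def forward(self, x):
        return self.mix(self.grouped(x))

def param_count(module):
    return sum(p.numel() for p in module.parameters())

# Dense 3x3 convolution versus a root block with 4 filter groups:
dense = nn.Conv2d(256, 256, kernel_size=3, padding=1, bias=False)
root = RootModule(256, 256, groups=4)
print(param_count(dense))  # 256*256*9            = 589,824
print(param_count(root))   # 256*64*9 + 256*256   = 212,992
```

With 4 groups, the 3x3 weights shrink by a factor of 4, and even after adding the 1x1 mixing layer the block has roughly a third of the dense convolution's parameters, which is the flavor of saving the abstract reports for ResNet and GoogLeNet variants.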


