IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference

Residual Squeeze CNDS Deep Learning CNN Model for Very Large Scale Places Image Recognition


Abstract

Deep convolutional neural network models have achieved great success in recent years. However, optimizing the size of a deep network and the time needed to train it remains a research area that needs much improvement. In this paper, we address the issues of speed and size by proposing a compressed convolutional neural network model, namely Residual Squeeze CNDS. The proposed model compresses the earlier, very successful Residual-CNDS network and further improves on the following aspects: (1) small model size, (2) faster speed, (3) use of residual learning for faster convergence and better generalization, which also resolves the degradation problem, and (4) recognition accuracy matching that of the non-compressed model on the very large-scale grand challenge MIT Places 365-Standard scene dataset. In comparison to Residual-CNDS, the proposed model is 87.64% smaller in size and 13.33% faster in training time. This supports our claim that the proposed model inherits the best aspects of the Residual-CNDS model and further improves upon it. Moreover, we present our attempt at a more disciplined approach to searching the design space for novel CNN architectures. In comparison to SqueezeNet, our proposed framework can be more easily adapted and fully integrated with residual learning for compressing various other contemporary deep learning convolutional neural network models.
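
The abstract does not spell out the layer-level design, but the core idea it describes can be illustrated with a small sketch: a SqueezeNet-style "fire" block (a 1x1 squeeze convolution followed by parallel 1x1/3x3 expand convolutions) wrapped in a residual shortcut. The block below is a minimal, hypothetical PyTorch illustration under those assumptions, not the authors' implementation; the class name ResidualFireBlock and all channel counts are invented for the example only.

```python
# Minimal sketch: SqueezeNet-style squeeze/expand compression combined with
# a residual (identity) shortcut, as the abstract describes conceptually.
# All layer sizes are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn


class ResidualFireBlock(nn.Module):
    def __init__(self, channels: int, squeeze_channels: int):
        super().__init__()
        # Squeeze: 1x1 convolution reduces the channel count (model compression).
        self.squeeze = nn.Conv2d(channels, squeeze_channels, kernel_size=1)
        # Expand: 1x1 and 3x3 convolutions restore the original channel count.
        self.expand1x1 = nn.Conv2d(squeeze_channels, channels // 2, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_channels, channels // 2, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x
        out = self.relu(self.squeeze(x))
        out = torch.cat([self.expand1x1(out), self.expand3x3(out)], dim=1)
        # Residual shortcut: adding the input back eases optimization of deep
        # networks (faster convergence, less degradation), per the abstract.
        return self.relu(out + identity)


if __name__ == "__main__":
    block = ResidualFireBlock(channels=128, squeeze_channels=16)
    x = torch.randn(1, 128, 56, 56)
    print(block(x).shape)  # torch.Size([1, 128, 56, 56])
```

In this sketch the squeeze layer is what shrinks the parameter count, while the identity shortcut is what the abstract credits for faster convergence and for avoiding the degradation seen in very deep plain networks.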
