Journal: IEEE Transactions on Neural Networks and Learning Systems

Training Lightweight Deep Convolutional Neural Networks Using Bag-of-Features Pooling


Abstract

Convolutional neural networks (CNNs) are widely used for challenging computer vision tasks, where they achieve state-of-the-art performance. However, CNNs are complex models that require powerful hardware both for training and for deployment. To this end, a quantization-based pooling method is proposed in this paper. The proposed method is inspired by the bag-of-features model and can be used to learn more lightweight deep neural networks. Trainable radial basis function neurons are used to quantize the activations of the final convolutional layer, reducing the number of parameters in the network and allowing images of various sizes to be classified natively. The proposed method employs differentiable quantization and aggregation layers, leading to an end-to-end trainable CNN architecture. Furthermore, a fast linear variant of the proposed method is introduced and discussed, providing new insight into convolutional neural architectures. The ability of the proposed method to reduce the size of CNNs and to outperform other competitive methods is demonstrated using seven data sets and three different learning tasks (classification, regression, and retrieval).
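
The quantization-and-aggregation pooling described in the abstract can be illustrated with a short sketch. The following is a minimal PyTorch-style example, not the authors' implementation: the class name, the RBF-response normalization used for soft assignment, and the per-codeword width parameters are illustrative assumptions based only on the abstract.

```python
import torch
import torch.nn as nn


class BagOfFeaturesPooling(nn.Module):
    """Quantization-based pooling sketch: trainable RBF codewords quantize the
    activations of the final convolutional layer, and the resulting soft
    membership vectors are averaged over all spatial positions into a
    fixed-length histogram, independent of the input image size."""

    def __init__(self, in_channels: int, num_codewords: int):
        super().__init__()
        # Trainable RBF centers (codewords) and per-codeword width parameters.
        self.codewords = nn.Parameter(torch.randn(num_codewords, in_channels))
        self.sigma = nn.Parameter(torch.ones(num_codewords))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) -> (batch, H*W, channels)
        b, c, h, w = x.shape
        feats = x.permute(0, 2, 3, 1).reshape(b, h * w, c)

        # Euclidean distance between every feature vector and every codeword.
        dists = torch.cdist(feats, self.codewords.unsqueeze(0).expand(b, -1, -1))

        # RBF neuron responses, then L1 normalization: differentiable soft quantization.
        sims = torch.exp(-dists / (self.sigma.abs() + 1e-8))          # (b, H*W, K)
        memberships = sims / (sims.sum(dim=-1, keepdim=True) + 1e-8)  # (b, H*W, K)

        # Aggregation layer: average memberships over spatial positions
        # to obtain one fixed-length histogram per image.
        return memberships.mean(dim=1)                                # (b, K)


# Example: the histogram replaces the flattened feature map, so the classifier
# head stays small and the network accepts inputs of any spatial resolution.
if __name__ == "__main__":
    backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
    pool = BagOfFeaturesPooling(in_channels=64, num_codewords=32)
    head = nn.Linear(32, 10)
    for size in (32, 48):  # different image sizes, same output shape
        images = torch.randn(4, 3, size, size)
        logits = head(pool(backbone(images)))
        print(logits.shape)  # torch.Size([4, 10])
```

Because the histogram length equals the number of codewords, the fully connected head no longer depends on the spatial size of the feature map, which is how this kind of pooling reduces the parameter count and allows variable-sized inputs.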
