IEEE/ACM International Symposium on Low Power Electronics and Design

Gabor filter assisted energy efficient fast learning Convolutional Neural Networks



Abstract

Convolutional Neural Networks (CNNs) are increasingly used in computer vision for a wide range of classification and recognition problems. However, training these large networks demands high computational time and energy; hence, their energy-efficient implementation is of great interest. In this work, we reduce the training complexity of CNNs by replacing certain weight kernels of a CNN with Gabor filters. The convolutional layers use the Gabor filters as fixed weight kernels, which extract intrinsic features, alongside regular trainable weight kernels. This combination creates a balanced system that gives better training performance in terms of energy and time, compared to the standalone CNN (without any Gabor kernels), in exchange for a tolerable accuracy degradation. We show that the accuracy degradation can be mitigated by partially training the Gabor kernels for a small fraction of the total training cycles. We evaluated the proposed approach on four benchmark applications. Simpler tasks such as face detection and character recognition (MNIST and TiCH) were implemented using the LeNet architecture, while a more complex object recognition task (CIFAR10) was implemented on a state-of-the-art deep CNN (Network in Network) architecture. The proposed approach yields a 1.31–1.53× improvement in training energy compared to the conventional CNN implementation. We also obtain improvements of up to 1.4× in training time, up to 2.23× in storage requirements, and up to 2.2× in memory access energy. The accuracy degradation suffered by the approximate implementations is within 0–3% of the baseline.
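To illustrate the scheme the abstract describes, the sketch below builds a convolutional layer in which a subset of the output channels is produced by fixed Gabor kernels and the remaining channels by ordinary trainable kernels. This is not the authors' implementation: the use of PyTorch, the layer name GaborAssistedConv, and the particular Gabor parameters (sigma, lambda, orientations) are assumptions made for the example.

```python
# Minimal sketch (assumed PyTorch setup, not the authors' code) of a conv layer
# mixing fixed Gabor kernels with regular trainable kernels.
import math
import torch
import torch.nn as nn


def gabor_kernel(size, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Build one size x size Gabor kernel from the standard formula (odd size assumed)."""
    half = size // 2
    ys, xs = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=torch.float32),
        torch.arange(-half, half + 1, dtype=torch.float32),
        indexing="ij",
    )
    x_t = xs * math.cos(theta) + ys * math.sin(theta)      # rotate coordinates
    y_t = -xs * math.sin(theta) + ys * math.cos(theta)
    envelope = torch.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
    carrier = torch.cos(2 * math.pi * x_t / lambd + psi)
    return envelope * carrier


class GaborAssistedConv(nn.Module):
    """Hypothetical layer: part of the output channels come from fixed Gabor
    kernels, the rest from ordinary trainable kernels."""

    def __init__(self, in_ch, gabor_ch, trainable_ch, ksize=5):
        super().__init__()
        # Fixed branch: weights set to Gabor filters at evenly spaced orientations.
        self.fixed = nn.Conv2d(in_ch, gabor_ch, ksize, padding=ksize // 2, bias=False)
        thetas = [i * math.pi / gabor_ch for i in range(gabor_ch)]
        with torch.no_grad():
            for o, theta in enumerate(thetas):
                k = gabor_kernel(ksize, sigma=2.0, theta=theta, lambd=4.0)
                # Same kernel replicated across input channels (simplification).
                self.fixed.weight[o] = k.expand(in_ch, ksize, ksize)
        self.fixed.weight.requires_grad_(False)  # frozen: no gradients, no updates
        # Trainable branch: standard randomly initialized kernels.
        self.learned = nn.Conv2d(in_ch, trainable_ch, ksize, padding=ksize // 2, bias=False)

    def forward(self, x):
        # Concatenate fixed and learned feature maps along the channel axis.
        return torch.cat([self.fixed(x), self.learned(x)], dim=1)


if __name__ == "__main__":
    layer = GaborAssistedConv(in_ch=1, gabor_ch=4, trainable_ch=4, ksize=5)
    out = layer(torch.randn(1, 1, 28, 28))
    print(out.shape)  # torch.Size([1, 8, 28, 28])
```

Splitting the layer into a frozen branch and a trainable branch means gradients and weight updates are computed only for the trainable kernels, which is where the training time and energy savings reported in the abstract would come from. Partially training the Gabor kernels for a small fraction of the training cycles, as the paper suggests, would amount to temporarily re-enabling gradients on the fixed branch for those epochs.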
