Pacific Rim Conference on Communications, Computers and Signal Processing

Functionally-Predefined Kernel: a Way to Reduce CNN Computation


Abstract

Convolutional Neural Networks (CNNs) have achieved high classification accuracy in image recognition and are now widely used in numerous applications. Achieving higher accuracy or supporting more advanced applications requires tremendous computational resources and time, so many studies on reducing the computational cost of CNNs are actively being conducted. However, many previous cost-reduction methods lead to a non-negligible loss in output accuracy. It therefore remains a challenge to reduce the computational cost of CNNs while keeping their output accuracy high. In this paper, we propose a novel concept, the "Functionally-Predefined Kernel", to reduce the computational cost of CNN training, and we discuss the potential of computation reuse to reduce the computational cost of CNN inference. Our experimental results show that the number of parameters to be trained can be significantly reduced by using Functionally-Predefined Kernels without any loss of accuracy. In addition, we reveal that a CNN's inference process includes many convolution operations with identical inputs; computation reuse therefore has a high affinity with CNN computation.
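A minimal sketch may help make the Functionally-Predefined Kernel idea concrete. The abstract does not specify how the predefined kernels are constructed, so the PyTorch example below is purely illustrative: the class name PredefinedConv and the use of Sobel edge filters as stand-in predefined kernels are assumptions, not the authors' design. What it demonstrates is the core mechanism the abstract describes: fixed kernels are registered as buffers rather than parameters, so they are excluded from training and reduce the trainable parameter count.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PredefinedConv(nn.Module):
    """Convolution with fixed (non-trainable) kernels.

    Illustrative only: Sobel filters stand in for the paper's
    Functionally-Predefined Kernels, whose actual construction
    is not reproduced here.
    """

    def __init__(self):
        super().__init__()
        sobel_x = torch.tensor([[-1., 0., 1.],
                                [-2., 0., 2.],
                                [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        # Shape (out_channels, in_channels, kH, kW); grayscale input assumed.
        kernels = torch.stack([sobel_x, sobel_y]).unsqueeze(1)
        # A buffer is saved with the model but never updated by the
        # optimizer, so these kernels contribute zero trainable parameters.
        self.register_buffer("weight", kernels)

    def forward(self, x):
        return F.conv2d(x, self.weight, padding=1)

model = nn.Sequential(PredefinedConv(), nn.ReLU(), nn.Flatten(),
                      nn.Linear(2 * 28 * 28, 10))  # only the Linear trains
x = torch.randn(1, 1, 28, 28)
print(model(x).shape)                              # torch.Size([1, 10])
print(sum(p.numel() for p in model.parameters()))  # 15690, all in the Linear
```

The computation-reuse side can be sketched in the same hedged way. The MemoizedConv class below is hypothetical: it caches each patch-kernel product keyed by the patch's raw bytes, so a convolution whose input patch has already been seen costs a table lookup instead of multiply-accumulate operations. The paper's actual reuse mechanism is not detailed in the abstract and may differ (for example, a hardware memo table).

```python
import numpy as np

class MemoizedConv:
    """Toy computation reuse for inference-time convolution.

    Caches the dot product of each 3x3 input patch with the kernel,
    keyed by the patch's value, and counts how often a result is reused.
    """

    def __init__(self, kernel: np.ndarray):
        self.kernel = kernel
        self.cache = {}
        self.hits = 0

    def __call__(self, patch: np.ndarray) -> float:
        key = patch.tobytes()
        if key in self.cache:
            self.hits += 1      # reuse: no multiply-accumulates performed
        else:
            self.cache[key] = float((patch * self.kernel).sum())
        return self.cache[key]

conv = MemoizedConv(np.ones((3, 3)))
# Binary activations (assumed here) make identical patches common.
img = np.random.randint(0, 2, size=(8, 8)).astype(float)
outs = [conv(img[i:i + 3, j:j + 3]) for i in range(6) for j in range(6)]
print(f"{conv.hits}/36 patch convolutions served from the cache")
```

The more the activation values repeat (as with low-precision or binarized feature maps), the higher the cache hit rate, which is the sense in which computation reuse has a high affinity with CNN inference.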
