IEEE Transactions on Computers

Efficient Mitchell’s Approximate Log Multipliers for Convolutional Neural Networks


Abstract

This paper proposes energy-efficient approximate multipliers based on Mitchell's log multiplication, optimized for performing inference on convolutional neural networks (CNNs). Several design techniques are applied to the log multiplier, including a fully parallel leading-one detector (LOD), efficient shift-amount calculation, and exact zero computation. Additionally, truncation of the operands is studied to create a customizable log multiplier that further reduces energy consumption. The paper also proposes using one's complement to handle negative numbers, as an approximation of the two's complement used in prior works. The viability of the proposed designs is supported by detailed formal analysis as well as experimental results on CNNs. The experiments also provide insights into the effect of approximate multiplication in CNNs, identifying the importance of minimizing the range of error. The proposed customizable design at w = 8 saves up to 88 percent of energy compared to an exact 32-bit fixed-point multiplier, with a performance degradation of only 0.2 percent on the ImageNet ILSVRC2012 dataset.
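
To make the underlying technique concrete, recall Mitchell's approximation: a positive number n = 2^k (1 + x) with 0 <= x < 1 has log2(n) ~= k + x, so a product can be computed by adding the two approximate logs and taking the antilog. The sketch below is a minimal software model of this idea for unsigned integers, not the paper's hardware design; the function name mitchell_mul, the frac_bits parameter, and the choice of Python are illustrative, and the paper's one's-complement sign handling and operand truncation are not modeled.

    def mitchell_mul(a: int, b: int, frac_bits: int = 16) -> int:
        """Approximate a*b with Mitchell's log multiplication (sketch).

        Each operand n = 2**k * (1 + x), 0 <= x < 1, is approximated by
        log2(n) ~= k + x; the two logs are added and the antilog is taken.
        frac_bits is the fixed-point width used for the fraction x.
        """
        if a == 0 or b == 0:          # exact zero computation
            return 0
        # leading-one detection (the role of the LOD in hardware)
        k1, k2 = a.bit_length() - 1, b.bit_length() - 1
        # fractional parts x1, x2 as fixed-point integers scaled by 2**frac_bits
        x1 = ((a - (1 << k1)) << frac_bits) >> k1
        x2 = ((b - (1 << k2)) << frac_bits) >> k2
        # add the approximate logs: (k1 + x1) + (k2 + x2)
        k, x = k1 + k2, x1 + x2
        if x >= (1 << frac_bits):     # fraction carried past 1.0
            k += 1
            x -= 1 << frac_bits
        # antilog: 2**k * (1 + x), shifted back out of fixed point
        mantissa = (1 << frac_bits) + x
        return mantissa << (k - frac_bits) if k >= frac_bits else mantissa >> (frac_bits - k)

    print(mitchell_mul(5, 7))   # prints 32; the exact product is 35

The approximation is exact when both operands are powers of two and otherwise underestimates the true product, with a worst-case relative error of roughly 11 percent; the abstract's emphasis on minimizing the range of error refers to controlling this kind of deviation.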
