Journal: Microprocessors and Microsystems

VLSI implementation of transcendental function hyperbolic tangent for deep neural network accelerators



Abstract

The extensive use of neural network applications has prompted researchers to customize designs that speed up computation through ASIC implementation. The choice of activation function (AF) in a neural network is an essential requirement. Designing an accurate architecture for an AF in a digital network faces various challenges, as AFs require more hardware resources because of their non-linear nature. This paper proposes an efficient approximation scheme for the hyperbolic tangent (tanh) function based purely on a combinational design architecture. The approximation is derived from mathematical analysis that considers the maximum allowable error in a neural network. The results show that the proposed combinational design of the AF is efficient in terms of area, power, and delay, with negligible accuracy loss on the MNIST and CIFAR-10 benchmark datasets. Post-synthesis results show that, compared to the state of the art, the proposed design reduces area by 66% and delay by nearly 16%.
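The abstract does not specify the paper's exact approximation, but the general idea of a hardware-friendly, error-bounded tanh approximation can be illustrated. The sketch below uses a symmetric piecewise-linear scheme with hypothetical breakpoints and slopes (not the authors' design) and checks its worst-case error against the exact function; such piecewise forms map naturally to small combinational logic.

```python
import math

def tanh_pwl(x: float) -> float:
    """Illustrative piecewise-linear tanh approximation, symmetric about zero.

    Breakpoints (1.0 and 3.0) and segment slopes are assumptions chosen
    for simplicity, not values from the paper.
    """
    sign = -1.0 if x < 0 else 1.0
    a = abs(x)
    if a >= 3.0:
        # Saturation region: tanh(x) is within ~0.005 of +/-1
        y = 1.0
    elif a >= 1.0:
        # Mid region: linear segment from (1, tanh(1)) to (3, tanh(3))
        y = 0.7616 + (a - 1.0) * (0.9951 - 0.7616) / 2.0
    else:
        # Near-origin region: single linear segment through (1, tanh(1))
        y = a * 0.7616
    return sign * y

# Worst-case error of this illustrative scheme over [-4, 4]
max_err = max(abs(tanh_pwl(x / 100.0) - math.tanh(x / 100.0))
              for x in range(-400, 401))
```

In an actual combinational implementation, the multiplications above would become fixed-point shift-and-add networks, and the segment count would be chosen so the worst-case error stays below the network's maximum allowable error.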
