An efficient hardware implementation of feed-forward neural networks

Abstract

This paper proposes a new approach to the digital hardware implementation of nonlinear activation functions in feed-forward neural networks. The basic idea of the new realization is that nonlinear functions can be implemented with a matrix-vector multiplication. A recently proposed approach for the efficient realization of matrix-vector multipliers can be applied to implementing nonlinear functions, provided these functions are approximated by simple basis functions. The paper proposes B-spline basis functions for approximating nonlinear sigmoidal functions, shows that this approximation fulfils the general requirements on activation functions, presents the details of the proposed hardware implementation, and summarizes an extensive study of how the B-spline realization of the nonlinearity affects the size and trainability of feed-forward neural networks.
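
As a rough illustration of the idea (not the paper's hardware design), the sketch below approximates the logistic sigmoid with cubic B-spline basis functions fitted by least squares, so that evaluating the activation reduces to a matrix-vector product between a basis matrix and a coefficient vector. The knot placement, spline degree, and fitting range are illustrative assumptions, and the sketch relies on SciPy >= 1.8 for BSpline.design_matrix.

```python
import numpy as np
from scipy.interpolate import BSpline

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

k = 3                                       # cubic B-splines (illustrative choice)
sites = np.linspace(-6.0, 6.0, 13)          # knot sites over the active input range
# Clamped knot vector: endpoint multiplicity k + 1.
t = np.concatenate(([sites[0]] * k, sites, [sites[-1]] * k))

# Basis matrix: row i holds every B-spline basis function evaluated at grid[i].
grid = np.linspace(-6.0, 6.0, 400)
B = BSpline.design_matrix(grid, t, k).toarray()

# Least-squares coefficients c such that B @ c approximates sigmoid(grid).
c, *_ = np.linalg.lstsq(B, sigmoid(grid), rcond=None)

# Evaluating the activation at new inputs is now a matrix-vector product.
x_new = np.array([-2.0, 0.0, 1.5])
approx = BSpline.design_matrix(x_new, t, k).toarray() @ c
print(np.max(np.abs(approx - sigmoid(x_new))))   # small approximation error
```

Because B-spline basis functions have local support, each row of the basis matrix has at most k + 1 = 4 nonzero entries, which is what keeps the per-evaluation matrix-vector product small; the coefficient vector can be computed offline and stored.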