IEEE Transactions on Computers

Finite precision error analysis of neural network hardware implementations



Abstract

Through parallel processing, low precision fixed point hardware can be used to build a very high speed neural network computing engine, where the low precision yields a drastic reduction in system cost. The smaller silicon area needed to implement a single processing unit makes it possible to place multiple processing units on a single piece of silicon and operate them in parallel. The key question that arises is how much precision is required to implement neural network algorithms on this low precision hardware. A theoretical analysis of the error due to finite precision computation was undertaken to determine the precision necessary for successful forward retrieving and back-propagation learning in a multilayer perceptron. This analysis can be extended into a general finite precision analysis technique by which most neural network algorithms may be evaluated under any set of hardware constraints.
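The trade-off the abstract describes can be illustrated in simulation: quantize weights and activations to a given number of fractional bits, run the forward pass of a small multilayer perceptron, and compare against the full-precision result. The sketch below is illustrative only and does not reproduce the paper's theoretical analysis; the network shape, weight distribution, and `quantize` helper are all assumptions for the example.

```python
import numpy as np

def quantize(x, frac_bits):
    # Round-to-nearest fixed-point quantization with `frac_bits`
    # fractional bits (illustrative model of low-precision hardware).
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def forward(x, weights, frac_bits=None):
    """Forward pass ("forward retrieving") of a small MLP with tanh
    units; if frac_bits is given, weights and activations are
    quantized to simulate fixed-point arithmetic."""
    a = x
    for W in weights:
        if frac_bits is not None:
            W = quantize(W, frac_bits)
            a = quantize(a, frac_bits)
        a = np.tanh(a @ W)
    return a

rng = np.random.default_rng(0)
weights = [rng.normal(0.0, 0.5, (8, 8)),   # hidden layer, assumed shape
           rng.normal(0.0, 0.5, (8, 1))]   # output layer
x = rng.normal(0.0, 1.0, (1, 8))

exact = forward(x, weights)                 # full float64 reference
for bits in (4, 8, 12):
    approx = forward(x, weights, frac_bits=bits)
    err = float(np.max(np.abs(exact - approx)))
    print(f"{bits:2d} fractional bits -> max output error {err:.2e}")
```

Running a sweep like this shows the output error shrinking as fractional bits are added, which is the empirical counterpart of the paper's question of how few bits suffice for reliable retrieval.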
