Published in: International Forum on Applications of Neural Networks to Power Systems
Finite precision error analysis for neural network learning


Abstract

The high speed desired in the implementation of many neural network algorithms, such as backpropagation learning in a multilayer perceptron (MLP), may be attained through the use of finite precision hardware. This finite precision hardware, however, is prone to errors. A method of theoretically deriving and statistically evaluating this error is presented and could be used as a guide to the details of hardware design and algorithm implementation. The paper is devoted to the derivation of the techniques involved as well as the details of the backpropagation example. The intent is to provide a general framework by which most neural network algorithms under any set of hardware constraints may be evaluated.
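As a rough illustration of the kind of statistical evaluation the abstract describes (not code from the paper itself), the sketch below simulates fixed-point rounding inside a small MLP forward pass and empirically estimates the output error relative to full precision. The network shape, bit width, and sampling scheme are illustrative assumptions.

```python
import math
import random

def quantize(x, frac_bits):
    """Round x to the nearest fixed-point value with `frac_bits` fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, W1, W2, frac_bits=None):
    """One-hidden-layer MLP forward pass.

    If frac_bits is given, every multiply and accumulate result is quantized,
    mimicking finite-precision hardware; otherwise full float precision is used.
    """
    q = (lambda v: quantize(v, frac_bits)) if frac_bits is not None else (lambda v: v)
    h = [sigmoid(q(sum(q(w * xi) for w, xi in zip(row, x)))) for row in W1]
    return sigmoid(q(sum(q(w * hi) for w, hi in zip(W2, h))))

random.seed(0)
n_in, n_hid = 4, 3  # illustrative network size
W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
W2 = [random.uniform(-1, 1) for _ in range(n_hid)]

# Statistically estimate the output error introduced by 8 fractional bits
# by averaging the absolute deviation over many random inputs.
errs = []
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(n_in)]
    errs.append(abs(forward(x, W1, W2) - forward(x, W1, W2, frac_bits=8)))
mean_err = sum(errs) / len(errs)
```

Varying `frac_bits` in such a simulation gives an empirical error-versus-precision curve, which is the sort of quantity a designer could compare against the paper's theoretical derivation when sizing hardware word lengths.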
