
LEARNING METHOD OF MULTI-LAYER PERCEPTRONS WITH N-BIT DATA PRECISION


Abstract

The present invention relates to a method for training a multilayer perceptron neural network with N-bit data precision. In conventional digital learning, quantization causes large values to overflow and small values to underflow, so the number of data-representation bits must be kept large to limit the effect of bit truncation. To solve this problem, the weighted-sum calculations in both the forward and backward passes of N-bit digital learning of the multilayer perceptron are performed with 2N-bit data precision. When the 2N-bit weighted-sum result is expressed as N-bit data for the sigmoid nonlinear transformation in the forward computation, the maximum value of the N-bit representation is set to the value corresponding to the saturation region of the sigmoid. In the backward computation, when the 2N-bit weighted-sum result is expressed as N-bit data, the maximum value of the N-bit representation is set relatively smaller than the maximum value representable with 2N bits. In addition, the weight-representation range is reduced at the beginning of learning and is expanded by a constant ratio as learning progresses until it reaches its maximum. With this learning method, 8-bit digital learning can achieve the performance of 16-bit digital learning.
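The abstract describes the scheme only in prose. The sketch below illustrates, in Python, one way the three ideas could look for N = 8: a 2N-bit accumulator for weighted sums, separate N-bit quantization rules for the forward and backward passes, and a weight range that widens by a constant ratio during training. The function names, the saturation threshold `SIGMOID_SAT`, and the `scale` argument are illustrative assumptions and do not come from the patent.

```python
import numpy as np

# Hypothetical parameters for illustration; the patent abstract does not fix them.
N = 8                            # data precision in bits
Q_MAX = 2 ** (N - 1) - 1         # largest signed N-bit integer (127 for N = 8)
SIGMOID_SAT = 8.0                # pre-activation magnitude treated as sigmoid saturation

def weighted_sum_2n(x_q, w_q):
    """Accumulate the weighted sum of N-bit inputs and weights
    with 2N-bit precision (int16 for N = 8)."""
    return np.dot(x_q.astype(np.int16), w_q.astype(np.int16))

def quantize_forward(acc_2n, scale):
    """Express the 2N-bit weighted sum as N-bit data for the sigmoid:
    the maximum N-bit value is mapped to the sigmoid saturation region,
    since larger pre-activations add no information after the sigmoid."""
    pre_act = float(acc_2n) * scale              # back to a real-valued pre-activation
    pre_act = np.clip(pre_act, -SIGMOID_SAT, SIGMOID_SAT)
    return np.int8(round(pre_act / SIGMOID_SAT * Q_MAX))

def quantize_backward(acc_2n, range_max):
    """Express the 2N-bit weighted sum as N-bit data in the backward pass,
    with the N-bit maximum chosen smaller than the 2N-bit maximum so that
    small error terms are not truncated away."""
    clipped = np.clip(float(acc_2n), -range_max, range_max)
    return np.int8(round(clipped / range_max * Q_MAX))

def expand_weight_range(current_max, growth_ratio, absolute_max):
    """Start with a reduced weight-representation range and widen it by a
    constant ratio as learning progresses, until the maximum is reached."""
    return min(current_max * growth_ratio, absolute_max)

# Example with small values, so the int16 accumulator cannot overflow:
x_q = np.array([3, -5, 7], dtype=np.int8)
w_q = np.array([2, 4, -1], dtype=np.int8)
acc = weighted_sum_2n(x_q, w_q)                  # 2N-bit accumulator: -21
y_q = quantize_forward(acc, scale=0.05)          # N-bit input to the sigmoid
```

In such a setup, a training loop would call `expand_weight_range` once per epoch (for example with a growth ratio slightly above 1), matching the abstract's idea of a weight range that starts narrow and reaches its maximum as learning progresses.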

