
The Improvements of BP Neural Network Learning Algorithm


Abstract

The back-propagation (BP) algorithm is a well-known method for training multilayer feed-forward artificial neural networks (FFANNs). Although the algorithm is successful, it has some disadvantages. Because the BP neural network relies on a gradient method, two problems cannot be avoided: slow learning convergence and easy entrapment in local minima. In addition, the convergence of a BP neural network depends on the choice of the learning factor and the inertial factor, which are usually set by experience. These drawbacks limit the effective application of BP neural networks. In this paper, a new method for avoiding local minima in the BP algorithm is proposed, based on gradually adding training data and hidden units. The paper also proposes a new model of a controllable feed-forward neural network.
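For context, the sketch below (an assumption, not taken from the paper) shows the standard BP gradient update that the abstract criticizes: a single-hidden-layer network trained with a learning factor `eta` (learning rate) and an inertial factor `alpha` (momentum coefficient), both chosen by hand. The function name `train_bp` and the XOR example are illustrative only; the authors' improved procedure of gradually adding training data and hidden units is not reproduced here.

```python
# Minimal sketch of the baseline BP gradient method with a learning factor
# (eta) and an inertial/momentum factor (alpha). Illustrative only; this is
# NOT the paper's proposed improvement.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bp(X, y, n_hidden=4, eta=0.5, alpha=0.9, epochs=5000, seed=0):
    """One-hidden-layer feed-forward net trained by gradient descent with momentum."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))  # input -> hidden weights
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, y.shape[1]))  # hidden -> output weights
    b2 = np.zeros(y.shape[1])
    dW1 = np.zeros_like(W1); db1 = np.zeros_like(b1)          # momentum (inertial) buffers
    dW2 = np.zeros_like(W2); db2 = np.zeros_like(b2)

    for _ in range(epochs):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # backward pass for squared-error loss with sigmoid units
        delta_out = (out - y) * out * (1 - out)
        delta_hid = (delta_out @ W2.T) * h * (1 - h)

        # gradient step: -eta times the gradient plus alpha times the previous step
        dW2 = -eta * (h.T @ delta_out) + alpha * dW2
        db2 = -eta * delta_out.sum(axis=0) + alpha * db2
        dW1 = -eta * (X.T @ delta_hid) + alpha * dW1
        db1 = -eta * delta_hid.sum(axis=0) + alpha * db1
        W2 += dW2; b2 += db2
        W1 += dW1; b1 += db1

    return W1, b1, W2, b2

# Usage: XOR, a small task where a poor choice of eta/alpha can leave the
# network stuck in a flat region or local minimum.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1, W2, b2 = train_bp(X, y)
print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))
```

If `eta` is too large the error oscillates, and if it is too small convergence is very slow; since both `eta` and `alpha` are tuned by experience, this illustrates the sensitivity the abstract identifies as a main weakness of the basic BP algorithm.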
