
Solving the local minimum and flat-spot problem by modifying wrong outputs for feed-forward neural networks

Abstract

The backpropagation (BP) algorithm, widely used in supervised learning, is extensively applied to training feed-forward neural networks. Many modifications have been proposed to speed up the convergence of the standard BP algorithm, but they seldom focus on improving its global convergence capability. This paper proposes a new algorithm, called Wrong Output Modification (WOM), to improve the global convergence capability of a fast learning algorithm. When the learning process is trapped in a local minimum or a flat-spot area, WOM looks for outputs that sit at the extreme opposite to their target outputs, then modifies those outputs systematically so that they move closer to their targets; the weights of the corresponding neurons are changed accordingly. These changes are intended to let the learning process escape from the local minimum or flat-spot area and then converge. The performance investigation shows that the proposed algorithm can be combined with different fast learning algorithms and significantly improves their global convergence capability compared with the original algorithms. Moreover, some statistical data obtained from this algorithm can be used to gauge the difficulty of a learning problem.
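The abstract describes WOM only at a high level. As a minimal sketch of the idea for a sigmoid output layer, the following Python snippet flags outputs saturated at the wrong extreme, nudges them toward their targets, and back-computes the corresponding weight change. The saturation threshold `eps`, the modification step size, and the least-change weight-update rule are assumptions for illustration, not the paper's actual procedure.

```python
import numpy as np

def logit(p):
    """Inverse of the sigmoid activation."""
    return np.log(p / (1.0 - p))

def wom_step(W, h, y, t, eps=0.1, step=0.5):
    """One illustrative Wrong Output Modification pass.

    W    : output-layer weights, shape (n_out, n_hidden)
    h    : hidden activations for one pattern, shape (n_hidden,)
    y    : current sigmoid outputs, shape (n_out,)
    t    : binary targets in {0, 1}, shape (n_out,)
    eps  : saturation threshold (assumed; the abstract does not
           give the exact criterion for a "wrong" output)
    step : fraction of the gap to the target to close (assumed)
    """
    # Flag outputs stuck at the extreme opposite to their target,
    # e.g. y ~ 0 while t = 1 (a classic flat-spot symptom).
    wrong = ((t > 0.5) & (y < eps)) | ((t < 0.5) & (y > 1.0 - eps))
    if not wrong.any():
        return W

    # Pull each wrong output part of the way toward its target,
    # keeping values inside (0, 1) so the inverse sigmoid exists.
    y_c = np.clip(y, 1e-6, 1.0 - 1e-6)
    y_new = y_c.copy()
    y_new[wrong] += step * (t[wrong] - y_c[wrong])

    # Required change in pre-activation, for modified neurons only.
    dz = np.where(wrong, logit(y_new) - logit(y_c), 0.0)

    # Spread the change over each neuron's incoming weights so the
    # new pre-activation becomes (old z + dz) for this pattern.
    return W + np.outer(dz, h) / (h @ h + 1e-12)
```

In a full training loop, a step like this would be triggered only when ordinary gradient-based learning stalls at a local minimum or flat spot, after which normal training resumes from the modified weights.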