Conference: Intelligent Data Engineering and Automated Learning

A note on learning automata based schemes for adaptation of BP parameters



Abstract

Backpropagation is often used as the learning algorithm in layered-structure neural networks because of its efficiency. However, backpropagation is not free from problems. The learning process sometimes gets trapped in a local minimum and the network cannot produce the required response. In addition, the algorithm has a number of parameters, such as the learning rate (μ), momentum factor (α), and steepness parameter (λ), whose values are not known in advance and must be determined by trial and error. The appropriate selection of these parameters has a large effect on the convergence of the algorithm. Many techniques that adaptively adjust these parameters have been developed to increase the speed of convergence. A recently developed class of algorithms uses learning automata (LA) to adjust the parameters μ, α, and λ based on observation of the random response of the neural network. One important aspect of LA-based schemes is their remarkable effectiveness in increasing the speed of convergence. Another important aspect, which has not been pointed out earlier, is their ability to escape from local minima with high probability during the training period. In this report we study the ability of LA-based schemes to escape from local minima when standard BP fails to find the global minimum. It is demonstrated through simulation that LA-based schemes have a higher ability to escape from local minima than other schemes such as SAB, SuperSAB, Fuzzy BP, the ASBP method, and the VLR method.
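The mechanism the abstract describes, a learning automaton that selects a BP parameter value and updates its action probabilities from the observed training response, can be sketched as follows. This is a minimal illustration under assumed details (the discrete action set, the reward step size `A`, and a toy one-dimensional error surface are all inventions for the example), not the authors' exact scheme: a linear reward-inaction (L_R-I) automaton picks the learning rate μ, is rewarded when the error decreases, and for simplicity a step is applied only when it improves the error.

```python
import random

# Sketch of a linear reward-inaction (L_R-I) learning automaton
# adapting the BP learning rate mu. All concrete values below are
# assumptions for illustration, not taken from the paper.

ACTIONS = [0.01, 0.05, 0.1, 0.5]   # candidate learning rates (assumed set)
A = 0.1                            # reward step size (assumed)

def f(w):
    # toy non-convex "error surface" with distinct local and global minima
    return w**4 - 3 * w**2 + w

def grad(w):
    return 4 * w**3 - 6 * w + 1

def train(steps=2000, seed=0):
    rng = random.Random(seed)
    p = [1.0 / len(ACTIONS)] * len(ACTIONS)   # action probabilities
    w, err = 2.0, f(2.0)
    for _ in range(steps):
        # sample a learning rate according to the current probabilities
        i = rng.choices(range(len(ACTIONS)), weights=p)[0]
        mu = ACTIONS[i]
        w_new = w - mu * grad(w)              # plain gradient step
        err_new = f(w_new)
        if err_new < err:
            # favourable response: reward the chosen action
            for j in range(len(ACTIONS)):
                p[j] = p[j] + A * (1 - p[j]) if j == i else p[j] * (1 - A)
            w, err = w_new, err_new
        # L_R-I: on an unfavourable response the probabilities are unchanged
    return w, err

w, err = train()
```

Over the run, actions that repeatedly lower the error accumulate probability mass, so the automaton concentrates on step sizes appropriate to the current region of the error surface; the full schemes in the paper apply the same probability-update idea to μ, α, and λ during actual BP training.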
