IASTED International Conference on Intelligent Systems and Control

Integration of Back Propagation Algorithm and Least-Squares Method for Fast Training Multilayer Perceptrons


Abstract

In this paper, a novel fast training algorithm for multilayer perceptrons, called IBPLSM, is proposed; it integrates the Back Propagation algorithm (BP) with the least-squares method (LSM). Back Propagation is currently the most widely used learning algorithm for artificial neural networks. With a properly selected feed-forward network architecture and parameters, it can approximate most problems with high accuracy and good generalization ability. However, slow convergence and easy trapping in local minima are two well-known drawbacks of Back Propagation in many applications. The least-squares method is also an efficient way to train multilayer perceptrons because of its fast convergence, but it is not suitable for training large-scale neural networks. To remedy the slow convergence of Back Propagation and to extend the least-squares method to large-scale networks, an integrated training algorithm is proposed. To demonstrate how it works, the new algorithm has been implemented and tested on several real-world problems. The experimental results show that the proposed method converges faster than the original Back Propagation algorithm and several enhanced learning algorithms, can escape from local minima to reach a lower convergence error, and also works well for training some large-scale neural networks.
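The abstract does not spell out the exact IBPLSM update rule, so the following is only a minimal illustrative sketch of one common way to combine the two ideas for a one-hidden-layer perceptron: the output-layer weights are refit by a linear least-squares solve on the current hidden activations, while the hidden-layer weights follow an ordinary back-propagation gradient step. The function name train_hybrid and parameters n_hidden, epochs, and lr are hypothetical and not taken from the paper.

# Illustrative BP + least-squares hybrid (assumption, not the paper's exact IBPLSM).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_hybrid(X, Y, n_hidden=16, epochs=200, lr=0.1, seed=0):
    """X: (n_samples, n_in), Y: (n_samples, n_out); linear output units, MSE loss."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))   # input -> hidden weights
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))  # hidden -> output weights
    b2 = np.zeros(n_out)

    for _ in range(epochs):
        # Forward pass through the hidden layer.
        H = sigmoid(X @ W1 + b1)

        # Least-squares step: refit output weights (and bias) on the hidden features.
        H_aug = np.hstack([H, np.ones((H.shape[0], 1))])
        sol, *_ = np.linalg.lstsq(H_aug, Y, rcond=None)
        W2, b2 = sol[:-1], sol[-1]

        # Back-propagation step: one gradient update for the hidden-layer weights.
        Y_hat = H @ W2 + b2
        delta_out = (Y_hat - Y) / X.shape[0]             # dE/dY_hat for MSE
        delta_hid = (delta_out @ W2.T) * H * (1.0 - H)   # back through the sigmoid
        W1 -= lr * X.T @ delta_hid
        b1 -= lr * delta_hid.sum(axis=0)

    return W1, b1, W2, b2

if __name__ == "__main__":
    # Toy usage check on XOR-like data.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    Y = np.array([[0], [1], [1], [0]], dtype=float)
    W1, b1, W2, b2 = train_hybrid(X, Y, n_hidden=8, epochs=500, lr=0.5)
    print(np.round(sigmoid(X @ W1 + b1) @ W2 + b2, 2))

Solving the output layer exactly each epoch is what gives such hybrids their fast early convergence, while the gradient step keeps adapting the hidden features; this matches the motivation stated in the abstract, though the paper's own integration may differ in detail.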
