Foreign conference paper: Biennial Conference of the Canadian Society for Computational Studies of Intelligence

Accelerated Backpropagation Learning: Extended Dynamic Parallel Tangent Optimization Algorithm



Abstract

The backpropagation algorithm is an iterative gradient descent algorithm designed to train multilayer neural networks. Despite the algorithm's popularity and effectiveness, the orthogonal steps (zigzagging) it takes near the optimum point slow down its convergence. To overcome this inefficiency of the conventional backpropagation algorithm, one of the authors earlier proposed a deflecting gradient technique to improve the convergence of backpropagation learning, called the Partan backpropagation learning algorithm [3]. The convergence time of multilayer networks was further improved through dynamic adaptation of their learning rates [6]. In this paper, an extension to the dynamic parallel tangent learning algorithm is proposed, in which each connection has its own learning rate as well as its own acceleration rate. These individual rates are dynamically adapted as learning proceeds. Simulation studies are carried out on different learning problems, and a faster rate of convergence is achieved for all problems used in the simulations.
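The parallel tangent (Partan) idea behind the abstract can be sketched as plain gradient descent interleaved with acceleration steps along the deflecting direction from the point two steps back, which damps zigzagging on ill-conditioned surfaces. The sketch below is a minimal, generic illustration of that scheme on an arbitrary gradient, not the authors' per-connection adaptive algorithm; the function name `partan_descent` and the fixed `lr`/`accel` coefficients are assumptions for illustration.

```python
import numpy as np

def partan_descent(grad, x0, lr=0.05, accel=0.3, steps=100):
    """Minimal parallel-tangent (Partan) gradient descent sketch.

    Alternates a plain gradient step with an acceleration step along
    the deflecting direction (current point minus the point two steps
    back); this deflection damps the zigzagging of plain descent.
    """
    x_old = np.asarray(x0, dtype=float)   # point two steps back
    x = x_old - lr * grad(x_old)          # initial plain gradient step
    for _ in range(steps):
        y = x - lr * grad(x)              # gradient step
        x_new = y + accel * (y - x_old)   # acceleration (deflection) step
        x_old, x = x, x_new
    return x

# Ill-conditioned quadratic f(x) = 0.5 * x.T @ A @ x, where plain
# gradient descent is prone to zigzagging between the axes.
A = np.diag([1.0, 10.0])
grad = lambda x: A @ x
x_star = partan_descent(grad, np.array([1.0, 1.0]))
```

With a fixed deflection coefficient this behaves much like heavy-ball momentum; the paper's extension instead adapts a separate learning rate and acceleration rate for each connection as training proceeds.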
