IEEE Region 10 Annual Conference

Accelerating parallel tangent learning for neural networks through dynamic self-adaptation



Abstract

In gradient-based learning algorithms, momentum usually improves the convergence rate and reduces zigzagging, but it sometimes slows convergence instead. The parallel tangent (partan) gradient is used as a deflecting method to improve convergence. In this paper, we modify the gradient partan algorithm for training neural networks by using two different learning rates, one for the gradient search and the other for the acceleration along the parallel tangent. Moreover, dynamic self-adaptation of the learning rates is used to improve performance: each learning rate is adapted locally to the cost-function landscape and to its own previous value. Finally, we test the proposed algorithm, called accelerated partan, on problems such as XOR and the encoder problems, and compare the results with those of dynamic self-adaptation of the learning rate and momentum.
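The abstract only outlines the method at a high level. Below is a minimal sketch of what an accelerated-partan iteration with two self-adapted learning rates might look like, illustrated on a small ill-conditioned quadratic rather than an actual neural network. The names (cost, grad, self_adapt, eta_grad, eta_acc, zeta) and the particular adaptation rule of comparing a larger and a smaller candidate rate are illustrative assumptions, not details taken from the paper.

```python
# Sketch of an accelerated-partan-style optimiser with dynamic self-adaptation
# of two learning rates: one for the gradient step, one for the acceleration
# step along the parallel-tangent direction. Toy quadratic cost, not a network.
import numpy as np

def cost(w):
    # Ill-conditioned quadratic, chosen to provoke the zigzagging that partan damps.
    return 0.5 * (w[0] ** 2 + 25.0 * w[1] ** 2)

def grad(w):
    return np.array([w[0], 25.0 * w[1]])

def self_adapt(w, direction, eta, zeta=1.5):
    """One common form of dynamic self-adaptation (an assumption here):
    try a larger and a smaller learning rate along the given direction
    and keep whichever yields the lower cost."""
    candidates = [eta * zeta, eta / zeta]
    trials = [(cost(w + e * direction), e) for e in candidates]
    _, best_eta = min(trials)
    return best_eta

def accelerated_partan(w0, iterations=50):
    eta_grad, eta_acc = 0.01, 0.01      # separate rates for the two step types
    w_prev2 = w0.copy()                 # point from two iterations back
    # initial plain gradient step
    d = -grad(w0)
    eta_grad = self_adapt(w0, d, eta_grad)
    w = w0 + eta_grad * d
    for _ in range(iterations):
        # 1) gradient step with its own self-adapted learning rate
        d = -grad(w)
        eta_grad = self_adapt(w, d, eta_grad)
        w_grad = w + eta_grad * d
        # 2) partan acceleration along the line joining the point two steps
        #    back to the new gradient-step point, with its own rate
        a = w_grad - w_prev2
        eta_acc = self_adapt(w_grad, a, eta_acc)
        w_new = w_grad + eta_acc * a
        w_prev2, w = w, w_new
    return w

if __name__ == "__main__":
    w_final = accelerated_partan(np.array([5.0, 1.0]))
    print("final point:", w_final, "cost:", cost(w_final))
```

In the paper's setting the quadratic would be replaced by the network's error function and its backpropagated gradient, with each of the two learning rates adapted locally in the same spirit.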
