Theoretical Computer Science

Global convergence of Oja's PCA learning algorithm with a non-zero-approaching adaptive learning rate

Abstract

A non-zero-approaching adaptive learning rate is proposed to guarantee the global convergence of Oja's principal component analysis (PCA) learning algorithm. Most existing adaptive learning rates for Oja's PCA learning algorithm are required to approach zero as the learning step increases. However, this is not practical in many applications because of computational round-off limitations and tracking requirements. The proposed adaptive learning rate overcomes this shortcoming: it converges to a positive constant, so the evolution rate does not vanish as the learning step increases. This is different from learning rates that approach zero, which slow convergence considerably, and increasingly so, over time. Rigorous mathematical proofs of the global convergence of Oja's algorithm with the proposed learning rate are given in detail by studying the convergence of an equivalent deterministic discrete-time (DDT) system. Extensive simulations are carried out to illustrate and verify the derived theory. The simulation results show that this adaptive learning rate makes Oja's PCA algorithm better suited to online learning.
