Journal: Computers & Mathematics with Applications

Global Convergence of a PCA Learning Algorithm with a Constant Learning Rate


Abstract

In most existing principal component analysis (PCA) learning algorithms, the learning rate is required to approach zero as the learning step increases. In many practical applications, however, constant learning rates must be used because of computational round-off limitations and tracking requirements. This paper proposes a PCA learning algorithm with a constant learning rate and proves, via the deterministic discrete-time (DDT) method, that the algorithm is globally convergent. Simulations are carried out to illustrate the theory.
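The class of algorithm the abstract describes can be illustrated with a minimal sketch. The paper's exact update rule and DDT analysis are not reproduced here; the code below uses the classical Oja rule with a constant learning rate `eta` as a stand-in example, and the function name `oja_pca` and all parameter values are illustrative assumptions.

```python
import numpy as np

def oja_pca(X, eta=0.01, n_iter=2000, seed=0):
    """Estimate the first principal component of X (samples x features)
    using Oja's rule with a CONSTANT learning rate eta (it does not
    decay with the learning step)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)                   # start from a random unit vector
    for _ in range(n_iter):
        x = X[rng.integers(len(X))]          # draw one random sample
        y = x @ w                            # project the sample onto w
        w += eta * y * (x - y * w)           # Hebbian term + self-normalization
    return w / np.linalg.norm(w)

# Synthetic data whose dominant variance lies along the first axis,
# so the leading principal component should align with e1 = (1, 0, 0).
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3)) * np.array([5.0, 1.0, 0.5])
w = oja_pca(X)
```

With a fixed `eta`, the iterate does not settle exactly on the eigenvector but hovers near it; the globally convergent behavior claimed in the paper is the corresponding property of its own DDT system, not of this illustrative rule.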


