IEEE Transactions on Neural Networks

Convergence analysis of a deterministic discrete time system of Oja's PCA learning algorithm


Abstract

The convergence of Oja's principal component analysis (PCA) learning algorithms is difficult to study and analyze directly. Traditionally, the convergence of these algorithms is analyzed indirectly via certain deterministic continuous time (DCT) systems. This method requires the learning rate to converge to zero, which is not a reasonable requirement in many practical applications. Recently, deterministic discrete time (DDT) systems have been proposed instead to interpret the dynamics of the learning algorithms indirectly. Unlike DCT systems, DDT systems allow the learning rate to be a nonzero constant. This paper provides several important results on the convergence of a DDT system of Oja's PCA learning algorithm. Its contributions are as follows: 1) A number of invariant sets are obtained, based on which it is shown that any trajectory starting from a point in an invariant set remains in that set forever; thus, nondivergence of the trajectories is guaranteed. 2) The convergence of the DDT system is analyzed rigorously. It is proven that almost all trajectories of the system starting from points in an invariant set converge exponentially to the unit eigenvector associated with the largest eigenvalue of the correlation matrix. In addition, an exponential convergence rate is obtained, providing useful guidelines for selecting learning rates that yield fast convergence. 3) Since trajectories may diverge, the careful choice of initial vectors is an important issue; this paper suggests drawing initial vectors from the unit hypersphere to guarantee convergence. 4) Simulation results are furnished to illustrate the theoretical results.
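The dynamics described above can be illustrated with a minimal numerical sketch. The code below assumes the standard DDT form of Oja's rule, w(k+1) = w(k) + eta * (C w(k) - (w(k)^T C w(k)) w(k)), with a constant learning rate eta and correlation matrix C; the specific matrix, learning rate, and iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a symmetric positive semidefinite "correlation" matrix C
# with a well-defined dominant eigenvector (illustrative data).
A = rng.standard_normal((5, 5))
C = A @ A.T

eigvals, eigvecs = np.linalg.eigh(C)
principal = eigvecs[:, -1]  # unit eigenvector of the largest eigenvalue

# Constant (nonzero) learning rate, kept small relative to the
# largest eigenvalue so the trajectory stays in a bounded region.
eta = 0.02 / eigvals[-1]

# Start on the unit hypersphere, as the paper suggests for initial vectors.
w = rng.standard_normal(5)
w /= np.linalg.norm(w)

# DDT iteration of Oja's rule: w <- w + eta * (C w - (w^T C w) w)
for _ in range(10000):
    Cw = C @ w
    w = w + eta * (Cw - (w @ Cw) * w)

# Up to sign, the trajectory should align with the principal eigenvector,
# and the norm of w should approach 1.
alignment = abs(w @ principal)
print(alignment, np.linalg.norm(w))
```

With this setup the alignment approaches 1, consistent with the claimed exponential convergence to the unit principal eigenvector; starting far outside the invariant set, or choosing eta too large, can instead produce divergence.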
