
Online PCA with Optimal Regrets



Abstract

We carefully investigate the online version of PCA, where in each trial a learning algorithm plays a k-dimensional subspace and suffers the compression loss of the next instance when it is projected onto the chosen subspace. In this setting, we give regret bounds for two popular online algorithms, Gradient Descent (GD) and Matrix Exponentiated Gradient (MEG). We show that both algorithms are essentially optimal in the worst case when the regret is expressed as a function of the number of trials. This comes as a surprise, since MEG is commonly believed to perform sub-optimally when the instances are sparse. This different behavior of MEG for PCA is mainly related to the non-negativity of the loss in this case, which makes the PCA setting qualitatively different from other settings studied in the literature. Furthermore, we show that when regret bounds are considered as a function of a loss budget, MEG remains optimal and strictly outperforms GD. Next, we study a generalization of the online PCA problem in which Nature is allowed to play dense instances, i.e., positive matrices with bounded largest eigenvalue. Again we show that MEG is optimal and strictly better than GD in this setting.
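The two update rules compared in the abstract can be sketched concretely. Below is a minimal NumPy illustration, not the paper's exact algorithms: both GD and MEG maintain a "capped" density matrix W (symmetric, eigenvalues in [0, 1], trace k) that plays the role of a soft rank-k projection, and the compression loss of an instance x is x^T(I - W)x. The step sizes, the cap-and-rescale projection for MEG, and the bisection-based Euclidean projection for GD are simplified stand-ins chosen for readability.

```python
import numpy as np

def compression_loss(W, x):
    # Loss of instance x under the (soft) projection W: x^T (I - W) x,
    # i.e. the part of x not captured by the chosen subspace.
    return float(x @ x - x @ W @ x)

def euclidean_cap(vals, k):
    # Euclidean projection of an eigenvalue vector onto
    # {v in [0,1]^n : sum v = k}, by bisecting over a shift theta.
    lo, hi = vals.min() - 1.0, vals.max()
    for _ in range(60):
        theta = 0.5 * (lo + hi)
        if np.clip(vals - theta, 0.0, 1.0).sum() > k:
            lo = theta
        else:
            hi = theta
    return np.clip(vals - 0.5 * (lo + hi), 0.0, 1.0)

def gd_step(W, x, eta, k):
    # Gradient Descent: the loss gradient in W is -x x^T, so step toward
    # the instance, then project back onto the capped spectrahedron.
    vals, vecs = np.linalg.eigh(W + eta * np.outer(x, x))
    return vecs @ np.diag(euclidean_cap(vals, k)) @ vecs.T

def meg_step(W, x, eta, k):
    # Matrix Exponentiated Gradient: multiplicative update in the
    # eigenbasis, followed by a relative-entropy-style projection,
    # sketched here as "cap eigenvalues at 1, rescale the rest".
    vals, vecs = np.linalg.eigh(W)
    logW = vecs @ np.diag(np.log(np.clip(vals, 1e-12, None))) @ vecs.T
    vals2, vecs2 = np.linalg.eigh(logW + eta * np.outer(x, x))
    v = np.exp(vals2 - vals2.max())            # stabilised exponentiation
    capped = np.zeros(len(v), dtype=bool)
    for _ in range(len(v)):                    # cap-and-rescale loop
        free = ~capped
        v[free] *= (k - capped.sum()) / v[free].sum()
        over = (v > 1.0) & free
        if not over.any():
            break
        v[over] = 1.0
        capped |= over
    return vecs2 @ np.diag(v) @ vecs2.T
```

In both cases the per-trial cost is dominated by an eigendecomposition; the qualitative difference the abstract analyzes is in the regret, not in the per-step mechanics.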
