IEEE Transactions on Pattern Analysis and Machine Intelligence

Large-Scale Low-Rank Matrix Learning with Nonconvex Regularizers

Abstract

Low-rank modeling has many important applications in computer vision and machine learning. While the matrix rank is often approximated by the convex nuclear norm, the use of nonconvex low-rank regularizers has demonstrated better empirical performance. However, the resulting optimization problem is much more challenging. Recent state-of-the-art methods require an expensive full SVD in each iteration. In this paper, we show that for many commonly used nonconvex low-rank regularizers, the singular values obtained from the proximal operator can be automatically thresholded. This allows the proximal operator to be efficiently approximated by the power method. We then develop a fast proximal algorithm and its accelerated variant with an inexact proximal step. It can be guaranteed that the squared distance between consecutive iterates converges at a rate of O(1/T), where T is the number of iterations. Furthermore, we show that the proposed algorithm can be parallelized, and the resulting algorithm achieves nearly linear speedup w.r.t. the number of threads. Extensive experiments are performed on matrix completion and robust principal component analysis. Significant speedup over the state-of-the-art is observed.
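To make the mechanism in the abstract concrete, the following is a minimal NumPy sketch of one inexact proximal step, not the authors' implementation: MCP is used here as a stand-in for the nonconvex regularizer (the paper covers several with the same automatic-thresholding property), the warm-start matrix is random where the paper reuses the factor from the previous iterate, and the function names (prox_mcp, power_basis, approx_prox) are illustrative.

    import numpy as np

    def prox_mcp(sigma, lam, theta):
        # Closed-form proximal step for the MCP penalty, applied
        # entrywise to the singular values: values <= lam become exactly
        # zero, values in (lam, theta*lam] are rescaled, larger values
        # pass through.  The exact zeros are the "automatic thresholding"
        # that keeps the output low-rank.
        scaled = (sigma - lam) * theta / (theta - 1.0)
        return np.where(sigma <= lam, 0.0,
                        np.where(sigma <= theta * lam, scaled, sigma))

    def power_basis(Z, R, n_iter=3):
        # A few power iterations from the warm start R give an
        # orthonormal basis Q of the dominant column space of Z.
        Q, _ = np.linalg.qr(Z @ R)
        for _ in range(n_iter):
            Q, _ = np.linalg.qr(Z @ (Z.T @ Q))
        return Q

    def approx_prox(Z, R, lam, theta=5.0):
        # Inexact proximal operator: project Z onto the small subspace
        # found by the power method, take the cheap reduced SVD there,
        # then shrink the singular values with the nonconvex prox.
        Q = power_basis(Z, R)                        # m x k, k << min(m, n)
        U, s, Vt = np.linalg.svd(Q.T @ Z, full_matrices=False)
        y = prox_mcp(s, lam, theta)
        keep = y > 0                                 # zeroed values are dropped
        return (Q @ U[:, keep]) * y[keep], Vt[keep]  # factors A, B with prox ~ A @ B

    # Toy usage on a synthetic low-rank matrix; in a proximal-gradient
    # loop, Z would be the gradient-step iterate and R the factor kept
    # from the previous step.
    rng = np.random.default_rng(0)
    Z = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))
    A, B = approx_prox(Z, rng.standard_normal((300, 60)), lam=5.0)
    print(A.shape, B.shape)

Because only the singular values above the threshold survive, each step works in a small subspace rather than computing a full SVD, which is where the reported speedups over full-SVD baselines come from.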
