Annual Conference on Neural Information Processing Systems

On the Linear Convergence of the Proximal Gradient Method for Trace Norm Regularization

Abstract

Motivated by various applications in machine learning, the problem of minimizing a convex smooth loss function with trace norm regularization has received much attention lately. Currently, a popular method for solving such problems is the proximal gradient method (PGM), which is known to have a sublinear rate of convergence. In this paper, we show that for a large class of loss functions, the convergence rate of the PGM is in fact linear. Our result is established without any strong convexity assumption on the loss function. A key ingredient in our proof is a new Lipschitzian error bound for the aforementioned trace norm-regularized problem, which may be of independent interest.
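As a concrete illustration (a minimal sketch, not taken from the paper), the PGM iteration for min_X f(X) + lam * ||X||_* alternates a gradient step on the smooth loss f with the proximal step of the trace norm, which reduces to soft-thresholding the singular values. The quadratic loss f(X) = 0.5 * ||AX - B||_F^2, the constant step size rule, and the names svt and proximal_gradient below are assumptions chosen for this example, not notation from the paper.

    import numpy as np

    def svt(X, tau):
        # Proximal operator of tau * trace (nuclear) norm:
        # soft-threshold the singular values of X.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def proximal_gradient(A, B, lam, iters=500):
        # PGM for min_X 0.5*||AX - B||_F^2 + lam*||X||_*  (illustrative loss).
        # Constant step 1/L, where L = ||A||_2^2 is the Lipschitz constant
        # of the gradient of the smooth part.
        L = np.linalg.norm(A, 2) ** 2
        X = np.zeros((A.shape[1], B.shape[1]))
        for _ in range(iters):
            grad = A.T @ (A @ X - B)        # gradient of the smooth loss
            X = svt(X - grad / L, lam / L)  # forward step, then prox step
        return X

Quadratic losses of this form are a natural test case for the paper's setting: they need not be strongly convex in X when A is rank-deficient, yet the result states that the PGM still converges linearly.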