JMLR: Workshop and Conference Proceedings

Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity

Abstract

We consider the convex-concave saddle point problem $\min_{x}\max_{y} f(x)+y^\top A x-g(y)$ where $f$ is smooth and convex and $g$ is smooth and strongly convex. We prove that if the coupling matrix $A$ has full column rank, the vanilla primal-dual gradient method can achieve linear convergence even if $f$ is not strongly convex. Our result generalizes previous work which either requires $f$ and $g$ to be quadratic functions or requires proximal mappings for both $f$ and $g$. We adopt a novel analysis technique that in each iteration uses a "ghost" update as a reference, and show that the iterates in the primal-dual gradient method converge to this "ghost" sequence. Using the same technique we further give an analysis for the primal-dual stochastic variance reduced gradient method for convex-concave saddle point problems with a finite-sum structure.
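For concreteness, the vanilla primal-dual gradient iteration studied in the abstract is $x_{t+1} = x_t - \eta(\nabla f(x_t) + A^\top y_t)$, $y_{t+1} = y_t + \eta(A x_t - \nabla g(y_t))$. The Python sketch below runs it on a toy instance satisfying the stated assumptions ($f$ convex but not strongly convex, $g$ strongly convex, $A$ full column rank); the specific functions, dimensions, and step-size heuristic are illustrative choices, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 30                               # x in R^n, y in R^m

# Coupling matrix A in R^{m x n}: a Gaussian matrix with m > n has full
# column rank almost surely, matching the paper's assumption on A.
A = rng.standard_normal((m, n))

# f(x) = 0.5 * ||B x||^2 with rank-deficient B: smooth and convex but not
# strongly convex, which is exactly the regime covered by the result.
B = rng.standard_normal((5, n))

def grad_f(x):
    return B.T @ (B @ x)

# g(y) = 0.5 * ||y||^2: smooth and 1-strongly convex.
def grad_g(y):
    return y

# Conservative step size from crude smoothness constants; an illustrative
# heuristic, not the step-size rule analyzed in the paper.
L = np.linalg.norm(B, 2) ** 2 + np.linalg.norm(A, 2) + 1.0
eta = 1.0 / (2.0 * L)

x = rng.standard_normal(n)
y = rng.standard_normal(m)
for t in range(20001):
    gx = grad_f(x) + A.T @ y                # gradient in x of f(x) + y^T A x - g(y)
    gy = A @ x - grad_g(y)                  # gradient in y of f(x) + y^T A x - g(y)
    x, y = x - eta * gx, y + eta * gy       # simultaneous descent in x, ascent in y
    if t % 4000 == 0:
        print(f"iter {t:6d}  distance to saddle point {np.sqrt(x @ x + y @ y):.3e}")

For this instance the first-order optimality conditions give $y = Ax$ and $(B^\top B + A^\top A)x = 0$, so the unique saddle point is the origin and the printed distance directly tracks the (linearly decaying) error.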
