IEEE Conference on Decision and Control

On the Exponential Convergence Rate of Proximal Gradient Flow Algorithms



Abstract

Many modern large-scale and distributed optimization problems can be cast into a form in which the objective function is a sum of a smooth term and a nonsmooth regularizer. Such problems can be solved via a proximal gradient method which generalizes standard gradient descent to a nonsmooth setup. In this paper, we leverage the tools from control theory to study global convergence of proximal gradient flow algorithms. We utilize the fact that the proximal gradient algorithm can be interpreted as a variable-metric gradient method on the forward-backward envelope. This continuously differentiable function can be obtained from the augmented Lagrangian associated with the original nonsmooth problem and it enjoys a number of favorable properties. We prove that global exponential convergence can be achieved even in the absence of strong convexity. Moreover, for in-network optimization problems, we provide a distributed implementation of the gradient flow dynamics based on the proximal augmented Lagrangian and prove global exponential stability for strongly convex problems.
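To make the forward-backward step concrete, here is a minimal sketch of a discrete-time proximal gradient iteration on an l1-regularized least-squares problem. This is only an illustration of the generic method the abstract describes, not the paper's continuous-time flow or its distributed implementation; the function names, step size, and test problem are all assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (the "backward" step
    # for the nonsmooth regularizer).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by alternating a
    gradient (forward) step on the smooth term with a proximal
    (backward) step on the nonsmooth term."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                          # forward step
        x = soft_threshold(x - step * grad, step * lam)   # backward step
    return x
```

With `lam = 0` the iteration reduces to plain gradient descent; the continuous-time proximal gradient flow studied in the paper can be viewed as the limit of such iterations as the step size vanishes.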
