JMLR: Workshop and Conference Proceedings

Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization Over Networks



Abstract

In this work, we study two first-order primal-dual based algorithms, the Gradient Primal-Dual Algorithm (GPDA) and the Gradient Alternating Direction Method of Multipliers (GADMM), for solving a class of linearly constrained non-convex optimization problems. We show that with random initialization of the primal and dual variables, both algorithms compute second-order stationary solutions (ss2) with probability one. This is the first result showing that a primal-dual algorithm can find ss2 using only first-order information; it also extends the existing results for first-order, primal-only algorithms. An important implication of our result is that it yields the first global convergence guarantee to ss2 for two classes of unconstrained distributed non-convex learning problems over multi-agent networks.
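To make the setting concrete, the following is a minimal sketch of a gradient primal-dual iteration of the kind the abstract describes: a gradient-descent primal step on the augmented Lagrangian followed by a gradient-ascent dual step. The specific toy problem (a 2-D double-well objective with one linear constraint), the step sizes, and the penalty parameter are illustrative assumptions, not values taken from the paper.

```python
def gpda(x1, x2, alpha=0.02, rho=1.0, iters=10000):
    """Gradient primal-dual iteration (illustrative sketch) for
        min  f(x1, x2) = (x1^2 - 1)^2 + (x2^2 - 1)^2
        s.t. x1 - x2 = 0.
    Primal step: one gradient-descent step on the augmented Lagrangian
    L(x, y) = f(x) + y*(x1 - x2) + (rho/2)*(x1 - x2)^2.
    Dual step: gradient ascent on the new constraint residual."""
    y = 0.0  # dual variable for the single constraint x1 - x2 = 0
    for _ in range(iters):
        g1 = 4.0 * x1 * (x1 * x1 - 1.0)  # df/dx1
        g2 = 4.0 * x2 * (x2 * x2 - 1.0)  # df/dx2
        r = x1 - x2                      # constraint residual Ax - b
        x1, x2 = (x1 - alpha * (g1 + y + rho * r),
                  x2 - alpha * (g2 - y - rho * r))
        y += rho * (x1 - x2)             # dual ascent on the updated residual
    return x1, x2, y
```

From a generic initialization the iterates settle at a second-order stationary point such as (1, 1) (or (-1, -1)) rather than at the strict saddle at the origin, which is the behavior the paper's probability-one result formalizes for this class of algorithms.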
