IEEE Transactions on Automatic Control

Push–Pull Gradient Methods for Distributed Optimization in Networks

Abstract

In this article, we focus on solving a distributed convex optimization problem in a network, where each agent has its own convex cost function and the goal is to minimize the sum of the agents’ cost functions while obeying the network connectivity structure. In order to minimize the sum of the cost functions, we consider new distributed gradient-based methods where each node maintains two estimates, namely an estimate of the optimal decision variable and an estimate of the gradient for the average of the agents’ objective functions. From the viewpoint of an agent, the information about the gradients is pushed to the neighbors, whereas the information about the decision variable is pulled from the neighbors, hence giving the name “push–pull gradient methods.” The methods utilize two different graphs for the information exchange among agents and, as such, unify the algorithms with different types of distributed architecture, including decentralized (peer to peer), centralized (master–slave), and semicentralized (leader–follower) architectures. We show that the proposed algorithms and their many variants converge linearly for strongly convex and smooth objective functions over a network (possibly with unidirectional data links) in both synchronous and asynchronous random-gossip settings. In particular, under the random-gossip setting, “push–pull” is the first class of algorithms for distributed optimization over directed graphs. Moreover, we numerically evaluate our proposed algorithms in both scenarios, and show that they outperform other existing linearly convergent schemes, especially for ill-conditioned problems and networks that are not well balanced.
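
To make the update rule concrete, below is a minimal NumPy sketch of the synchronous push–pull iteration as the abstract describes it: each agent keeps a decision estimate and a gradient tracker, pulls decision information from in-neighbors through a row-stochastic matrix R, and pushes gradient information to out-neighbors through a column-stochastic matrix C. The directed ring, step size, and quadratic local costs here are illustrative assumptions, not the paper's experimental setup.

import numpy as np

# Sketch of the synchronous push-pull iteration:
#   x_{k+1} = R (x_k - alpha * y_k)                 (pull decision estimates)
#   y_{k+1} = C y_k + grad(x_{k+1}) - grad(x_k)     (push gradient trackers)
# R is row-stochastic, C is column-stochastic; both encode a directed ring here.
# All problem data below are hypothetical placeholders.

n, d = 5, 3                                   # number of agents, variable dimension
rng = np.random.default_rng(0)

# Hypothetical local costs f_i(x) = 0.5 * ||A_i x - b_i||^2 (smooth, strongly convex
# for generic A_i).
A = rng.standard_normal((n, d, d)) + 2.0 * np.eye(d)
b = rng.standard_normal((n, d))

def local_grad(i, x):
    # Gradient of agent i's cost: A_i^T (A_i x - b_i).
    return A[i].T @ (A[i] @ x - b[i])

# Directed-ring mixing matrices (placeholders).
R = np.zeros((n, n))                          # row-stochastic: the pull graph
C = np.zeros((n, n))                          # column-stochastic: the push graph
for i in range(n):
    R[i, i] = R[i, (i - 1) % n] = 0.5         # pull from the in-neighbor
    C[i, i] = C[(i + 1) % n, i] = 0.5         # push to the out-neighbor

alpha = 0.05                                  # step size (assumed small enough)
x = rng.standard_normal((n, d))               # row i: agent i's decision estimate
y = np.array([local_grad(i, x[i]) for i in range(n)])  # trackers start at local gradients

for _ in range(500):
    g_old = np.array([local_grad(i, x[i]) for i in range(n)])
    x = R @ (x - alpha * y)                   # pull step on decision variables
    g_new = np.array([local_grad(i, x[i]) for i in range(n)])
    y = C @ y + g_new - g_old                 # push step tracks the average gradient

Because C is column-stochastic, the tracker rows keep summing to the current total gradient, so each y_i follows the average of the agents' gradients; with R and C built on a strongly connected directed graph and a small enough alpha, the rows of x should contract linearly toward the common minimizer of the summed costs, mirroring the linear-rate claim in the abstract.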
