Journal: Automatica

Distributed optimization over directed graphs with row stochasticity and constraint regularity



Abstract

This paper deals with an optimization problem over a network of agents, where the cost function is the sum of the agents' individual (possibly nonsmooth) objectives and the constraint set is the intersection of local constraints. Most existing methods employing subgradient and consensus steps for solving this problem require the weight matrix associated with the network to be column stochastic or even doubly stochastic, conditions that can be hard to arrange in directed networks. Moreover, known convergence analyses for distributed subgradient methods vary depending on whether the problem is unconstrained or constrained, and whether the local constraint sets are identical or nonidentical and compact. The main goals of this paper are: (i) removing the common column stochasticity requirement; (ii) relaxing the compactness assumption; and (iii) providing a unified convergence analysis. Specifically, assuming that the communication graph is fixed and strongly connected and that the weight matrix is (only) row stochastic, a distributed projected subgradient algorithm and a variation of it are presented to solve the problem for cost functions that are convex and Lipschitz continuous. The key component of the algorithms is to adjust the subgradient of each agent by an estimate of its corresponding entry in the normalized left Perron eigenvector of the weight matrix. These estimates are obtained locally from an augmented consensus iteration that uses the same row stochastic weight matrix and requires very limited global information about the network. Moreover, based on a regularity assumption on the local constraint sets, a unified analysis is given that can be applied to both unconstrained and constrained problems, without assuming compactness of the constraint sets or an interior point in their intersection.
Further, we establish an upper bound on the absolute objective error evaluated at each agent's locally available estimate under a nonincreasing step size sequence. This bound allows us to analyze the convergence rates of both algorithms. (C) 2019 Elsevier Ltd. All rights reserved.
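The scheme described in the abstract (row-stochastic mixing, a local projection, and a subgradient rescaled by each agent's estimate of its left Perron eigenvector entry) can be sketched as follows. This is a minimal NumPy illustration under assumed data: a three-agent digraph, costs f_i(x) = |x - b_i| (convex, Lipschitz, nonsmooth), interval constraint sets, and step size 1/sqrt(k+1); none of these specifics are taken from the paper.

```python
import numpy as np

# Row-stochastic weight matrix of a fixed, strongly connected digraph
# (each row sums to 1; columns do not, so A is not column stochastic).
A = np.array([
    [0.5, 0.5, 0.0],
    [0.0, 0.4, 0.6],
    [0.7, 0.0, 0.3],
])
n = A.shape[0]

b = np.array([-1.0, 0.0, 2.0])      # f_i(x) = |x - b_i|
lo = np.array([-2.0, -1.5, -3.0])   # local constraint sets X_i = [lo_i, hi_i];
hi = np.array([ 3.0,  2.5,  4.0])   # their intersection is [-1.5, 2.5]

x = np.zeros(n)   # agents' decision estimates
Z = np.eye(n)     # augmented consensus state: row i is agent i's vector, initialized to e_i

for k in range(2000):
    alpha = 1.0 / np.sqrt(k + 1)    # nonincreasing step size
    Z = A @ Z                       # Z(k) = A^k -> 1*pi^T, so Z[i, i] -> pi_i
    g = np.sign(x - b)              # a subgradient of f_i at x_i
    # consensus step, rescaled subgradient step, then projection onto X_i
    x = np.clip(A @ x - alpha * g / np.diag(Z), lo, hi)

# Dividing by the estimate of pi_i cancels the bias induced by merely row
# (rather than doubly) stochastic weights: without it, the agents would
# minimize sum_i pi_i * f_i instead of sum_i f_i.
```

For this data the minimizer of the sum of the |x - b_i| over the intersection [-1.5, 2.5] is the median b_2 = 0, and with the diminishing step size all agents' iterates settle near it while agreeing with one another.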
