IEEE Transactions on Signal Processing

Asynchronous Distributed ADMM for Large-Scale Optimization—Part I: Algorithm and Convergence Analysis



Abstract

Aiming at solving large-scale optimization problems, this paper studies distributed optimization methods based on the alternating direction method of multipliers (ADMM). By formulating the optimization problem as a consensus problem, the ADMM can be used to solve the consensus problem in a fully parallel fashion over a computer network with a star topology. However, traditional synchronized computation does not scale well with the problem size, as the speed of the algorithm is limited by the slowest workers. This is particularly true in a heterogeneous network where the computing nodes experience different computation and communication delays. In this paper, we propose an asynchronous distributed ADMM (AD-ADMM), which can effectively improve the time efficiency of distributed optimization. Our main interest lies in analyzing the convergence conditions of the AD-ADMM, under the popular partially asynchronous model, which is defined based on a maximum tolerable delay of the network. Specifically, by considering general and possibly non-convex cost functions, we show that the AD-ADMM is guaranteed to converge to the set of Karush–Kuhn–Tucker (KKT) points as long as the algorithm parameters are chosen appropriately according to the network delay. We further illustrate that the asynchrony of the ADMM has to be handled with care, as slightly modifying the implementation of the AD-ADMM can jeopardize the algorithm convergence, even under the standard convex setting.
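As background for the consensus reformulation mentioned in the abstract, the following sketch shows the standard global-consensus problem and the synchronous consensus-ADMM iteration it induces over a star network with one master and N workers. The notation (local costs f_i, local copies x_i, consensus variable z, multipliers \lambda_i, penalty \rho) is the generic one from the ADMM literature and is used here only for illustration; the exact AD-ADMM updates and parameter conditions are those given in the paper itself.

\[
\min_{x_1,\dots,x_N,\,z}\ \sum_{i=1}^{N} f_i(x_i)
\quad \text{subject to}\quad x_i = z,\ \ i = 1,\dots,N,
\]

with the (synchronous) iteration

\[
x_i^{k+1} = \arg\min_{x_i}\ f_i(x_i) + \langle \lambda_i^{k},\, x_i - z^{k}\rangle + \tfrac{\rho}{2}\,\|x_i - z^{k}\|^2 \qquad \text{(worker } i\text{)},
\]
\[
z^{k+1} = \frac{1}{N}\sum_{i=1}^{N}\Big(x_i^{k+1} + \tfrac{1}{\rho}\,\lambda_i^{k}\Big) \qquad \text{(master)},
\]
\[
\lambda_i^{k+1} = \lambda_i^{k} + \rho\,\big(x_i^{k+1} - z^{k+1}\big) \qquad \text{(worker } i\text{)}.
\]

In the asynchronous variant studied in the paper, the master does not wait for all N workers before updating z, so some of the (x_i, \lambda_i) pairs it uses may be outdated; the partially asynchronous model bounds this staleness by a maximum tolerable delay, and the convergence conditions tie the algorithm parameters (such as \rho) to that delay bound.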


