Distributed Optimization Algorithms in Large-Scale Directed Networks.


Abstract

In the interconnected world of today, distributed computation and optimization over large-scale multi-agent networks are ubiquitous, with applications in fields ranging from machine learning and signal processing to computational finance. The increasing interest in distributed optimization is further motivated by the emergence of big data applications. On one hand, large datasets push centralized computation to its limits, and the need for distributed algorithms arises quite naturally. On the other hand, transmitting data collected in a distributed manner to a central location is either too costly or violates privacy. In this thesis, we aim to solve the distributed optimization problem over multi-agent networks, where each agent holds private local information and a local objective function. The goal is to have the agents collaborate to minimize the sum of these local objective functions. Existing algorithms mostly address this problem under the assumption that the underlying multi-agent network is connected and undirected, i.e., if agent i sends information to agent j, then agent j also sends information to agent i. The contribution of this work lies in relaxing this assumption on the network topology: we assume that the communication between agents is described by a directed graph. We propose four algorithms: Directed-Distributed Subgradient Descent (D-DSD), Directed-Distributed Projection Subgradient (D-DPS), DEXTRA, and ADD-OPT. D-DSD and D-DPS address the case where the objective functions are convex but not necessarily differentiable; both achieve a convergence rate of O(ln k/√k), where k is the number of iterations, and D-DPS generalizes D-DSD from the unconstrained case to the constrained case. When the objective functions are additionally assumed to be smooth, i.e., convex and differentiable, we propose DEXTRA and ADD-OPT, which accelerate convergence to a linear rate O(τ^k) for 0 < τ < 1. Moreover, ADD-OPT supports a wider and more realistic range of step-sizes than DEXTRA. Under the appropriate assumptions, all four algorithms achieve the best known convergence rates for their respective problem classes.
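
To make the abstract's setting concrete, the following is a minimal Python/NumPy sketch of a push-sum-style subgradient iteration over a directed ring, the general idea behind this family of directed-network methods. It is only an illustration: the five-agent ring, the local objectives f_i(x) = |x - b_i|, and the step-size schedule are assumptions made for the example, and the updates are not the thesis's exact D-DSD, D-DPS, DEXTRA, or ADD-OPT rules.

```python
# Illustrative sketch only (assumed graph, objectives, and step sizes);
# NOT the exact update rules of D-DSD / D-DPS / DEXTRA / ADD-OPT.
import numpy as np

n = 5                                          # number of agents
b = np.array([1.0, 2.0, 3.0, 4.0, 10.0])       # local data; sum_i |x - b_i| is minimized at median(b) = 3
out_neighbors = {i: [i, (i + 1) % n] for i in range(n)}   # directed ring with self-loops (strongly connected)
out_degree = {i: len(out_neighbors[i]) for i in range(n)}

x = np.zeros(n)    # push-sum numerator state, one entry per agent
y = np.ones(n)     # push-sum weight (denominator) state

for t in range(1, 5001):
    alpha = 1.0 / np.sqrt(t)                   # diminishing step size (illustrative choice)
    x_new, y_new = np.zeros(n), np.zeros(n)
    # each agent j pushes an equal share of its states to its out-neighbors
    for j in range(n):
        for i in out_neighbors[j]:
            x_new[i] += x[j] / out_degree[j]
            y_new[i] += y[j] / out_degree[j]
    z = x_new / y_new                          # ratio corrects the bias of directed (non-doubly-stochastic) mixing
    g = np.sign(z - b)                         # subgradient of the non-differentiable local objective |z - b_i|
    x = x_new - alpha * g                      # subgradient step on the numerator
    y = y_new

print("agent estimates:", np.round(z, 2))      # all entries end up close to 3.0, the global minimizer
```

Because only out-degrees are used in the mixing step, no doubly stochastic weight matrix is required, which is what makes this style of update suitable for directed communication graphs.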

Bibliographic Information

  • Author: Xi, Chenguang
  • Author affiliation: Tufts University
  • Degree-granting institution: Tufts University
  • Subject: Mathematics
  • Degree: Ph.D.
  • Year: 2017
  • Pagination: 174 p.
  • Total pages: 174
  • Original format: PDF
  • Language: eng (English)
  • CLC classification:
  • Keywords:
  • Date added: 2022-08-17 11:53:20
