
Distributed Optimization: Algorithms and Convergence Rates.



Abstract

This thesis develops and analyzes distributed algorithms for convex optimization in networks, where nodes cooperatively minimize the sum of their locally known costs with respect to a global variable of common interest. This setup encompasses highly relevant applications in networked systems, including distributed estimation and source localization in sensor networks, and distributed learning. The existing literature generally offers two types of distributed algorithms for this problem: 1) distributed (consensus-based) gradient methods; and 2) distributed augmented Lagrangian methods; but both types have several limitations. 1) Distributed gradient-like methods have slow practical convergence rates; further, they are usually studied for very general, non-differentiable costs, and the possibilities for speed-ups on more structured functions are not sufficiently explored. 2) Distributed augmented Lagrangian methods generally show good performance in practice, but there is limited understanding of their convergence rates, especially how the rates depend on the underlying network.

This thesis contributes to both classes of algorithms in several ways. We propose a new class of fast, Nesterov-like distributed gradient algorithms. We achieve this by exploiting the structure of convex, differentiable costs with Lipschitz continuous and bounded gradients. We establish their fast convergence rates in terms of the number of per-node communications, per-node gradient evaluations, and the network spectral gap. Furthermore, we show that current distributed gradient methods cannot achieve the rates of our methods under the same function classes. Our distributed Nesterov-like gradient algorithms achieve guaranteed rates for both static and random networks, including scenarios with intermittently failing links or randomized communication protocols.

With respect to distributed augmented Lagrangian methods, we consider both deterministic and randomized distributed methods, subsuming known methods but also introducing novel algorithms. Assuming twice continuously differentiable costs with a bounded Hessian, we establish global linear convergence rates in terms of the number of per-node communications and, unlike most of the existing work, in terms of the network spectral gap. We illustrate our methods with several applications in sensor networks and distributed learning.
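In standard notation for this problem class (the symbols N, f_i, and x are a labeling assumption, not fixed by the abstract), the setup reads

\min_{x \in \mathbb{R}^d} \; f(x) \;=\; \sum_{i=1}^{N} f_i(x),

where f_i is the cost known only to node i and the N nodes exchange information only with their neighbors over the network. The fast-gradient results assume each f_i is convex and differentiable with Lipschitz continuous, bounded gradients; the augmented Lagrangian results assume each f_i is twice continuously differentiable with a bounded Hessian.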
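Purely as an illustration of the first algorithm class mentioned above, the following is a minimal sketch of a generic consensus-based distributed gradient iteration; the weight matrix W, step size alpha, and toy quadratic costs are illustrative assumptions, and this is not the thesis's Nesterov-like method.

import numpy as np

def distributed_gradient_step(x, grads, W, alpha):
    # Generic consensus-based distributed gradient iteration (illustrative only).
    # x: (n, d) array, row i is node i's current estimate.
    # grads: list of callables; grads[i](xi) is the gradient of node i's local cost at xi.
    # W: (n, n) doubly stochastic weight matrix; W[i, j] > 0 only if i and j are
    #    neighbors in the network (or i == j).
    x_mix = W @ x                                    # consensus (neighbor averaging) step
    g = np.stack([grads[i](x_mix[i]) for i in range(x.shape[0])])
    return x_mix - alpha * g                         # local gradient step

# Toy example: quadratic local costs f_i(x) = 0.5 * ||x - b[i]||^2,
# so the sum is minimized at the average of the b[i].
n, d = 4, 2
rng = np.random.default_rng(0)
b = rng.standard_normal((n, d))
grads = [lambda xi, bi=bi: xi - bi for bi in b]
W = np.array([[0.50, 0.25, 0.00, 0.25],              # 4-node ring with equal weights
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
x = np.zeros((n, d))
for _ in range(500):
    x = distributed_gradient_step(x, grads, W, alpha=0.1)
# Each row of x ends up near b.mean(axis=0); with a constant step size the
# iteration only reaches a neighborhood of the exact minimizer, one reason
# faster schemes such as the thesis's Nesterov-like methods are of interest.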

Bibliographic Record

  • Author

    Jakovetic, Dusan

  • Author Affiliation

    Carnegie Mellon University

  • Degree Grantor Carnegie Mellon University
  • Subject Engineering, Electronics and Electrical; Engineering, Computer
  • Degree Ph.D.
  • Year 2013
  • Pages 229 p.
  • Total Pages 229
  • Format PDF
  • Language eng
  • CLC Classification
  • Keywords
