IEEE Transactions on Signal and Information Processing over Networks

Stochastic Optimization From Distributed Streaming Data in Rate-Limited Networks


Abstract

Motivated by machine learning applications in networks of sensors, Internet-of-Things devices, and autonomous agents, we propose techniques for distributed stochastic convex learning from high-rate data streams. The setup involves a network of nodes, each of which has a stream of data arriving at a constant rate, that solve a stochastic convex optimization problem by collaborating with each other over rate-limited communication links. To this end, we present and analyze two algorithms, termed distributed stochastic approximation mirror descent (D-SAMD) and accelerated distributed stochastic approximation mirror descent (AD-SAMD), that are based on two stochastic variants of mirror descent and in which nodes collaborate via approximate averaging of the local noisy subgradients using distributed consensus. Our main contributions are: 1) bounds on the convergence rates of D-SAMD and AD-SAMD in terms of the number of nodes, the network topology, and the ratio of the data streaming and communication rates; and 2) sufficient conditions for order-optimum convergence of these algorithms. In particular, we show that for sufficiently well-connected networks, distributed learning schemes can obtain order-optimum convergence even if the communications rate is small. Furthermore, we find that the use of accelerated methods significantly enlarges the regime in which order-optimum convergence is achieved; this is in contrast to the centralized setting, where accelerated methods usually offer only a modest improvement. Finally, we demonstrate the effectiveness of the proposed algorithms using numerical experiments.
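The abstract only sketches the mechanism at a high level: each node computes a noisy (sub)gradient from its local data stream and mixes it with its neighbors' gradients through a few rounds of distributed consensus averaging before taking a mirror-descent step. The Python/NumPy snippet below is a rough, self-contained illustration of that generic pattern with a Euclidean mirror map (so the mirror step reduces to a plain gradient step) on a synthetic least-squares stream. The function names, step-size schedule, mixing matrix, and data model are illustrative assumptions, not the paper's actual D-SAMD/AD-SAMD specification or its analysis.

```python
import numpy as np

# Illustrative sketch (not the paper's exact D-SAMD): m nodes run stochastic
# mirror descent on a shared convex objective; at each data arrival, every node
# mixes its local noisy (sub)gradient with its neighbors' via a few rounds of
# consensus averaging over a doubly stochastic mixing matrix W.

def consensus_average(local_grads, W, rounds):
    """Approximate network-wide averaging of per-node gradients (gossip steps)."""
    g = local_grads.copy()                    # shape: (m, d)
    for _ in range(rounds):
        g = W @ g                             # one consensus/averaging step
    return g

def distributed_smd_sketch(grad_oracle, W, d, T, rounds_per_sample, step=0.1):
    m = W.shape[0]
    x = np.zeros((m, d))                      # one iterate per node
    for t in range(T):
        noisy_grads = np.stack([grad_oracle(i, x[i]) for i in range(m)])
        avg_grads = consensus_average(noisy_grads, W, rounds_per_sample)
        # Euclidean mirror map: the mirror-descent update is a gradient step
        x = x - (step / np.sqrt(t + 1)) * avg_grads
    return x.mean(axis=0)

# Toy example: least-squares risk with noisy streaming samples at every node.
rng = np.random.default_rng(0)
d, m = 5, 4
x_true = rng.normal(size=d)

def grad_oracle(i, x):
    # Node index i is unused here because, in this toy, all nodes draw from
    # the same distribution; a fresh sample arrives at each query.
    a = rng.normal(size=d)
    y = a @ x_true + 0.1 * rng.normal()
    return (a @ x - y) * a                    # noisy gradient of the local loss

# Ring network with self-loops; W is doubly stochastic.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x_hat = distributed_smd_sketch(grad_oracle, W, d, T=2000, rounds_per_sample=2)
print("estimation error:", np.linalg.norm(x_hat - x_true))
```

The `rounds_per_sample` parameter stands in for the communication-to-streaming rate ratio the abstract refers to: with a fast stream and slow links, only a few consensus rounds fit between sample arrivals, so the averaged gradients are only approximately the network-wide mean.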
