IEEE Transactions on Signal Processing

Optimization and Analysis of Distributed Averaging With Short Node Memory

Abstract

Distributed averaging describes a class of network algorithms for the decentralized computation of aggregate statistics. Initially, each node has a scalar data value, and the goal is to compute the average of these values at every node (the so-called average consensus problem). Nodes iteratively exchange information with their neighbors and perform local updates until the value at every node converges to the initial network average. Much previous work has focused on algorithms where each node maintains and updates a single value; every time an update is performed, the previous value is forgotten. Convergence to the average consensus is achieved asymptotically. The convergence rate is fundamentally limited by network connectivity, and it can be prohibitively slow on topologies such as grids and random geometric graphs, even if the update rules are optimized. In this paper, we provide the first theoretical demonstration that adding a local prediction component to the update rule can significantly improve the convergence rate of distributed averaging algorithms. We focus on the case where the local predictor is a linear combination of the node's current and previous values (i.e., two memory taps), and our update rule computes a combination of the predictor and the usual weighted linear combination of values received from neighboring nodes. We derive the optimal mixing parameter for combining the predictor with the neighbors' values, and conduct a theoretical analysis of the improvement in convergence rate that can be achieved using this acceleration methodology. For a chain topology on $N$ nodes, this leads to a factor of $N$ improvement over standard consensus, and for a two-dimensional grid, our approach achieves a factor of $\sqrt{N}$ improvement.
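To make the idea of a two-tap, memory-accelerated update concrete, here is a minimal Python simulation sketch comparing memoryless distributed averaging with an accelerated variant on a chain graph. The heavy-ball-style recursion x(t+1) = (1+β)·W·x(t) − β·x(t−1) used below is one standard instance of mixing the weighted neighbor combination with a linear predictor built from the node's current and previous values; the chain size, the Metropolis weights, and the classical heavy-ball tuning of β are illustrative assumptions, not the optimal mixing parameter derived in the paper.

```python
# Minimal sketch: memoryless distributed averaging vs. a two-tap,
# memory-accelerated update on a chain of n nodes. Illustrative only;
# beta is set by the classical heavy-ball rule, not the paper's
# derived optimal mixing parameter.
import numpy as np

def metropolis_weights(n):
    """Metropolis-Hastings weight matrix for a chain (path) graph on n nodes."""
    W = np.zeros((n, n))
    for i in range(n - 1):
        deg_i = 2 if 0 < i < n - 1 else 1          # interior nodes have degree 2
        deg_j = 2 if 0 < i + 1 < n - 1 else 1
        w = 1.0 / (1 + max(deg_i, deg_j))
        W[i, i + 1] = W[i + 1, i] = w
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))        # rows (and columns) sum to one
    return W

def iterations_to_converge(W, x0, beta=0.0, tol=1e-6, max_iters=500_000):
    """Run x(t+1) = (1+beta) W x(t) - beta x(t-1); beta=0 is memoryless consensus."""
    avg = x0.mean()                     # the target: initial network average
    x_prev, x = x0.copy(), x0.copy()    # two memory taps: x(t-1) and x(t)
    for t in range(1, max_iters + 1):
        x, x_prev = (1 + beta) * (W @ x) - beta * x_prev, x
        if np.max(np.abs(x - avg)) < tol:
            return t
    return max_iters

n = 100
x0 = np.random.default_rng(0).standard_normal(n)
W = metropolis_weights(n)

# Second-largest eigenvalue magnitude of W, then the classical heavy-ball beta.
lam = np.sort(np.abs(np.linalg.eigvalsh(W)))[-2]
beta = ((1.0 - np.sqrt(1.0 - lam**2)) / lam) ** 2

print("memoryless consensus iterations: ", iterations_to_converge(W, x0, beta=0.0))
print("two-tap accelerated iterations:  ", iterations_to_converge(W, x0, beta=beta))
```

Both recursions preserve the network sum (W is doubly stochastic, and the two-tap update mixes iterates whose sums already agree), so each converges to the true average; the difference is the rate, which on a chain improves roughly from order 1/(1−λ₂) iterations to order 1/√(1−λ₂), in line with the factor-of-$N$ gain stated in the abstract.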