IEEE Transactions on Information Theory

Information-Theoretic Lower Bounds for Distributed Function Computation



Abstract

We derive information-theoretic converses (i.e., lower bounds) for the minimum time required by any algorithm for distributed function computation over a network of point-to-point channels with finite capacity, where each node of the network initially has a random observation and aims to compute a common function of all observations to a given accuracy with a given confidence by exchanging messages with its neighbors. We obtain the lower bounds on computation time by examining the conditional mutual information between the actual function value and its estimate at an arbitrary node, given the observations in an arbitrary subset of nodes containing that node. The main contributions include the following. First, a lower bound on the conditional mutual information via so-called small ball probabilities, which captures the dependence of the computation time on the joint distribution of the observations at the nodes, the structure of the function, and the accuracy requirement. For linear functions, the small ball probability can be expressed by Lévy concentration functions of sums of independent random variables, for which tight estimates are available that lead to strict improvements over existing lower bounds on computation time. Second, an upper bound on the conditional mutual information via strong data processing inequalities, which complements and strengthens existing cutset-capacity upper bounds. Finally, a multi-cutset analysis that quantifies the loss (dissipation) of the information needed for computation as it flows across a succession of cutsets in the network. This analysis is based on reducing a general network to a line network with bidirectional links and self-links, and the results highlight the dependence of the computation time on the diameter of the network, a fundamental parameter that is missing from most of the existing lower bounds on computation time.
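The first contribution bounds the conditional mutual information from below via small ball probabilities, which for a real-valued random variable S are captured by the Lévy concentration function Q(S; ε) = sup_x P(x ≤ S ≤ x + ε). As an illustrative sketch only (the function names and the Rademacher example are ours, not from the paper), the concentration function of a sum of independent ±1 variables can be computed exactly from its probability mass function:

```python
import math

def levy_concentration(pmf, eps):
    """Levy concentration function Q(S; eps) = sup_x P(x <= S <= x + eps)
    for a discrete random variable given as a dict {value: probability}.
    For a discrete law, the supremum is attained by a window starting at an atom."""
    best = 0.0
    for x in pmf:
        mass = sum(p for v, p in pmf.items() if x <= v <= x + eps)
        best = max(best, mass)
    return best

def sum_of_rademacher_pmf(n):
    """Exact PMF of S = X1 + ... + Xn with Xi i.i.d. uniform on {-1, +1}.
    k of the n terms equal +1 with probability C(n, k) / 2^n, giving S = 2k - n."""
    return {2 * k - n: math.comb(n, k) / 2 ** n for k in range(n + 1)}

pmf = sum_of_rademacher_pmf(10)
# Q(S; 0) is the largest atom, here P(S = 0) = C(10, 5) / 2^10
print(levy_concentration(pmf, 0))  # 0.24609375
```

For a fixed ε, Q(S; ε) decays as the number of independent summands grows, which is the mechanism behind the claimed dependence of the lower bound on the joint distribution of the observations.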
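The second contribution upper-bounds the conditional mutual information via strong data processing inequalities (SDPIs): passing two input distributions through a noisy channel contracts their divergence by a channel-dependent factor η < 1. A minimal numerical check of this contraction for the binary symmetric channel BSC(δ), whose KL contraction coefficient has the known closed form (1 − 2δ)², follows; the example and helper names are ours, not from the paper.

```python
import math
import random

def kl(p, q):
    """KL divergence D(Bern(p) || Bern(q)) in nats."""
    return sum(a * math.log(a / b) for a, b in ((p, q), (1 - p, 1 - q)) if a > 0)

def bsc(p, delta):
    """Output Bernoulli parameter after pushing Bern(p) through a binary
    symmetric channel with crossover probability delta."""
    return p * (1 - delta) + (1 - p) * delta

delta = 0.1
eta = (1 - 2 * delta) ** 2  # KL contraction coefficient of BSC(delta)
random.seed(0)
for _ in range(1000):
    p, q = random.random(), random.random()
    # SDPI: divergence at the channel output is at most eta times the input divergence
    assert kl(bsc(p, delta), bsc(q, delta)) <= eta * kl(p, q) + 1e-9
print("SDPI contraction verified for BSC with delta =", delta)
```

An ordinary data processing inequality only gives η ≤ 1; the strict gap η < 1 is what lets the multi-cutset analysis accumulate information loss across successive cutsets and bring the network diameter into the lower bound.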
