ACM Transactions on Knowledge Discovery from Data

Fast Approximate Score Computation on Large-Scale Distributed Data for Learning Multinomial Bayesian Networks


Abstract

In this article, we focus on the problem of learning a Bayesian network over distributed data stored in a commodity cluster. Specifically, we address the challenge of computing the scoring function over distributed data in an efficient and scalable manner, which is a fundamental task during learning. While exact score computation can be done using MapReduce-style computation, our goal is to compute approximate scores much faster, with probabilistic error bounds, and in a scalable manner. We propose a novel approach designed to achieve the following: (a) decentralized score computation using the principle of gossiping; (b) lower resource consumption via a probabilistic approach for maintaining scores using the properties of a Markov chain; and (c) effective distribution of tasks during score computation (on large datasets) by synergistically combining well-known hashing techniques. We conduct a theoretical analysis of our approach in terms of the convergence speed of the statistics required for score computation, and of memory and network bandwidth consumption. We also discuss how our approach can efficiently recompute scores when new data become available. We conducted a comprehensive evaluation of our approach and compared it with MapReduce-style computation using datasets of different characteristics on a 16-node cluster. When the MapReduce-style computation provided exact statistics for score computation, it was nearly 10 times slower than our approach. Although it ran faster on randomly sampled datasets than on the full datasets, it performed worse than our approach in terms of accuracy. Our approach achieved high accuracy (below 6% average relative error) in estimating the statistics for approximate score computation on all the tested datasets. In conclusion, our approach provides a feasible tradeoff between computation time and accuracy for fast approximate score computation on large-scale distributed data.
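The abstract does not state the paper's exact scoring function, but standard decomposable scores for multinomial Bayesian networks reduce to contingency counts, which is what "statistics required for score computation" refers to. As a hedged sketch (the BIC form, used here only for illustration), with N_ijk denoting the number of records in which variable X_i takes its k-th value under its j-th parent configuration:

```latex
\mathrm{score}_{\mathrm{BIC}}(G; D)
  = \sum_{i=1}^{n} \sum_{j=1}^{q_i} \sum_{k=1}^{r_i} N_{ijk} \log \frac{N_{ijk}}{N_{ij}}
    \;-\; \frac{\log N}{2} \sum_{i=1}^{n} q_i (r_i - 1),
  \qquad N_{ij} = \sum_{k=1}^{r_i} N_{ijk}
```

Here N is the total number of records, r_i the number of values of X_i, and q_i the number of parent configurations of X_i. Because the counts N_ijk are additive across data partitions, estimating them over the cluster, exactly or approximately, is the core distributed task.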
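Component (a) can be illustrated with the classic push-sum gossip protocol, under which additive statistics such as the counts above converge to their global values at every node without a central coordinator. Below is a minimal single-machine simulation in Python; the uniform partner selection, round count, and example counts are illustrative assumptions, not the paper's protocol:

```python
import random

def push_sum_gossip(local_counts, rounds=50, seed=0):
    """Simulate push-sum gossip: each node repeatedly sends half of its
    (sum, weight) mass to a random peer. Every node's ratio sum/weight
    converges to the network average; scaling by the node count recovers
    the global count needed for score computation."""
    rng = random.Random(seed)
    n = len(local_counts)
    sums = [float(c) for c in local_counts]
    weights = [1.0] * n
    for _ in range(rounds):
        # Each node keeps half of its mass and pushes the other half.
        new_sums = [s / 2.0 for s in sums]
        new_weights = [w / 2.0 for w in weights]
        for i in range(n):
            j = rng.randrange(n)               # random gossip partner
            new_sums[j] += sums[i] / 2.0
            new_weights[j] += weights[i] / 2.0
        sums, weights = new_sums, new_weights
    # Each node's estimate of the global count (average times n).
    return [n * s / w for s, w in zip(sums, weights)]

if __name__ == "__main__":
    local = [120, 80, 100, 95]     # hypothetical per-node counts for one N_ijk cell
    print(push_sum_gossip(local))  # every entry should approach sum(local) = 395
```

Push-sum conserves total mass in every round, so the per-node estimates converge geometrically to the true average; this convergence behavior is what makes probabilistic error bounds on the estimated statistics tractable.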
