Biennial conference of the Canadian Society for Computational Studies of Intelligence

A Hybrid Convergent Method for Learning Probabilistic Networks



Abstract

During the past few years, a variety of methods have been developed for learning probabilistic networks from data, among which heuristic single-link forward or backward searches are widely adopted to reduce the search space. A major drawback of these search heuristics is that they cannot guarantee convergence to the right network even when a sufficiently large data set is available. This motivates us to explore a new algorithm that does not suffer from this problem. In this paper, we first identify an asymptotic property of different score metrics, based on which we then present a hybrid learning method that can be proved to be asymptotically convergent. We show that the algorithm, when employing the information criterion and the Bayesian metric, is guaranteed to converge in a very general way and is computationally feasible. An evaluation of the algorithm with simulated data is given to demonstrate its capability.
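
To make the kind of heuristic the abstract critiques concrete, below is a minimal Python sketch of a BIC-scored single-link forward search over discrete data. It is not the paper's hybrid algorithm; the function names (bic_node_score, greedy_forward_search, creates_cycle) and the toy data are illustrative assumptions, and the scoring follows the standard BIC definition rather than anything taken from the paper.

# A minimal illustrative sketch (not the paper's hybrid method): a BIC-scored
# single-link forward search, the kind of greedy heuristic the abstract says
# cannot guarantee convergence to the true network. All names are hypothetical.
import itertools
import math
from collections import defaultdict

def bic_node_score(data, child, parents):
    """BIC contribution of one node given its parent set.

    data is a list of tuples of discrete values, one tuple per sample.
    """
    n = len(data)
    counts = defaultdict(lambda: defaultdict(int))  # parent config -> child value -> count
    child_vals, parent_cfgs = set(), set()
    for row in data:
        cfg = tuple(row[p] for p in parents)
        counts[cfg][row[child]] += 1
        child_vals.add(row[child])
        parent_cfgs.add(cfg)
    loglik = 0.0
    for cfg, dist in counts.items():
        total = sum(dist.values())
        for c in dist.values():
            loglik += c * math.log(c / total)
    # free parameters: (|child states| - 1) per observed parent configuration
    k = (len(child_vals) - 1) * max(len(parent_cfgs), 1)
    return loglik - 0.5 * k * math.log(n)

def creates_cycle(arcs, new_arc):
    """True if adding new_arc = (u, v) would close a directed cycle."""
    u, v = new_arc
    stack, seen = [v], set()
    while stack:
        node = stack.pop()
        if node == u:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(b for a, b in arcs if a == node)
    return False

def greedy_forward_search(data, n_vars):
    """Repeatedly add the single arc with the best BIC gain until none improves the score."""
    arcs = set()
    parents = {v: [] for v in range(n_vars)}
    score = {v: bic_node_score(data, v, parents[v]) for v in range(n_vars)}
    while True:
        best_arc, best_gain = None, 0.0
        for u, v in itertools.permutations(range(n_vars), 2):
            if (u, v) in arcs or creates_cycle(arcs, (u, v)):
                continue
            gain = bic_node_score(data, v, parents[v] + [u]) - score[v]
            if gain > best_gain:
                best_arc, best_gain = (u, v), gain
        if best_arc is None:
            return arcs
        u, v = best_arc
        arcs.add(best_arc)
        parents[v].append(u)
        score[v] += best_gain

# Toy data over two binary variables where X1 mostly copies X0; the search
# recovers a single arc between them (orientation is not identifiable here).
data = [(0, 0), (0, 0), (0, 0), (0, 1), (1, 1), (1, 1), (1, 1), (1, 1)]
print(greedy_forward_search(data, n_vars=2))

Because this greedy single-arc search can stop at a local optimum, it may not converge to the true structure even as the sample size grows, which is the drawback the abstract identifies; the paper's contribution is a hybrid method, built on an asymptotic property of the score metrics, that is provably asymptotically convergent.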
