Conference proceedings: Probabilistic Graphical Models

Learning Bayesian Network Structures When Discrete and Continuous Variables Are Present


Abstract

In a typical database, some fields in each record are discrete and others continuous. We consider learning Bayesian network structures when both discrete and continuous variables are present. Thus far, most previous results have assumed that all variables are either discrete or continuous. We propose computing a new Bayesian score for each subset of discrete and continuous variables, and obtaining a structure that maximizes the posterior probability given the examples. We evaluate the proposed algorithm experimentally and find that the error probability and Kullback-Leibler divergence diminish as the sample size n grows, while the computation increases only linearly in the logarithm of the number of bins in the histograms that approximate the density.
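To illustrate the general approach the abstract describes, here is a minimal, hypothetical sketch: continuous variables are histogram-binned, each candidate parent set is scored with a Bayesian (Dirichlet-multinomial) marginal likelihood, and the score-maximizing parent sets are selected. The binning scheme, the BD-style score, and the fixed variable order are all simplifying assumptions for illustration, not the paper's actual score or search procedure.

```python
import itertools
import math
import random
from collections import defaultdict

def discretize(values, n_bins):
    """Equal-width binning: approximate a continuous density by a histogram."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

def family_score(child, parents, rows, arity, alpha=1.0):
    """Dirichlet-multinomial log marginal likelihood of `child` given `parents`
    (a standard Bayesian-Dirichlet family score; a stand-in for the paper's score)."""
    counts = defaultdict(lambda: defaultdict(int))
    for row in rows:
        counts[tuple(row[p] for p in parents)][row[child]] += 1
    r = arity[child]
    total = 0.0
    for cell in counts.values():
        n = sum(cell.values())
        total += math.lgamma(alpha * r) - math.lgamma(alpha * r + n)
        total += sum(math.lgamma(alpha + c) - math.lgamma(alpha)
                     for c in cell.values())
    return total

def learn_structure(rows, arity, order):
    """For each variable, pick the score-maximizing parent set among its
    predecessors in `order` (the fixed order keeps the graph acyclic)."""
    structure = {}
    for i, v in enumerate(order):
        preds = order[:i]
        subsets = [s for k in range(len(preds) + 1)
                   for s in itertools.combinations(preds, k)]
        structure[v] = max(subsets, key=lambda s: family_score(v, s, rows, arity))
    return structure

# Toy data: discrete X drives continuous Y; Y is histogram-binned before scoring.
random.seed(0)
xs = [random.randint(0, 1) for _ in range(500)]
ys = [2.0 * x + random.gauss(0.0, 0.3) for x in xs]
rows = [{0: x, 1: b} for x, b in zip(xs, discretize(ys, 4))]
structure = learn_structure(rows, {0: 2, 1: 4}, order=(0, 1))
```

On this toy data the learned structure makes the discrete variable a parent of the binned continuous one, matching how the data were generated; the number of bins controls the trade-off between density-approximation accuracy and the size of the count tables being scored.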
