European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD)

Combining Subjective Probabilities and Data in Training Markov Logic Networks



Abstract

Markov logic is a rich language that allows one to specify a knowledge base as a set of weighted first-order logic formulas, and to define a probability distribution over truth assignments to ground atoms using this knowledge base. Usually, the weight of a formula cannot be related to the probability of the formula without taking into account the weights of the other formulas. In general, this is not an issue, since the weights are learned from training data. However, in many domains (e.g. healthcare, dependable systems, etc.), little or no training data may be available, but one has access to a domain expert whose knowledge is available in the form of subjective probabilities. Within the framework of Bayesian statistics, we present a formalism for using a domain expert's knowledge for weight learning. Our approach defines priors that are different from, and more general than, previously used Gaussian priors over weights. We show how one can learn weights in an MLN by combining subjective probabilities and training data, without requiring that the domain expert provide consistent knowledge. In addition, we provide a formalism for capturing conditional subjective probabilities, which are often easier to obtain and more reliable than unconditional probabilities. We demonstrate the effectiveness of our approach by extensive experiments in a domain that models failure dependencies in a cyber-physical system. Moreover, we demonstrate the advantages of our proposed prior over non-zero-mean Gaussian priors in a commonly cited social network MLN testbed.
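The abstract describes combining a data likelihood with a prior derived from an expert's subjective probability for a formula. The sketch below is a minimal toy illustration of that idea, not the paper's actual prior or algorithm: it builds a two-atom log-linear model P(x) ∝ exp(Σᵢ wᵢ nᵢ(x)), where nᵢ(x) counts whether formula i holds in world x, and fits the weights by penalized maximum likelihood. The training worlds in `DATA`, the expert probability `P_EXPERT`, the penalty strength `LAM`, and the squared-error penalty form are all hypothetical choices made for the example.

```python
import itertools
import math

# Toy MLN over two ground atoms (A, B) with two formulas:
#   f1(x) = 1 iff A is true
#   f2(x) = 1 iff (A -> B) holds, i.e. (not A) or B
WORLDS = list(itertools.product([0, 1], repeat=2))  # all (A, B) worlds

def features(world):
    a, b = world
    return [a, 1 - a + a * b]  # [n1(x), n2(x)]

def world_probs(w):
    # P(x) = exp(sum_i w_i * n_i(x)) / Z, computed by full enumeration.
    scores = [math.exp(sum(wi * ni for wi, ni in zip(w, features(x))))
              for x in WORLDS]
    z = sum(scores)
    return [s / z for s in scores]

def formula_prob(w, idx):
    # Probability that formula `idx` is satisfied under the MLN distribution.
    probs = world_probs(w)
    return sum(p for p, x in zip(probs, WORLDS) if features(x)[idx] == 1)

# Hypothetical training data: observed worlds (A, B).
DATA = [(1, 1), (1, 1), (1, 0), (0, 1)]

# Hypothetical expert belief: P(A -> B) should be about 0.9.
P_EXPERT, LAM = 0.9, 5.0

def objective(w):
    # Negative log-likelihood of the data, plus an illustrative penalty
    # pulling the model's probability of f2 toward the expert's value.
    probs = world_probs(w)
    nll = -sum(math.log(probs[WORLDS.index(x)]) for x in DATA)
    return nll + LAM * (formula_prob(w, 1) - P_EXPERT) ** 2

def train(steps=500, lr=0.1, eps=1e-5):
    # Plain gradient descent with central finite differences; fine for a
    # two-weight toy, not how one would train a real MLN.
    w = [0.0, 0.0]
    for _ in range(steps):
        grad = []
        for i in range(len(w)):
            wp, wm = w[:], w[:]
            wp[i] += eps
            wm[i] -= eps
            grad.append((objective(wp) - objective(wm)) / (2 * eps))
        w = [wi - lr * g for wi, g in zip(w, grad)]
    return w
```

In this toy setup the empirical frequency of A → B in `DATA` is 3/4, so the learned distribution ends up assigning f2 a probability between 0.75 and the expert's 0.9, with `LAM` controlling how strongly the subjective probability pulls against the data, which mirrors the trade-off the abstract describes.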
