International Joint Conference on Artificial Intelligence

Speeding Up Inference in Markov Logic Networks by Preprocessing to Reduce the Size of the Resulting Grounded Network

Abstract

Statistical-relational reasoning has received much attention due to its ability to robustly model complex relationships. A key challenge is tractable inference, especially in domains involving many objects, due to the combinatorics involved. One can accelerate inference by using approximation techniques, "lazy" algorithms, etc. We consider Markov Logic Networks (MLNs), which involve counting how often logical formulae are satisfied. We propose a preprocessing algorithm that can substantially reduce the effective size of MLNs by rapidly counting how often the evidence satisfies each formula, regardless of the truth values of the query literals. This is a general preprocessing method that loses no information and can be used for any MLN inference algorithm. We evaluate our algorithm empirically in three real-world domains, greatly reducing the work needed during subsequent inference. Such reduction might even allow exact inference to be performed when sampling methods would be otherwise necessary.
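The preprocessing idea can be illustrated with a toy sketch (the clause, constants, and variable names below are our own illustration, not the paper's code): for each grounding of a formula, if the evidence alone already fixes the formula's truth value, the grounding is counted once and pruned, so the inference engine only sees groundings whose truth still depends on query atoms.

```python
# Toy sketch of evidence-based grounding reduction for an MLN clause.
# Hypothetical example: the clause  Smokes(x) => Cancer(x), written as
# the disjunction  !Smokes(x) v Cancer(x).  Smokes atoms are evidence;
# Cancer atoms are query atoms with unknown truth values.

constants = ["Anna", "Bob", "Carol"]

# Evidence: truth values known before inference (Carol's is unknown).
evidence = {("Smokes", "Anna"): True,
            ("Smokes", "Bob"): False}

satisfied_count = 0   # groundings trivially satisfied by evidence alone
remaining = []        # groundings the inference engine still needs

for c in constants:
    smokes = evidence.get(("Smokes", c))  # None if not in the evidence
    if smokes is False:
        # !Smokes(c) is true, so the clause holds no matter what
        # Cancer(c) turns out to be: count it and prune it.
        satisfied_count += 1
    else:
        # Truth still depends on the query atom Cancer(c): keep it.
        remaining.append("!Smokes(%s) v Cancer(%s)" % (c, c))

print(satisfied_count)  # 1 (only Bob's grounding is fixed by evidence)
print(remaining)        # the two groundings still needing inference
```

The count of pre-satisfied groundings contributes a constant to each formula's weighted satisfaction total, so no information is lost; only the network handed to the downstream inference algorithm shrinks.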

