Journal: Machine Learning
Lifted generative learning of Markov logic networks



Abstract

Markov logic networks (MLNs) are a well-known statistical relational learning formalism that combines Markov networks with first-order logic. MLNs attach weights to formulas in first-order logic. Learning MLNs from data is a challenging task, as it requires searching through the huge space of possible theories. Additionally, evaluating a theory's likelihood requires learning the weights of all formulas in the theory. This in turn requires performing probabilistic inference, which, in general, is intractable in MLNs. Lifted inference speeds up probabilistic inference by exploiting symmetries in a model. We explore how to use lifted inference when learning MLNs. Specifically, we investigate generative learning, where the goal is to maximize the likelihood of the model given the data. First, we provide a generic algorithm for learning maximum-likelihood weights that works with any exact lifted inference approach. In contrast, most existing approaches optimize approximate measures such as the pseudo-likelihood. Second, we provide a concrete parameter learning algorithm based on first-order knowledge compilation. Third, we propose a structure learning algorithm that learns liftable MLNs; this is the first MLN structure learning algorithm that exactly optimizes the likelihood of the model. Finally, we perform an empirical evaluation on three real-world datasets. Our parameter learning algorithm produces more accurate models than several competing approximate approaches, both in terms of test-set log-likelihood and on prediction tasks. Furthermore, our tractable learner outperforms intractable models on prediction tasks, suggesting that liftable models form a powerful hypothesis space, which may be sufficient for many standard learning problems.
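To make the abstract's setup concrete: an MLN defines a distribution over possible worlds as P(world) ∝ exp(Σᵢ wᵢ·nᵢ(world)), where nᵢ counts the true groundings of weighted formula i. The sketch below (illustrative only; the two-person domain, the single formula Smokes(x) → Cancer(x), and the weight value are assumptions, not from the paper) shows why exact likelihood evaluation is expensive in general — the partition function Z sums over all 2^(number of ground atoms) worlds, which is exactly the cost that lifted inference exploits symmetries to avoid:

```python
import itertools
import math

# Toy MLN: one weighted first-order formula, Smokes(x) -> Cancer(x),
# grounded over a two-person domain. P(world) ∝ exp(w * n(world)),
# where n(world) is the number of true groundings of the formula.
people = ["anna", "bob"]
w = 1.5  # illustrative formula weight (an assumption)

atoms = [(pred, p) for pred in ("Smokes", "Cancer") for p in people]

def n_true_groundings(world):
    """Count people for whom Smokes(x) -> Cancer(x) holds."""
    return sum(1 for p in people
               if (not world[("Smokes", p)]) or world[("Cancer", p)])

def all_worlds():
    """Enumerate every truth assignment to the ground atoms (2^4 here)."""
    for vals in itertools.product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, vals))

# Exact partition function: exponential in the number of ground atoms.
Z = sum(math.exp(w * n_true_groundings(wld)) for wld in all_worlds())

def log_likelihood(world):
    """Exact log-likelihood of one observed world under the toy MLN."""
    return w * n_true_groundings(world) - math.log(Z)

observed = {("Smokes", "anna"): True, ("Cancer", "anna"): True,
            ("Smokes", "bob"): False, ("Cancer", "bob"): False}
print(log_likelihood(observed))
```

Because the two people are interchangeable, the distribution factorizes per person, and Z equals (3·e^w + 1)² — the kind of symmetry a lifted approach computes in closed form instead of enumerating all worlds.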
