European Conference on Principles and Practice of Knowledge Discovery in Databases

Statistical Relational Learning: An Inductive Logic Programming Perspective



Abstract

In the past few years there has been a lot of work at the intersection of probability theory, logic programming and machine learning [14,18,13,9,6,1,11]. This work is known under the names of statistical relational learning [7,5], probabilistic logic learning [4], or probabilistic inductive logic programming. Whereas most existing work has started from a probabilistic learning perspective and extended probabilistic formalisms with relational aspects, I shall take a different perspective: I shall start from inductive logic programming and study how inductive logic programming formalisms, settings and techniques can be extended to deal with probabilistic issues. This tradition has already contributed a rich variety of valuable formalisms and techniques, including probabilistic Horn abduction by David Poole, PRISM by Sato, stochastic logic programs by Muggleton [13] and Cussens [2], Bayesian logic programs [10,8] by Kersting and De Raedt, and Logical Hidden Markov Models [11]. The main contribution of this talk is the introduction of three probabilistic inductive logic programming settings, derived from the learning from entailment, learning from interpretations and learning from proofs settings of the field of inductive logic programming [3]. Each of these settings contributes different notions of probabilistic logic representations, examples and probability distributions. The first setting, probabilistic learning from entailment, is incorporated in the well-known PRISM system [19] and in Cussens's Failure Adjusted Maximisation approach to parameter estimation in stochastic logic programs [2]. A novel system that was recently developed and that fits this paradigm is the nFOIL system [12]. It combines key principles of the well-known inductive logic programming system FOIL [15] with the naïve Bayes approach. In probabilistic learning from entailment, examples are ground facts that should be probabilistically entailed by the target logic program.
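To make the first setting concrete, the following minimal sketch (with invented predicates and probabilities, not taken from any of the cited systems) shows the semantics underlying probabilistic entailment of a ground fact: a set of independent probabilistic facts induces a distribution over worlds, and the probability of a query is the total probability of the worlds in which it is entailed.

```python
import itertools

# Hypothetical probabilistic facts: each is independently true with
# the given probability (illustrative, not from the cited systems).
prob_facts = {"edge(a,b)": 0.5, "edge(b,c)": 0.5}

def entails(world):
    # A single clause: path(a,c) :- edge(a,b), edge(b,c).
    # The query path(a,c) holds in a world iff both edges hold.
    return "edge(a,b)" in world and "edge(b,c)" in world

def prob_entailed():
    # Sum the probability of every world in which the query is entailed.
    facts = list(prob_facts)
    total = 0.0
    for bits in itertools.product([True, False], repeat=len(facts)):
        world = {f for f, b in zip(facts, bits) if b}
        p = 1.0
        for f, b in zip(facts, bits):
            p *= prob_facts[f] if b else 1.0 - prob_facts[f]
        if entails(world):
            total += p
    return total

# prob_entailed() -> 0.25
```

Parameter estimation in this setting then amounts to adjusting the fact probabilities so that the entailment probabilities of the training examples match their observed frequencies; enumerating all worlds, as above, is of course only feasible for toy programs.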
The second setting, probabilistic learning from interpretations, is incorporated in Bayesian logic programs [10,8], which integrate Bayesian networks with logic programs. This setting is also adopted by [6]. Examples in this setting are Herbrand interpretations that should be a probabilistic model for the target theory. The third setting, learning from proofs [17], is novel. It is motivated by the learning of stochastic context-free grammars from tree banks. In this setting, examples are proof trees that should be probabilistically provable from the unknown stochastic logic program. The sketched settings (and the instances presented) are by no means the only possible settings for probabilistic inductive logic programming, but I hope they provide useful insights into the state of the art of this exciting field.
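The second setting can be illustrated with a hedged sketch of maximum-likelihood parameter estimation from complete interpretations: each example is a full truth assignment to the ground atoms, so conditional probabilities can be estimated by simple counting, in the spirit of parameter learning for Bayesian-network-style representations. The atoms and data below are invented for illustration.

```python
# Each example is a complete Herbrand interpretation: a truth
# assignment to every ground atom (here just two atoms).
interpretations = [
    {"burglary": True, "alarm": True},
    {"burglary": True, "alarm": False},
    {"burglary": False, "alarm": False},
]

def estimate(child, parent):
    # Maximum-likelihood estimate of P(child=true | parent=true)
    # by relative frequency over the example interpretations.
    joint = sum(1 for i in interpretations if i[parent] and i[child])
    marginal = sum(1 for i in interpretations if i[parent])
    return joint / marginal

# estimate("alarm", "burglary") -> 0.5
```

Because the interpretations are complete, no inference is needed during counting; with partially observed interpretations one would instead resort to an EM-style procedure.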
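The third setting parallels the way stochastic context-free grammar probabilities are read off a tree bank: each example is a proof tree, and the probability of a rule is its relative frequency among all rules with the same head. The grammar and trees below are invented for illustration; this is a sketch of the counting principle, not of any cited system.

```python
from collections import Counter

# Each proof tree is flattened to the multiset of rules it uses;
# a rule is a (head, body) pair.
proof_trees = [
    [("s", ("np", "vp")), ("np", ("noun",)), ("vp", ("verb",))],
    [("s", ("np", "vp")), ("np", ("det", "noun")), ("vp", ("verb",))],
]

def rule_probabilities(trees):
    # Relative frequency of each rule among rules sharing its head.
    counts = Counter(rule for tree in trees for rule in tree)
    head_totals = Counter()
    for (head, _), n in counts.items():
        head_totals[head] += n
    return {rule: n / head_totals[rule[0]] for rule, n in counts.items()}

probs = rule_probabilities(proof_trees)
# probs[("np", ("noun",))] -> 0.5
```

The key difference from learning from entailment is that the proofs themselves are observed, so the sufficient statistics (rule counts) are available directly rather than having to be summed over all proofs of each example.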
