5th Workshop on Automated Knowledge Base Construction (AKBC)

Learning Knowledge Base Inference with Neural Theorem Provers

Abstract

In this paper we present a proof-of-concept implementation of Neural Theorem Provers (NTPs), end-to-end differentiable counterparts of discrete theorem provers that perform first-order inference on vector representations of symbols using function-free, possibly parameterized, rules. As such, NTPs follow a long tradition of neural-symbolic approaches to automated knowledge base inference, but differ in that they are differentiable with respect to representations of symbols in a knowledge base and can thus learn representations of predicates, constants, as well as rules of predefined structure. Furthermore, they still allow us to incorporate domain-knowledge provided as rules. The NTP presented here is realized via a differentiable version of the backward chaining algorithm. It operates on substitution representations and is able to learn complex logical dependencies from training facts of small knowledge bases.
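The key move in the differentiable backward chaining described above is replacing symbolic unification with a soft match between vector representations of symbols, so that two differently named predicates can still unify to a nonzero degree. A minimal sketch of this idea (NumPy; the RBF-style similarity and min/max aggregation follow the general NTP formulation, but the embeddings, names, and toy knowledge base below are purely illustrative, not the paper's actual implementation):

```python
import numpy as np

def soft_unify(a, b):
    """Soft unification score between two symbol embeddings.

    An RBF kernel over the Euclidean distance: 1.0 for identical
    vectors, decaying smoothly toward 0 as they diverge. Because it
    is differentiable, gradients can flow back into the embeddings.
    """
    return float(np.exp(-np.linalg.norm(a - b) ** 2))

# Toy, untrained embeddings for predicate symbols (illustrative only).
emb = {
    "grandpaOf":     np.array([1.0, 0.0]),
    "grandfatherOf": np.array([0.9, 0.1]),  # near-synonym of grandpaOf
    "parentOf":      np.array([0.0, 1.0]),  # unrelated predicate
}

def proof_score(goal_pred, fact_preds):
    """Score a goal against known facts, backward-chaining style.

    Within one proof path, scores would be combined with min (a chain
    is only as strong as its weakest unification); across alternative
    proofs, with max. Here each fact is a one-step proof, so we take
    the max over soft-unification scores.
    """
    return max(soft_unify(emb[goal_pred], emb[f]) for f in fact_preds)

score = proof_score("grandpaOf", ["grandfatherOf", "parentOf"])
```

Even though `grandpaOf` never appears in the facts, the goal receives a high score via its near-synonym `grandfatherOf`; training on such proof scores is what lets NTPs learn representations of predicates and constants from facts alone.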
