International Conference on Inductive Logic Programming

Stacked Structure Learning for Lifted Relational Neural Networks


Abstract

Lifted Relational Neural Networks (LRNNs) describe relational domains using weighted first-order rules which act as templates for constructing feed-forward neural networks. While previous work has shown that using LRNNs can lead to state-of-the-art results in various ILP tasks, these results depended on hand-crafted rules. In this paper, we extend the framework of LRNNs with structure learning, thus enabling a fully automated learning process. Similarly to many ILP methods, our structure learning algorithm proceeds in an iterative fashion by top-down searching through the hypothesis space of all possible Horn clauses, considering the predicates that occur in the training examples as well as invented soft concepts entailed by the best weighted rules found so far. In the experiments, we demonstrate the ability to automatically induce useful hierarchical soft concepts leading to deep LRNNs with a competitive predictive power.
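The learning procedure described above can be sketched in outline: search top-down through Horn clauses over the current predicate vocabulary, keep the best-scoring weighted rule, and make its head available as an invented soft concept for later iterations, yielding a stacked (deep) rule hierarchy. The sketch below is a minimal illustration under simplifying assumptions, not the paper's actual algorithm: `Clause`, `score`, and the propositional subset-based scoring are all hypothetical stand-ins (a real LRNN system would train rule weights and evaluate predictive accuracy on relational data).

```python
# Minimal sketch of stacked structure learning with predicate invention.
# All names (Clause, refinements, score, ...) are illustrative only.
from dataclasses import dataclass


@dataclass(frozen=True)
class Clause:
    head: str
    body: tuple  # tuple of predicate names in the clause body


def refinements(clause, predicates):
    """Top-down refinement operator: extend the body by one predicate."""
    return [Clause(clause.head, clause.body + (p,))
            for p in predicates if p not in clause.body]


def score(clause, examples):
    """Placeholder score: count examples whose predicates cover the body.
    A real system would learn rule weights and measure accuracy here."""
    return sum(1 for ex in examples if set(clause.body) <= ex)


def learn_structure(base_predicates, examples, n_layers=2, body_len=2):
    """Iteratively induce one rule per layer; each rule's head becomes
    an invented soft concept usable by subsequent layers."""
    predicates = list(base_predicates)
    learned = []
    for layer in range(n_layers):
        # enumerate all clauses with bodies of length body_len
        frontier = [Clause(f"invented_{layer}", ())]
        for _ in range(body_len):
            frontier = [r for c in frontier
                        for r in refinements(c, predicates)]
        best = max(frontier, key=lambda c: score(c, examples))
        learned.append(best)
        # propagate the entailed soft concept into the examples
        for ex in examples:
            if set(best.body) <= ex:
                ex.add(best.head)
        predicates.append(best.head)  # reusable in later layers
    return learned
```

In this toy propositional form, each layer's invented head is simply appended to the predicate vocabulary, so later clauses can stack on earlier ones; in the actual framework the rules are weighted first-order templates compiled into feed-forward neural networks.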
