Annual Meeting of the Association for Computational Linguistics

Augmenting Neural Networks with First-order Logic


Abstract

Today, the dominant paradigm for training neural networks involves minimizing task loss on a large dataset. Using world knowledge to inform a model while retaining the ability to perform end-to-end training remains an open question. In this paper, we present a novel framework for introducing declarative knowledge into neural network architectures in order to guide training and prediction. Our framework systematically compiles logical statements into computation graphs that augment a neural network without extra learnable parameters or manual redesign. We evaluate our modeling strategy on three tasks: machine comprehension, natural language inference, and text chunking. Our experiments show that knowledge-augmented networks can strongly improve over baselines, especially in low-data regimes.
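The abstract's core mechanism, compiling a logic rule into the computation graph itself rather than adding trainable machinery, can be illustrated in a few lines. The PyTorch sketch below is not the paper's exact formulation: it shows one way a rule of the form "if the antecedent holds, prefer output class j" could contribute a fixed, differentiable term to a layer's logits, scaled by a hand-set constant. The names `LogicAugmentedLinear`, `rule_mask`, `rule_truth`, and `rho` are hypothetical, introduced here only for illustration.

```python
import torch
import torch.nn as nn

class LogicAugmentedLinear(nn.Module):
    """Linear layer whose logits are nudged by a soft first-order rule.

    Illustrative sketch only: the rule adds a fixed, differentiable term
    to the computation graph. `rule_mask` is a buffer, not a Parameter,
    so the augmentation adds no extra learnable parameters.
    """

    def __init__(self, in_features: int, out_features: int,
                 rule_mask: torch.Tensor, rho: float = 1.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # rule_mask[j] = 1.0 if output j is the rule's consequent, else 0.0.
        self.register_buffer("rule_mask", rule_mask)
        self.rho = rho  # fixed constant controlling how strongly the rule fires

    def forward(self, x: torch.Tensor, rule_truth: torch.Tensor) -> torch.Tensor:
        # rule_truth: per-example soft truth of the rule's antecedent in [0, 1]
        # (e.g., an indicator feature computed from the input).
        logits = self.linear(x)
        # Boost the consequent's logit in proportion to the antecedent's truth;
        # gradients still flow end-to-end through the unmodified parameters.
        return logits + self.rho * rule_truth.unsqueeze(-1) * self.rule_mask

# Usage: a 3-class classifier with one rule whose consequent is class 1.
layer = LogicAugmentedLinear(16, 3, rule_mask=torch.tensor([0.0, 1.0, 0.0]))
x = torch.randn(4, 16)
antecedent = torch.rand(4)  # soft truth of the rule body for each example
print(layer(x, antecedent).shape)  # torch.Size([4, 3])
```

Because `rule_mask` is registered as a buffer and `rho` is a plain float, the model's parameter count is unchanged, consistent with the abstract's claim of augmentation without extra learnable parameters; how the antecedent's truth value is computed would depend on the task.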
