International Joint Conference on Artificial Intelligence

Neural Collective Entity Linking Based on Recurrent Random Walk Network Learning

Abstract

Benefiting from the excellent ability of neural networks to learn semantic representations, existing studies on entity linking (EL) have resorted to neural networks to exploit both the local mention-to-entity compatibility and the global interdependence between different EL decisions for target entity disambiguation. However, most neural collective EL methods depend entirely on neural networks to automatically model the semantic dependencies between different EL decisions, and thus lack guidance from external knowledge. In this paper, we propose a novel end-to-end neural network with recurrent random-walk layers for collective EL, which introduces external knowledge to model the semantic interdependence between different EL decisions. Specifically, we first establish a model based on local context features, and then stack random-walk layers to reinforce the evidence for related EL decisions into high-probability decisions, where the semantic interdependence between candidate entities is mainly induced from an external knowledge base. Finally, a semantic regularizer that preserves the consistency of collective EL decisions is incorporated into the conventional objective function, so that the external knowledge base can be fully exploited in collective EL decisions. Experimental results and in-depth analysis on various datasets show that our model achieves better performance than other state-of-the-art models. Our code and data are released at https://github.com/DeepLearnXMU/RRWEL.
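
The recurrent random-walk layers described in the abstract can be read as repeatedly interpolating knowledge-base-driven propagation with the local mention-to-entity evidence. The sketch below is only an illustration of that idea under assumed names, not the paper's exact formulation: `local_scores`, `transition`, `num_layers`, and `restart` are hypothetical, with `transition` standing in for a row-stochastic entity-to-entity relatedness matrix induced from the knowledge base.

```python
import numpy as np

def random_walk_layers(local_scores: np.ndarray,
                       transition: np.ndarray,
                       num_layers: int = 3,
                       restart: float = 0.5) -> np.ndarray:
    """Propagate entity-linking evidence through stacked random-walk layers (illustrative sketch)."""
    scores = local_scores.copy()
    for _ in range(num_layers):
        # Each layer mixes knowledge-base propagation (scores @ transition)
        # with the local mention-to-entity evidence, so candidates coherent
        # with high-probability decisions get reinforced.
        scores = restart * local_scores + (1.0 - restart) * scores @ transition
        # Renormalize per mention (a no-op when `transition` is row-stochastic).
        scores /= scores.sum(axis=-1, keepdims=True)
    return scores

# Toy usage: 2 mentions, 3 candidate entities in total.
local = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3]])
kb_transition = np.full((3, 3), 1.0 / 3.0)  # placeholder for a KB-derived relatedness matrix
print(random_walk_layers(local, kb_transition))
```

In the model described by the abstract, such propagation layers are part of an end-to-end network trained together with the local model and a semantic regularizer that preserves the consistency of collective EL decisions.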
