Quality Control, Transactions

A Novel Approach for Analyzing Entity Linking Between Words and Entities for a Knowledge Base Using an Attention-Based Bilinear Joint Learning and Weighted Summation Model



Abstract

Entity linking (EL) is a natural language processing task that links mentions of entities in text to the corresponding entities in a knowledge base. Potential applications include question-answering systems, information extraction, and knowledge base population (KBP). The key to building a high-quality EL system lies in constructing careful representations of words and entities. However, most previous methods assume that all words carry the same weight in their context, which biases the learned meanings of words. In this paper, a novel approach is proposed that analyzes entity linking between words and entities for a knowledge base using attention-based bilinear joint learning. First, the approach designs a novel encoding method to model entities and words in EL. The method learns words and entities jointly and uses an attention mechanism to assign different importance values to words in the context. Second, the approach introduces a weighted summation method to form the textual context and applies the same reasoning to model coherence, thereby improving the ranking features. Finally, the approach employs a pairwise boosting regression tree (PBRT) to rank the candidate entities. During ranking, the approach takes the features constructed with the weighted summation model, together with conventional EL features, as input. Experiments demonstrate that, compared with other state-of-the-art methods, the proposed model learns embeddings efficiently and improves EL performance. Our approach achieves improved results on the CoNLL and TAC 2010 datasets.
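To make the weighted-summation step concrete, the following is a minimal sketch, not the authors' implementation: the function name attention_context, the bilinear matrix A, and the embedding sizes are illustrative assumptions. It shows how a bilinear attention score between each context word and a candidate entity produces per-word importance weights, and how the weighted summation of the word embeddings forms a textual-context representation.

import numpy as np

def attention_context(word_embs, entity_emb, A):
    # word_embs: (n_words, d_w) context word embeddings
    # entity_emb: (d_e,) candidate entity embedding
    # A: (d_w, d_e) learned bilinear interaction matrix (hypothetical)
    # Bilinear compatibility score between each context word and the entity.
    scores = word_embs @ A @ entity_emb              # shape (n_words,)
    # Softmax turns scores into importance weights, so words no longer
    # contribute equally to the context representation.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Weighted summation of word embeddings forms the textual context.
    return weights @ word_embs                       # shape (d_w,)

# Toy usage: 5 context words, 50-dimensional word and entity embeddings.
rng = np.random.default_rng(0)
context_vec = attention_context(rng.normal(size=(5, 50)),
                                rng.normal(size=50),
                                0.1 * rng.normal(size=(50, 50)))
print(context_vec.shape)  # (50,)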

