Venue: International Joint Conference on Artificial Intelligence

Joint Posterior Revision of NLP Annotations via Ontological Knowledge



Abstract

Several well-established NLP tasks contribute to eliciting the semantics of entities mentioned in natural language text, such as Named Entity Recognition and Classification (NERC) and Entity Linking (EL). However, combining the outcomes of these tasks may result in NLP annotations (such as a NERC organization linked by EL to a person) that are unlikely or contradictory when interpreted in the light of common world knowledge about the entities these annotations refer to. We thus propose a general probabilistic model that explicitly captures the relations between multiple NLP annotations for an entity mention, the ontological entity classes implied by those annotations, and the background ontological knowledge those classes may be consistent with. We use the model to estimate the posterior probability of NLP annotations given their confidences (prior probabilities) and the ontological knowledge, and consequently revise the best annotation choice made by the NLP tools. In a concrete scenario with two state-of-the-art tools for NERC and EL, we experimentally show on three reference datasets that the joint annotation revision performed by the model consistently improves on the original results of the tools.
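The joint revision described in the abstract can be illustrated with a minimal sketch. It assumes the tools' confidences are independent priors and that ontological consistency acts as a multiplicative compatibility factor; the mention, candidate labels, priors, and toy ontology below are invented for illustration and are not the paper's actual data or implementation.

```python
# Hypothetical sketch of joint posterior revision for one entity mention.
# Posterior(nerc, el) ∝ prior(nerc) * prior(el) * consistency(nerc, el).
from itertools import product

# Tool confidences (priors) for an ambiguous mention, e.g. "Jordan".
nerc_priors = {"PERSON": 0.55, "ORGANIZATION": 0.45}
el_priors = {"Michael_Jordan": 0.6, "Jordan_(country)": 0.4}

# Ontological class implied by each EL candidate (toy background knowledge).
el_class = {"Michael_Jordan": "PERSON", "Jordan_(country)": "LOCATION"}

def consistency(nerc_tag, el_candidate):
    """Full weight when the NERC class agrees with the class of the
    linked entity in the background ontology, a small penalty otherwise."""
    return 1.0 if el_class[el_candidate] == nerc_tag else 0.05

# Score every joint annotation and normalize to a posterior distribution.
scores = {
    (n, e): nerc_priors[n] * el_priors[e] * consistency(n, e)
    for n, e in product(nerc_priors, el_priors)
}
total = sum(scores.values())
posterior = {pair: s / total for pair, s in scores.items()}

# Revised best choice: the jointly most probable (NERC, EL) pair.
best = max(posterior, key=posterior.get)
```

Here the consistency factor suppresses incompatible combinations (e.g. an ORGANIZATION tag linked to a person entity), so the revised argmax favors a jointly coherent pair even if one tool's individual top choice disagreed.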
