Asia-Pacific Web and Web-Age Information Management Joint Conference on Web and Big Data

Jointly Modeling Structural and Textual Representation for Knowledge Graph Completion in Zero-Shot Scenario

Abstract

Knowledge graph completion (KGC) aims at predicting missing information for knowledge graphs. Most methods rely on the structural information of entities inside the knowledge graph (In-KG), so they cannot handle KGC in the zero-shot scenario, which involves Out-of-KG entities that are new to the existing knowledge graph and carry only textual information. Although some methods represent the KG with textual information, the correlations they build between In-KG entities and Out-of-KG entities remain weak. In this paper, we propose a joint model that integrates structural and textual information to capture effective correlations between In-KG entities and Out-of-KG entities. Specifically, we construct a new structural feature space and build a combined structural representation for each entity from its most similar base entities. Meanwhile, we use a bidirectional gated recurrent unit (BiGRU) network to build textual representations for entities from their descriptions. Extensive experiments show that our models extend well to new entities and outperform state-of-the-art methods on entity prediction and relation prediction.
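The abstract describes two representation paths: a textual representation produced by a bidirectional GRU over an entity's description, and a combined structural representation assembled from the structural embeddings of the entity's most similar In-KG base entities. The Python/PyTorch sketch below illustrates how such a pipeline could be wired together; the module names, dimensions, mean pooling, and the cosine-similarity weighting over the top-k base entities are illustrative assumptions, not the authors' exact formulation.

    # Minimal sketch, assuming a BiGRU description encoder and a top-k
    # similarity-weighted mix of base-entity structural embeddings.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DescriptionEncoder(nn.Module):
        """Bidirectional GRU over description tokens -> one textual vector per entity."""
        def __init__(self, vocab_size, embed_dim=100, hidden_dim=100):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.bigru = nn.GRU(embed_dim, hidden_dim,
                                batch_first=True, bidirectional=True)

        def forward(self, token_ids):                  # token_ids: (batch, seq_len)
            h, _ = self.bigru(self.embed(token_ids))   # (batch, seq_len, 2*hidden_dim)
            return h.mean(dim=1)                       # mean-pool over the sequence

    def combine_structural(text_vec, base_text_vecs, base_struct_embs, k=5):
        """Give an Out-of-KG entity a structural vector as a similarity-weighted
        mix of the structural embeddings of its k most textually similar
        In-KG base entities (an assumed combination rule)."""
        sims = F.cosine_similarity(text_vec.unsqueeze(0), base_text_vecs, dim=1)
        top_sim, top_idx = sims.topk(k)
        weights = torch.softmax(top_sim, dim=0)        # normalize the top-k similarities
        return weights @ base_struct_embs[top_idx]     # (struct_dim,)

    # Toy usage with random data
    encoder = DescriptionEncoder(vocab_size=1000)
    desc = torch.randint(0, 1000, (1, 20))             # one description of 20 tokens
    text_vec = encoder(desc)[0]                        # (200,)
    base_text_vecs = torch.randn(50, 200)              # textual vectors of 50 base entities
    base_struct_embs = torch.randn(50, 64)             # their structural embeddings
    struct_vec = combine_structural(text_vec, base_text_vecs, base_struct_embs)

In a sketch like this, the resulting structural and textual vectors would then be scored jointly by the KGC model, so that an Out-of-KG entity known only from its description still receives a usable position in the structural feature space.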