Expert Systems with Applications

KGEL: A novel end-to-end embedding learning framework for knowledge graph completion


Abstract

Knowledge graphs (KGs) have recently become increasingly popular due to a broad range of essential applications in various downstream tasks, including intelligent search, personalized recommendation, intelligent financial data analytics, etc. During the automated construction of a KG, knowledge facts from multiple knowledge sources are automatically extracted in the form of triples, and these observed triples are used to derive new unobserved triples for KG completion (also known as link prediction). State-of-the-art link prediction methods are primarily KG embedding models, among which tensor factorization models have recently drawn much attention due to their scalability and expressive feature embeddings, and hence perform well for link prediction. However, these embedding models consider each KG triple individually and fail to capture the useful information present in the neighborhood of a node. To this end, we propose a novel end-to-end KG embedding learning framework that consists of an encoder based on a dual weighted graph convolutional network and a decoder based on a novel fully expressive tensor factorization model. The proposed encoder extends the weighted graph convolutional network to generate two rich, high-quality embedding vectors for each node by aggregating information from its neighboring nodes. The proposed decoder uses a flexible and powerful tensor representation in the form of the Tensor Train decomposition that takes advantage of the two representations of each node in its embedding space to accurately model the KG triples. We also derive a bound on the embedding size required for full expressivity and show that our proposed tensor factorization model is fully expressive. Additionally, we show the relationship of our tensor factorization model to previous tensor factorization models. The experimental results show the effectiveness of the proposed framework, which consistently achieves performance gains over several previous models on recent standard link prediction datasets.
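
To make the encoder-decoder idea in the abstract concrete, below is a minimal PyTorch sketch, not the authors' implementation: a single weighted graph-convolution layer aggregates neighborhood information and emits two embeddings per entity (a head-role and a tail-role vector), and a Tensor-Train-style trilinear decoder scores a triple as score(h, r, t) = g1(h)ᵀ G(r) g3(t), where G(r) is a relation-specific core slice. All class names, dimensions, and the dense adjacency input are illustrative assumptions.

```python
# Hedged sketch of the encoder-decoder idea described in the abstract.
# Names (WGCNEncoder, TTDecoder) and sizes are assumptions for illustration.
import torch
import torch.nn as nn

class WGCNEncoder(nn.Module):
    """One weighted graph-convolution layer that aggregates neighbor
    information and emits two embeddings (head-role, tail-role) per node."""
    def __init__(self, num_entities, dim):
        super().__init__()
        self.base = nn.Embedding(num_entities, dim)
        self.w_head = nn.Linear(dim, dim)
        self.w_tail = nn.Linear(dim, dim)

    def forward(self, adj):
        # adj: dense (num_entities x num_entities) weighted adjacency matrix
        x = self.base.weight
        agg = adj @ x                      # weighted neighborhood aggregation
        return torch.tanh(self.w_head(agg)), torch.tanh(self.w_tail(agg))

class TTDecoder(nn.Module):
    """Tensor-Train-style trilinear scoring: score(h, r, t) = e_h^T G_r e_t,
    with a relation-specific core slice G_r."""
    def __init__(self, num_relations, dim):
        super().__init__()
        self.core = nn.Parameter(torch.randn(num_relations, dim, dim) * 0.01)

    def forward(self, e_head, e_tail, heads, rels, tails):
        h = e_head[heads]                  # (batch, dim) head-role embeddings
        t = e_tail[tails]                  # (batch, dim) tail-role embeddings
        G = self.core[rels]                # (batch, dim, dim) relation cores
        return torch.einsum('bi,bij,bj->b', h, G, t)

# Usage on a toy graph (purely illustrative sizes)
num_entities, num_relations, dim = 5, 2, 8
enc, dec = WGCNEncoder(num_entities, dim), TTDecoder(num_relations, dim)
adj = torch.rand(num_entities, num_entities)
e_head, e_tail = enc(adj)
scores = dec(e_head, e_tail,
             torch.tensor([0, 1]), torch.tensor([0, 1]), torch.tensor([2, 3]))
print(scores.shape)  # torch.Size([2])
```

In this reading, the "two representations of each node" correspond to the two factor matrices of a Tensor Train decomposition of the entity-relation-entity tensor, one indexing head positions and one indexing tail positions.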
