Computer Science and Application
Research on Entity Classification Based on Entity Descriptions and Relational Graph Convolutional Networks

Abstract

With the advent of the big-data era and the rapid development of artificial intelligence, knowledge graphs have come to play an important role in vertical search, intelligent question answering, and other fields. However, even the world's largest knowledge graphs remain incomplete, so knowledge reasoning has long been one of the research hotspots of the field. This paper proposes DR-GCN, a model that integrates entity descriptions with a relational graph convolutional network (R-GCN), and applies it to a standard knowledge-reasoning task: entity classification, i.e., the recovery of missing entity attributes. R-GCN is a class of graph convolutional network (GCN) developed specifically for the highly multi-relational data characteristic of real-world knowledge graphs. The DR-GCN model studied here makes full use of relation types, relation directions, entity self-loops, entity descriptions, and other information in the knowledge graph for entity classification. The model is evaluated thoroughly and compared against established baselines. Experiments show that DR-GCN improves on the existing baselines: on the AIFB dataset, its accuracy is 0.24 percentage points above the 96.19% of G-GAT, the most accurate baseline on that dataset, and on the BGS dataset it is 0.35 percentage points above the 87.24% of RDF2Vec, the most accurate baseline there, confirming that the improved model is more effective.
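For context, the per-relation message passing that the R-GCN component relies on can be sketched as below. This is a minimal illustrative implementation of the standard R-GCN propagation rule (one weight matrix per relation, degree normalization, a self-loop term), not the paper's actual code; all names and shapes are assumptions, and relation direction is modeled by treating inverse edges as extra relation types.

```python
import numpy as np

def rgcn_layer(h, edges, num_relations, w_rel, w_self):
    """One R-GCN layer: each node aggregates neighbor features per
    relation type, normalized by the per-relation in-degree c_{i,r},
    adds a self-loop term W_0 h_i, and applies a ReLU.

    h:      (num_nodes, d_in) node features
    edges:  list of (src, dst, rel) triples, rel in [0, num_relations);
            direction can be captured by adding inverse-relation edges
    w_rel:  (num_relations, d_in, d_out), one weight matrix per relation
    w_self: (d_in, d_out) self-loop weight matrix
    """
    num_nodes = h.shape[0]
    out = h @ w_self  # self-loop term W_0 h_i for every node

    # per-(node, relation) in-degree for the normalization constant c_{i,r}
    c = np.zeros((num_nodes, num_relations))
    for _, dst, rel in edges:
        c[dst, rel] += 1

    # accumulate normalized per-relation messages (1/c_{i,r}) W_r h_j
    for src, dst, rel in edges:
        out[dst] += (h[src] @ w_rel[rel]) / c[dst, rel]

    return np.maximum(out, 0.0)  # ReLU activation
```

In DR-GCN as the abstract describes it, the initial node features would additionally encode each entity's textual description rather than being purely structural.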
