IEEE International Conference on Data Science in Cyberspace

Incorporating Entity Type Information into Knowledge Representation Learning


Abstract

Knowledge Representation Learning (KRL), also known as Knowledge Embedding, is a widely used method for representing the complex relations in knowledge graphs. The low-dimensional representations learned by KRL models benefit many downstream tasks, such as recommender systems and question answering. Recently, many KRL models have been trained with a square loss or a cross-entropy loss under the Closed World Assumption (CWA). Although the CWA simplifies training, it conflicts with the link prediction task that KRL is meant to support: triplets missing from the training data are treated as false, even though link prediction aims to recover exactly such triplets. To overcome this drawback, we introduce a new method, the Type-based Prior Possibility Assumption (TPPA). During KRL training, TPPA assigns type-based prior possibilities to missing triplets instead of zeros, weakening the harmful influence of the CWA. We compare TPPA against the CWA baseline in ConvE and TuckER, two common frameworks for knowledge representation learning. Experimental results on the FB15k-237 dataset show that TPPA-based training outperforms CWA-based training on the link prediction task.
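The abstract's core idea (replacing the CWA's hard-zero labels for unobserved triplets with type-based priors) can be illustrated with a minimal sketch. Everything below is hypothetical scaffolding, not the paper's actual implementation: the toy data, the `smooth` scaling factor, and the estimate of a tail-type prior per relation are all illustrative assumptions.

```python
# Hypothetical sketch of TPPA-style soft targets (names and data are
# illustrative, not from the paper). Under the CWA, every unobserved
# triplet gets label 0; here, unobserved candidates instead receive a
# small type-based prior estimated from the training triplets.
from collections import Counter, defaultdict

# Toy data: entity -> type, and the observed (head, relation, tail) triplets.
entity_type = {"paris": "city", "berlin": "city", "france": "country"}
triplets = [("france", "capital", "paris")]

# Estimate P(tail type | relation) by counting tail types per relation.
rel_type_counts = defaultdict(Counter)
for h, r, t in triplets:
    rel_type_counts[r][entity_type[t]] += 1

def tppa_targets(head, relation, candidates, smooth=0.1):
    """Return training targets for candidate tails: 1.0 for observed
    triplets; a type-based prior (scaled by `smooth`) for unobserved
    candidates, instead of the CWA's hard zero."""
    counts = rel_type_counts[relation]
    total = sum(counts.values())
    observed = set(triplets)
    targets = {}
    for c in candidates:
        if (head, relation, c) in observed:
            targets[c] = 1.0
        else:
            prior = counts[entity_type[c]] / total if total else 0.0
            targets[c] = smooth * prior
    return targets

print(tppa_targets("france", "capital", ["paris", "berlin", "france"]))
```

In this toy run, "berlin" shares the observed tail's type ("city"), so it receives a small nonzero target, while "france" (type "country") still gets zero; a real KRL pipeline such as ConvE or TuckER would feed such soft targets into its cross-entropy loss in place of the 0/1 CWA labels.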
