Siamese Pre-Trained Transformer Encoder for Knowledge Base Completion

Abstract

In this paper, we aim to leverage a Siamese textual encoder to tackle the knowledge base completion problem both efficiently and effectively. Traditional graph embedding-based methods learn embeddings directly from a knowledge base's structure, but are inherently vulnerable to the graph's sparsity or incompleteness. In contrast, previous textual encoding-based methods capture such structured knowledge from a semantic perspective and employ a deep neural textual encoder to model graph triples in semantic space, but they fail to balance contextual features against model efficiency. Therefore, in this paper we propose a Siamese textual encoder that operates on each graph triple from the knowledge base: the contextual features between a head/tail entity and a relation are captured to produce relation-aware entity embeddings, while a Siamese structure is adopted to avoid combinatorial explosion during inference. In the experiments, the proposed method reaches state-of-the-art or comparable performance on several link prediction datasets. Further analyses demonstrate that the proposed method is much more efficient than its baseline while yielding similar evaluation results.
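The abstract describes a Siamese (weight-sharing) pre-trained transformer used as two towers: one encodes the head entity together with the relation, the other encodes a candidate tail entity, so all candidate tails can be embedded once and reused at inference time. The sketch below is a minimal, hypothetical illustration of that scoring idea, not the authors' released code; the class name `SiameseKBCScorer`, the choice of `bert-base-uncased`, and the use of the [CLS] vector and a dot-product score are illustrative assumptions.

```python
# Hypothetical sketch of a Siamese pre-trained transformer scorer for
# knowledge base completion (link prediction). Assumes torch + transformers.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class SiameseKBCScorer(nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        # One shared (Siamese) pre-trained transformer encoder for both towers.
        self.encoder = AutoModel.from_pretrained(model_name)

    def encode(self, texts):
        batch = self.tokenizer(texts, padding=True, truncation=True,
                               return_tensors="pt")
        out = self.encoder(**batch)
        # Use the [CLS] vector as the sequence embedding (an assumption).
        return out.last_hidden_state[:, 0]

    def forward(self, head_relation_texts, tail_texts):
        # Tower 1: head entity text concatenated with the relation text, so the
        # entity representation is conditioned on (aware of) the relation.
        query = self.encode(head_relation_texts)
        # Tower 2: candidate tail entities, encoded independently; with shared
        # weights these embeddings can be precomputed once, avoiding the
        # combinatorial explosion of re-encoding every (triple, candidate) pair.
        cand = self.encode(tail_texts)
        return query @ cand.T  # pairwise plausibility scores


if __name__ == "__main__":
    scorer = SiameseKBCScorer()
    scores = scorer(
        ["Barack Obama [SEP] place of birth"],
        ["Honolulu", "Chicago", "New York City"],
    )
    print(scores)  # higher score = more plausible tail for the query
```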
