
Jeff Da at COIN - Shared Task: BIG MOOD: Relating Transformers to Explicit Commonsense Knowledge

Workshop on Commonsense Inference in Natural Language Processing (COIN)

Abstract

We introduce a simple yet effective method of integrating contextual embeddings with commonsense graph embeddings, dubbed BERT Infused Graphs: Matching Over Other emBeDdings (BIG MOOD). First, we introduce a preprocessing method to improve the speed of querying knowledge bases. Then, we develop a method of creating knowledge embeddings from each knowledge base. We introduce a method of aligning tokens between two misaligned tokenization methods. Finally, we contribute a method of contextualizing BERT after combining it with knowledge base embeddings. We also show BERT's tendency to correct lower-accuracy question types. Our model achieves higher accuracy than BERT, places fifth on the official leaderboard of the shared task, and achieves the highest score without any additional language model pretraining.
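
The abstract leaves the token-alignment step unspecified. As a rough illustration only, the sketch below (Python; the function name and inputs are hypothetical, not from the paper) maps word-level tokens, such as a knowledge base might use, onto BERT-style WordPiece subtokens, where continuation pieces carry a leading '##':

    # Minimal sketch: align word-level tokens to WordPiece subtokens.
    # Assumes BERT's uncased convention of '##' continuation markers;
    # the paper's actual alignment procedure may differ.
    def align_wordpieces(words, wordpieces):
        """Map each word index to the span of WordPiece indices it covers."""
        alignment = {}  # word index -> list of wordpiece indices
        wp_idx = 0
        for w_idx, word in enumerate(words):
            span = []
            consumed = ""
            while wp_idx < len(wordpieces) and len(consumed) < len(word):
                piece = wordpieces[wp_idx]
                # Strip the continuation marker before counting characters.
                consumed += piece[2:] if piece.startswith("##") else piece
                span.append(wp_idx)
                wp_idx += 1
            alignment[w_idx] = span
        return alignment

    words = ["commonsense", "knowledge"]
    wordpieces = ["commons", "##ense", "knowledge"]
    print(align_wordpieces(words, wordpieces))
    # {0: [0, 1], 1: [2]}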
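
Similarly, one common way to combine contextual states with external embeddings, and a plausible but unconfirmed reading of the paper's fusion step, is to concatenate the per-token vectors and learn a projection back to BERT's hidden size. A minimal PyTorch sketch, assuming 768-dimensional BERT states and 300-dimensional knowledge embeddings (both dimensions are assumptions, not figures from the paper):

    import torch
    import torch.nn as nn

    class KnowledgeFusion(nn.Module):
        """Hypothetical fusion layer: concatenate BERT states with aligned
        knowledge embeddings, then project back to BERT's hidden size."""
        def __init__(self, bert_dim=768, kb_dim=300):
            super().__init__()
            self.proj = nn.Linear(bert_dim + kb_dim, bert_dim)

        def forward(self, bert_states, kb_embeds):
            # bert_states: (batch, seq_len, bert_dim) from BERT's last layer
            # kb_embeds:   (batch, seq_len, kb_dim), aligned to the same tokens
            fused = torch.cat([bert_states, kb_embeds], dim=-1)
            return torch.tanh(self.proj(fused))

    # Usage: output keeps BERT's shape, so it can feed further layers.
    fusion = KnowledgeFusion()
    h = fusion(torch.randn(2, 16, 768), torch.randn(2, 16, 300))  # (2, 16, 768)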
