Canadian Conference on Artificial Intelligence

Attending Knowledge Facts with BERT-like Models in Question-Answering: Disappointing Results and Some Explanations



Abstract

Since the first appearance of BERT, pretrained BERT-inspired models (XLNet, RoBERTa, ...) have delivered state-of-the-art results on a large number of Natural Language Processing tasks. This includes question answering, where previous models performed relatively poorly, particularly on datasets with a limited amount of data. In this paper we perform experiments with BERT on two such datasets, OpenBookQA and ARC. Our aim is to understand why, in our experiments, using BERT sentence representations inside an attention mechanism over a set of facts tends to give poor results. We demonstrate that in some cases the sentence representations produced by BERT are semantically limited, and that BERT often answers the questions in a meaningless way.
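The setting the abstract describes can be sketched as dot-product attention over fact embeddings: each knowledge fact is scored against the question, the scores are softmax-normalized, and the facts are combined into a weighted context vector. The sketch below uses random stand-in vectors; in the paper's setting the question and fact vectors would be BERT sentence representations (e.g. [CLS] embeddings), and the exact scoring function used by the authors is not specified here.

```python
import numpy as np

def attend_facts(question_vec, fact_vecs):
    """Dot-product attention over a set of fact embeddings.

    Scores each fact by its dot product with the question vector,
    softmax-normalizes the scores, and returns the attention weights
    together with the weighted sum (context vector) of the facts.
    """
    scores = fact_vecs @ question_vec         # shape: (n_facts,)
    scores = scores - scores.max()            # shift for numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum()         # softmax over facts
    context = weights @ fact_vecs             # weighted sum, shape: (dim,)
    return weights, context

# Stand-in embeddings (hypothetical); real inputs would be BERT
# sentence representations of the question and each fact.
rng = np.random.default_rng(0)
question = rng.standard_normal(768)
facts = rng.standard_normal((5, 768))
weights, context = attend_facts(question, facts)
```

If BERT's sentence vectors carry little usable semantic signal, the attention weights computed this way are close to arbitrary, which is one way the poor results reported above can arise.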


