COMMONSENSEQA: A Question Answering Challenge Targeting Commonsense Knowledge

Abstract

When answering a question, people often draw upon their rich world knowledge in addition to the particular context. Recent work has focused primarily on answering questions given some relevant document or context, and required very little general background. To investigate question answering with prior knowledge, we present COMMONSENSEQA: a challenging new dataset for commonsense question answering. To capture common sense beyond associations, we extract from CONCEPTNET (Speer et al., 2017) multiple target concepts that have the same semantic relation to a single source concept. Crowd-workers are asked to author multiple-choice questions that mention the source concept and discriminate in turn between each of the target concepts. This encourages workers to create questions with complex semantics that often require prior knowledge. We create 12,247 questions through this procedure and demonstrate the difficulty of our task with a large number of strong baselines. Our best baseline is based on BERT-large (Devlin et al., 2018) and obtains 56% accuracy, well below human performance, which is 89%.
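The extraction step described above can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' actual pipeline): given ConceptNet-style triples, it groups target concepts that share the same semantic relation to a single source concept, yielding candidate answer sets for crowd-workers. The triples and the threshold `k` are illustrative.

```python
from collections import defaultdict

# Toy ConceptNet-style triples: (source concept, relation, target concept).
# These examples are illustrative, not taken from the actual dataset.
triples = [
    ("river", "AtLocation", "waterfall"),
    ("river", "AtLocation", "bridge"),
    ("river", "AtLocation", "valley"),
    ("river", "UsedFor", "fishing"),
]

def candidate_sets(triples, k=3):
    """Group targets by (source, relation); keep groups with at least k targets,
    so each group can seed one multiple-choice question."""
    groups = defaultdict(list)
    for source, relation, target in triples:
        groups[(source, relation)].append(target)
    return {key: targets for key, targets in groups.items() if len(targets) >= k}

print(candidate_sets(triples))
# {('river', 'AtLocation'): ['waterfall', 'bridge', 'valley']}
```

Each surviving group gives a source concept plus several related targets; a worker would then write one question per target that distinguishes it from the others.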
