Workshop on Domain Adaptation for NLP

MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering Models

Abstract

Retrieval question answering (ReQA) is the task of retrieving a sentence-level answer to a question from an open corpus (Ahmad et al., 2019). This dataset paper presents MultiReQA, a new multi-domain ReQA evaluation suite composed of eight retrieval QA tasks drawn from publicly available QA datasets. We explore systematic retrieval-based evaluation and transfer learning across domains over these datasets using a number of strong baselines, including two supervised neural models based on fine-tuning BERT and USE-QA respectively, as well as a surprisingly effective information retrieval baseline, BM25. Five of these tasks contain both training and test data, while three contain test data only. Performing cross-training on the five tasks with training data shows that while a general model covering all domains is achievable, the best performance is often obtained by training exclusively on in-domain data.
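The abstract names BM25 as a surprisingly effective information retrieval baseline for this sentence-level retrieval setting. As a point of reference, below is a minimal Python sketch of Okapi BM25 ranking candidate answer sentences against a question; the tokenizer, the k1 and b parameters, and the toy corpus are illustrative assumptions, not details taken from the paper.

```python
import math
from collections import Counter


def tokenize(text):
    # Naive lowercase/whitespace tokenizer; the paper does not specify one.
    return text.lower().split()


class BM25:
    """Okapi BM25 scoring over a corpus of candidate answer sentences."""

    def __init__(self, sentences, k1=1.5, b=0.75):
        self.sentences = sentences
        self.docs = [tokenize(s) for s in sentences]
        self.k1, self.b = k1, b
        self.n = len(self.docs)
        self.avgdl = sum(len(d) for d in self.docs) / self.n
        # Document frequency: number of sentences containing each term.
        self.df = Counter(t for d in self.docs for t in set(d))

    def idf(self, term):
        df = self.df.get(term, 0)
        return math.log((self.n - df + 0.5) / (df + 0.5) + 1.0)

    def score(self, question, doc):
        tf = Counter(doc)
        norm = self.k1 * (1 - self.b + self.b * len(doc) / self.avgdl)
        return sum(
            self.idf(t) * tf[t] * (self.k1 + 1) / (tf[t] + norm)
            for t in tokenize(question)
            if t in tf
        )

    def top(self, question, k=1):
        # Rank every candidate sentence against the question.
        scored = [(self.score(question, d), s)
                  for d, s in zip(self.docs, self.sentences)]
        return [s for _, s in sorted(scored, reverse=True)[:k]]


# Toy candidate pool (illustrative only, not MultiReQA data).
corpus = [
    "BERT is a transformer-based language model.",
    "BM25 is a classical term-matching retrieval function.",
    "MultiReQA is composed of eight retrieval QA tasks.",
]
print(BM25(corpus).top("How many tasks does MultiReQA contain?"))
```

Each candidate sentence is scored independently against the question and the highest-scoring sentences are returned, mirroring the retrieve-from-open-corpus setup the abstract describes.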
