International Workshop on Semantic Evaluation; Annual Meeting of the Association for Computational Linguistics

NLM_NIH at SemEval-2017 Task 3: from Question Entailment to Question Similarity for Community Question Answering


Abstract

This paper describes our participation in SemEval-2017 Task 3 on Community Question Answering (cQA). The Question Similarity subtask (B) aims to rank a set of related questions retrieved by a search engine according to their similarity to the original question. We adapted our feature-based system for Recognizing Question Entailment (RQE) to the question similarity task. Tested on cQA-B-2016 test data, our RQE system outperformed the best system of the 2016 challenge in all measures, with 77.47 MAP and 80.57 Accuracy. On cQA-B-2017 test data, the performance of all systems dropped by around 30 points. Our primary system obtained 44.62 MAP, 67.27 Accuracy and 47.25 F1 score. The cQA-B-2017 best system achieved 47.22 MAP and 42.37 F1 score. Our system ranked sixth in MAP and third in F1 among the 13 participating teams.
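For readers unfamiliar with the ranking metric quoted above, the sketch below shows how Mean Average Precision (MAP) can be computed for this setup: each original question has a system-produced ranking of related questions and a gold set of relevant ones. This is an illustrative sketch only, not the official SemEval-2017 scorer (which applies additional conventions such as a rank cutoff); the function names and data layout are assumptions.

```python
# Minimal MAP sketch for the cQA question-similarity ranking task.
# Not the official SemEval scorer; names and data layout are illustrative.

def average_precision(ranked_ids, relevant_ids):
    """AP for one original question: ranked_ids is the system's ranking of
    related-question IDs, relevant_ids the gold-relevant subset."""
    hits, precision_sum = 0, 0.0
    for rank, qid in enumerate(ranked_ids, start=1):
        if qid in relevant_ids:
            hits += 1
            precision_sum += hits / rank  # precision at each relevant hit
    return precision_sum / len(relevant_ids) if relevant_ids else 0.0

def mean_average_precision(rankings, gold):
    """MAP over all original questions; `rankings` and `gold` are dicts
    keyed by original-question ID."""
    aps = [average_precision(rankings[q], gold.get(q, set())) for q in rankings]
    return 100.0 * sum(aps) / len(aps)  # reported on a 0-100 scale, e.g. 77.47

if __name__ == "__main__":
    rankings = {"Q1": ["R3", "R1", "R2"]}   # hypothetical system output
    gold = {"Q1": {"R1", "R3"}}             # hypothetical gold relevance
    print(f"MAP = {mean_average_precision(rankings, gold):.2f}")
```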
