International Workshop on Semantic Evaluation (SemEval-2020)

CUHK at SemEval-2020 Task 4: CommonSense Explanation, Reasoning and Prediction with Multi-task Learning



Abstract

This paper describes our system submitted to Task 4 of SemEval 2020: Commonsense Validation and Explanation (ComVE), which consists of three sub-tasks. The challenge is to validate whether a system can distinguish natural language statements that make sense from those that do not, and also to generate a reasonable explanation. Based on the BERT architecture with a multi-task setting, we propose an effective and interpretable "Explain, Reason and Predict" (ERP) system to solve the three commonsense sub-tasks: (a) Validation, (b) Reasoning, and (c) Explanation, following the order of the competition. Inspired by cognitive studies of common sense, our system first generates a reason, or understanding, of the sentences and then chooses which statement makes sense; this is achieved by multi-task learning. Our experiments validate this assumption and show that the multi-task setting boosts performance. During the post-evaluation, our system reached 92.9% accuracy on subtask A (rank 11), 89.7% accuracy on subtask B (rank 8), and a BLEU score of 12.9 on subtask C (rank 9).
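The multi-task idea in the abstract can be sketched in a few lines: a shared encoder feeds both a token-level explanation head and a sentence-level classifier that picks which statement makes sense, and the two losses are summed so both objectives train the shared parameters. This is a hypothetical minimal illustration in PyTorch (a tiny Transformer encoder standing in for BERT; `ERPSketch`, the dimensions, and the toy batch are all assumptions, not the authors' code).

```python
import torch
import torch.nn as nn

VOCAB, D_MODEL, MAX_LEN = 1000, 64, 16

class ERPSketch(nn.Module):
    """Shared encoder with two task heads, mimicking the ERP multi-task setup."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.explain_head = nn.Linear(D_MODEL, VOCAB)  # token-level explanation generation
        self.choice_head = nn.Linear(D_MODEL, 2)       # which of two statements makes sense

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))             # (batch, seq, d_model)
        explain_logits = self.explain_head(h)            # per-token vocabulary logits
        choice_logits = self.choice_head(h.mean(dim=1))  # mean-pooled sentence representation
        return explain_logits, choice_logits

model = ERPSketch()
tokens = torch.randint(0, VOCAB, (2, MAX_LEN))           # toy batch of two examples
explain_targets = torch.randint(0, VOCAB, (2, MAX_LEN))  # toy explanation token targets
choice_targets = torch.tensor([0, 1])                    # toy sense/nonsense labels

explain_logits, choice_logits = model(tokens)
# Joint objective: explanation loss + validation loss, backpropagated together
loss = (nn.functional.cross_entropy(explain_logits.reshape(-1, VOCAB),
                                    explain_targets.reshape(-1))
        + nn.functional.cross_entropy(choice_logits, choice_targets))
loss.backward()  # gradients from both tasks update the shared encoder
```

The point of the summed loss is that the explanation objective shapes the same encoder representations the classifier reads, which is how "explain first, then predict" is realized as multi-task learning.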
