NLP@ JUST at SemEval-2020 Task 4: Ensemble Technique for BERT and Roberta to Evaluate Commonsense Validation


Abstract

This paper presents the work of the NLP@JUST team on SemEval-2020 Task 4, the Commonsense Validation and Explanation (ComVE) task. The team participated in sub-task A (Validation), which checks whether a given text is against common sense. Several models were trained (i.e., BERT, XLNet, and RoBERTa); however, the main models used are RoBERTa-large and BERT whole-word masking. We combined the outputs of both models into a final prediction using an average ensemble technique, which improved the overall performance. The evaluation shows that the implemented model achieved an accuracy of 93.9%, as published in the post-evaluation results on the leaderboard.
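
The averaging step mentioned in the abstract can be sketched as follows. This is a minimal illustration assuming each fine-tuned model (e.g., RoBERTa-large and BERT whole-word masking) has already produced per-class probabilities for the validation examples; the function name, array shapes, and toy numbers are hypothetical and not taken from the authors' code.

```python
import numpy as np

def average_ensemble(prob_roberta: np.ndarray, prob_bert: np.ndarray) -> np.ndarray:
    """Average two (n_examples, n_classes) probability matrices
    and return the predicted class index for each example."""
    avg_probs = (prob_roberta + prob_bert) / 2.0
    return avg_probs.argmax(axis=1)

# Toy usage: three examples, two classes, with made-up probabilities.
p_roberta = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
p_bert    = np.array([[0.7, 0.3], [0.3, 0.7], [0.1, 0.9]])
print(average_ensemble(p_roberta, p_bert))  # -> [0 1 1]
```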
