JUST at SemEval-2020 Task 11: Detecting Propaganda Techniques using BERT Pretrained Model



Abstract

This paper presents the JUST team submission to SemEval-2020 Task 11: Detection of Propaganda Techniques in News Articles. Of the two subtasks in this competition, we participated in the Technique Classification (TC) subtask, which aims to identify the propaganda technique used in a given propaganda fragment. We implemented and evaluated various models for detecting propaganda. Our proposed model is based on the uncased BERT pre-trained language model, which has achieved state-of-the-art performance on multiple NLP benchmarks. It scores an F1 of 0.55307, outperforming the baseline model provided by the organizers (F1 of 0.2519) and falling 0.07 short of the best-performing team. Compared to the other participating systems, our submission ranked 15th out of 31 participants.
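As a rough illustration of the approach the abstract describes, the sketch below fine-tunes uncased BERT as a sequence classifier over the 14 propaganda technique labels defined for the TC subtask, using the HuggingFace transformers library. This is not the authors' exact pipeline: the hyperparameters, the toy fragments, and the label indices are illustrative assumptions.

```python
# Minimal sketch: fine-tuning bert-base-uncased for technique classification.
# Hyperparameters and the toy training data below are assumptions, not the
# submission's actual configuration.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import BertTokenizer, BertForSequenceClassification

NUM_TECHNIQUES = 14  # propaganda technique classes in the TC subtask

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=NUM_TECHNIQUES
)

class FragmentDataset(Dataset):
    """Wraps propaganda fragments and their technique label indices."""
    def __init__(self, fragments, labels):
        self.enc = tokenizer(fragments, truncation=True, padding=True,
                             max_length=128, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

# Toy fragments standing in for the task's training data (assumption).
fragments = ["A disgraceful, shameful betrayal of the people!",
             "Experts agree there is simply no other option."]
labels = [0, 1]  # indices into the 14 technique classes

loader = DataLoader(FragmentDataset(fragments, labels),
                    batch_size=2, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):
    for batch in loader:
        optimizer.zero_grad()
        loss = model(**batch).loss  # cross-entropy over the technique classes
        loss.backward()
        optimizer.step()
```

At inference time, the predicted technique for a fragment would be the argmax over the classifier's 14 output logits.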
