Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis

UBC-NLP at IEST 2018: Learning Implicit Emotion With an Ensemble of Language Models

Abstract

We describe the UBC-NLP contribution to IEST-2018, focused on learning implicit emotion in Twitter data. Among the 30 participating teams, our system ranked 4th (with a 69.3% F-score). Post-competition, we were able to score slightly higher than the 3rd-ranking system (reaching 70.7%). Our system is trained on top of a pre-trained language model (LM), fine-tuned on the data provided by the task organizers. Our best results are obtained by averaging an ensemble of language models. We also offer an analysis of system performance and of the impact of training data size on the task. For example, we show that training our best model for only one epoch with < 40% of the data yields better performance than the baseline reported by Klinger et al. (2018) for the task.
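The abstract reports that the best results come from averaging an ensemble of fine-tuned language models. Below is a minimal sketch of what probability-level ensemble averaging looks like, assuming each model exposes per-class probabilities over the six IEST emotion labels; the `ensemble_predict` helper and the toy data are illustrative, not taken from the paper.

```python
# A hedged sketch of ensemble averaging over per-model class probabilities.
# Assumption: each fine-tuned LM classifier yields an (n_examples, n_classes)
# probability matrix; the paper's actual interface is not specified.
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

def ensemble_predict(prob_matrices):
    """Average class-probability matrices from several models and return
    the index of the highest average probability for each example."""
    avg = np.mean(np.stack(prob_matrices, axis=0), axis=0)
    return avg.argmax(axis=1)

# Toy usage: three "models" scoring two tweets over the six emotion classes.
rng = np.random.default_rng(0)
fake_probs = []
for _ in range(3):
    logits = rng.normal(size=(2, len(EMOTIONS)))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    fake_probs.append(probs)

for idx in ensemble_predict(fake_probs):
    print(EMOTIONS[idx])
```

Averaging probabilities rather than taking a majority vote lets each model's confidence weigh into the final decision, which is one common way an LM ensemble of this kind is combined.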