Asia-Pacific Web (APWeb) and Web-Age Information Management (WAIM) Joint Conference on Web and Big Data

Utilizing BERT Pretrained Models with Various Fine-Tune Methods for Subjectivity Detection


Abstract

As an essential antecedent task of sentiment analysis, subjectivity detection refers to classifying sentences as either subjective ones containing opinions, or objective, neutral ones without bias. In situations where impartial language is required, such as Wikipedia, subjectivity detection can play an important part. Recently, pretrained language models have proven effective at learning representations, substantially boosting performance across several NLP tasks. As a state-of-the-art pretrained model, BERT is trained on large unlabeled corpora with masked word prediction and next sentence prediction tasks. In this paper, we explore utilizing BERT pretrained models with several combinations of fine-tuning methods, with the aim of enhancing performance on the subjectivity detection task. Our experimental results reveal that the optimal combinations of fine-tuning and multi-task learning surpass the state of the art on subjectivity detection and related tasks.
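The paper itself provides no code on this page; as a rough illustration of the general setup the abstract describes (fine-tuning a pretrained BERT encoder as a binary sentence classifier for subjective vs. objective sentences), the sketch below uses the HuggingFace Transformers library. The checkpoint name `bert-base-uncased`, the learning rate, and the toy sentences are assumptions for illustration only and do not reflect the paper's actual configuration or its multi-task learning variants.

```python
# Minimal sketch: fine-tuning a BERT encoder for binary subjectivity detection.
# The checkpoint, hyperparameters, and toy data are illustrative assumptions,
# not the paper's reported setup.
import torch
from torch.optim import AdamW
from transformers import BertTokenizer, BertForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # assumed checkpoint

tokenizer = BertTokenizer.from_pretrained(MODEL_NAME)
model = BertForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy training pairs: 1 = subjective (opinionated), 0 = objective (neutral).
sentences = [
    "The film is a stunning, unforgettable masterpiece.",   # subjective
    "The film was released in theaters in June 2019.",      # objective
]
labels = torch.tensor([1, 0])

batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

optimizer = AdamW(model.parameters(), lr=2e-5)  # a typical BERT fine-tuning rate
model.train()
for _ in range(3):  # a few passes over the toy batch
    optimizer.zero_grad()
    outputs = model(**batch, labels=labels)  # cross-entropy loss over 2 classes
    outputs.loss.backward()
    optimizer.step()

# Inference: classify a new sentence as subjective or objective.
model.eval()
with torch.no_grad():
    test = tokenizer("I think the interface is clumsy.", return_tensors="pt")
    pred = model(**test).logits.argmax(dim=-1).item()
print("subjective" if pred == 1 else "objective")
```

The paper's contribution lies in comparing combinations of fine-tuning strategies and multi-task learning on top of such a base recipe, rather than in the plain classification head shown here.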
