Venue: Social Media Mining for Health Applications Workshop & Shared Task; International Conference on Computational Linguistics

KFU NLP Team at SMM4H 2020 Tasks: Cross-lingual Transfer Learning with Pretrained Language Models for Drug Reactions



Abstract

This paper describes neural models developed for the Social Media Mining for Health (SMM4H) 2020 shared tasks; specifically, we participated in two tasks. We investigate the use of the language representation model BERT, pretrained on a large-scale corpus of 5 million health-related user reviews in English and Russian. Our ensemble of neural networks for extraction and normalization of adverse drug reactions ranked first among 7 teams at SMM4H 2020 Task 3 and obtained a relaxed F1 of 46%. Our BERT-based multilingual model for classification of English and Russian tweets that report adverse reactions ranked second among 16 and 7 teams on the first two subtasks of SMM4H 2020 Task 2, obtaining a relaxed F1 of 58% on English tweets and 51% on Russian tweets.


