BMC Bioinformatics

A neural network multi-task learning approach to biomedical named entity recognition



Abstract

Background: Named Entity Recognition (NER) is a key task in biomedical text mining. Accurate NER systems require task-specific, manually annotated datasets, which are expensive to develop and thus limited in size. Since such datasets contain related but different information, an interesting question is whether it might be possible to use them together to improve NER performance. To investigate this, we develop supervised, multi-task, convolutional neural network models and apply them to a large number of varied existing biomedical named-entity datasets. Additionally, we investigate the effect of dataset size on performance in both single- and multi-task settings.

Results: We present a single-task model for NER, a Multi-output multi-task model and a Dependent multi-task model. We apply the three models to 15 biomedical datasets containing multiple named-entity types, including Anatomy, Chemical, Disease, Gene/Protein and Species. Each dataset represents a task. The results from the single-task model and the multi-task models are then compared for evidence of benefits from Multi-task Learning. With the Multi-output multi-task model we observed an average F-score improvement of 0.8% over the single-task model, from an average baseline of 78.4%. Although there was a significant drop in performance on one dataset, performance improved significantly on five datasets, by up to 6.3%. For the Dependent multi-task model we observed an average improvement of 0.4% over the single-task model. There were no significant drops in performance on any dataset, and performance improved significantly on six datasets, by up to 1.1%. The dataset-size experiments found that as dataset size decreased, the multi-output model's performance increased relative to the single-task model's. Using 50, 25 and 10% of the training data resulted in average drops of approximately 3.4, 8 and 16.7% respectively for the single-task model, but only approximately 0.2, 3.0 and 9.8% for the multi-task model.

Conclusions: Our results show that, on average, the multi-task models produced better NER results than the single-task models trained on a single NER dataset. We also found that Multi-task Learning is beneficial for small datasets. Across the various settings the improvements are significant, demonstrating the benefit of Multi-task Learning for this task.
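The core idea behind the Multi-output multi-task model described above is a shared representation trained on all datasets, with a separate output layer per dataset. The following is a minimal illustrative sketch of that parameter-sharing pattern, not the paper's implementation: the dimensions, task names and tag counts are made up, and a single dense layer stands in for the paper's convolutional encoder.

```python
import numpy as np

# Illustrative only: one shared encoder layer feeds a separate
# classification head per dataset (task). During multi-task training,
# every task's gradients would update the shared weights, while each
# head remains task-specific. All sizes below are hypothetical.

rng = np.random.default_rng(0)

EMB_DIM = 8                              # token-embedding size (assumed)
HIDDEN = 16                              # shared-layer width (assumed)
TASKS = {"anatomy": 3, "chemical": 5}    # task -> number of entity tags (made up)

W_shared = rng.normal(size=(EMB_DIM, HIDDEN)) * 0.1
heads = {t: rng.normal(size=(HIDDEN, n)) * 0.1 for t, n in TASKS.items()}

def forward(x, task):
    """Shared representation, then the output head belonging to `task`."""
    h = np.tanh(x @ W_shared)            # shared across all tasks
    logits = h @ heads[task]             # task-specific output layer
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # per-token tag distribution

tokens = rng.normal(size=(4, EMB_DIM))   # 4 toy token embeddings
probs = forward(tokens, "chemical")
print(probs.shape)                       # (4, 5): one tag distribution per token
```

Because `W_shared` is consulted by every task, a small dataset can benefit from representations learned on larger related datasets, which is consistent with the dataset-size results reported in the abstract.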
