International Conference on Artificial Intelligence, Automation and Control Technologies

New Research on Transfer Learning Model of Named Entity Recognition



Abstract

This paper integrates BERT, currently Google's most powerful NLP transfer-learning model, with the traditional state-of-the-art BiLSTM-CRF model to solve the problem of named entity recognition. A bi-directional LSTM can consider an effectively unbounded amount of context on both sides of a word, eliminating the limited-context problem that affects any feed-forward model. Google's model applies a feed-forward architecture, which weakens its performance on this task. We address these issues by proposing a more powerful neural network model named BT-BiLSTM. The new model obtains F1 scores on three Chinese datasets that exceed those of the previous BiLSTM-CRF model, with particularly strong gains in recall. This demonstrates the great value of combining a language model pre-trained on large-scale unlabelled data with named entity recognition, and suggests new ideas for future work.
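The architecture described above (contextual embeddings feeding a BiLSTM encoder whose emission scores are decoded by a CRF layer) can be sketched in PyTorch. This is a minimal illustration under stated assumptions, not the paper's implementation: a plain `nn.Embedding` stands in for the pretrained BERT encoder, the class name `BTBiLSTM` and all dimensions are hypothetical, and the CRF is reduced to a learned transition matrix with Viterbi decoding (training-time CRF loss omitted).

```python
import torch
import torch.nn as nn

class BTBiLSTM(nn.Module):
    """Sketch of a BERT-BiLSTM-CRF style tagger: embeddings -> BiLSTM -> CRF decode."""

    def __init__(self, vocab_size, embed_dim, hidden_dim, num_tags):
        super().__init__()
        # Stand-in for BERT: a plain embedding layer; the real model would
        # produce contextual embeddings from a pretrained transformer.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim // 2,
                              bidirectional=True, batch_first=True)
        self.emit = nn.Linear(hidden_dim, num_tags)
        # CRF transition scores: transitions[i, j] = score of moving tag i -> tag j.
        self.transitions = nn.Parameter(torch.randn(num_tags, num_tags))

    def forward(self, token_ids):
        h, _ = self.bilstm(self.embed(token_ids))
        return self.emit(h)  # per-token emission scores: (batch, seq_len, num_tags)

    def viterbi_decode(self, emissions):
        # emissions: (seq_len, num_tags) for a single sentence.
        seq_len, num_tags = emissions.shape
        score = emissions[0]
        backptrs = []
        for t in range(1, seq_len):
            # Score every (prev_tag -> tag) path; keep the best predecessor per tag.
            total = score.unsqueeze(1) + self.transitions + emissions[t].unsqueeze(0)
            score, best_prev = total.max(dim=0)
            backptrs.append(best_prev)
        best_tag = int(score.argmax())
        path = [best_tag]
        for bp in reversed(backptrs):
            best_tag = int(bp[best_tag])
            path.append(best_tag)
        return list(reversed(path))

# Usage: tag a random 7-token sentence with 5 possible tags.
model = BTBiLSTM(vocab_size=100, embed_dim=32, hidden_dim=64, num_tags=5)
tokens = torch.randint(0, 100, (1, 7))
tags = model.viterbi_decode(model(tokens)[0])  # one tag index per token
```

The CRF layer is what distinguishes this family of taggers from per-token softmax classification: the transition matrix lets the decoder penalize invalid tag sequences (e.g. `I-PER` directly after `B-ORG`) globally over the whole sentence.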
