International Journal of Pattern Recognition and Artificial Intelligence

Joint Pre-Trained Chinese Named Entity Recognition Based on Bi-Directional Language Model


Abstract

Current named entity recognition (NER) systems are mainly based on convolutional or recurrent neural networks. To achieve high performance, these networks require large amounts of training data in the form of feature-engineered corpora and lexicons. Chinese NER is especially challenging because of the strong contextual dependence of Chinese characters: a character or phrase may carry many possible meanings in different contexts. To this end, we propose a model for Chinese NER that combines a pre-trained Bidirectional Encoder Representations from Transformers (BERT) language model with a joint bi-directional long short-term memory (Bi-LSTM) and conditional random field (CRF) model. The bottom network layer embeds Chinese characters and outputs character-level representations. These representations are then fed into a Bi-LSTM to capture contextual sequence information. The top layer of the proposed model is a CRF, which accounts for the dependencies between adjacent tags and jointly decodes the optimal tag chain. We conducted a series of extensive experiments to evaluate the improvements the proposed architecture brings on different datasets without relying heavily on handcrafted features or domain-specific knowledge. Experimental results show that the proposed model is effective and that character-level representations are of great significance for Chinese NER tasks. In addition, through this work we have compiled a new informal conversational message corpus, the autonomous bus information inquiry dataset, on which our method improves significantly over strong baselines.
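The joint decoding step described in the abstract can be illustrated with a minimal Viterbi decoder. This is a hedged sketch, not the authors' implementation: the tag set, the per-character emission scores (which in the full model would come from the BERT + Bi-LSTM layers), and the transition scores (which a CRF would learn during training) are all toy values chosen for illustration.

```python
# Toy tag set for a single entity type; the paper's tag inventory is not
# specified here, so this is an illustrative assumption.
TAGS = ["B-LOC", "I-LOC", "O"]

def viterbi_decode(emissions, transitions):
    """emissions: one dict of tag -> score per character (Bi-LSTM output).
    transitions: dict (prev_tag, tag) -> score; unseen pairs score 0.
    Returns the highest-scoring tag chain under the CRF."""
    scores = dict(emissions[0])        # best score of any path ending in each tag
    backpointers = []
    for emit in emissions[1:]:
        new_scores, bp = {}, {}
        for tag in TAGS:
            # Best previous tag to transition from into `tag`
            prev = max(TAGS, key=lambda p: scores[p] + transitions.get((p, tag), 0.0))
            new_scores[tag] = scores[prev] + transitions.get((prev, tag), 0.0) + emit[tag]
            bp[tag] = prev
        backpointers.append(bp)
        scores = new_scores
    best = max(TAGS, key=scores.get)   # backtrack from the best final tag
    path = [best]
    for bp in reversed(backpointers):
        path.append(bp[path[-1]])
    return path[::-1]

# Hand-set scores for a three-character sentence (hypothetical values):
emissions = [
    {"B-LOC": 2.0, "I-LOC": 0.0, "O": 1.0},
    {"B-LOC": 0.0, "I-LOC": 2.0, "O": 1.0},
    {"B-LOC": 0.0, "I-LOC": 0.0, "O": 2.0},
]
# The transition scores reward B-LOC -> I-LOC and penalize O -> I-LOC,
# mimicking the adjacent-tag dependencies a trained CRF would capture.
transitions = {("B-LOC", "I-LOC"): 1.0, ("O", "I-LOC"): -2.0}
print(viterbi_decode(emissions, transitions))  # ['B-LOC', 'I-LOC', 'O']
```

Decoding jointly, rather than picking the best tag per character independently, is what lets the CRF layer rule out invalid chains such as an `I-LOC` tag following `O`.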
