Journal: Neural Computing & Applications

Recurrent neural network with attention mechanism for language model



Abstract

The rapid growth of the Internet has driven the growth of textual data, and people extract the information they need from this mass of text to solve problems. Textual data may contain latent information such as the opinions of a crowd, opinions about a product, or market-relevant information. However, the problem of how to extract features from text must first be solved. A model that extracts text features using a neural network is called a neural network language model. The features are based on the n-gram model concept, i.e., the co-occurrence relationships between words. Word vectors are important because sentence vectors and document vectors still depend on understanding the relationships between words; on this basis, this study focuses on word vectors. This study assumes that words carry both "meaning in sentences" and "grammatical position." It uses a recurrent neural network with an attention mechanism to build a language model, evaluated on the Penn Treebank, WikiText-2, and NLPCC2017 text datasets. On these datasets, the proposed models achieve better perplexity.
