Conference of the European Chapter of the Association for Computational Linguistics

Contextual Bidirectional Long Short-Term Memory Recurrent Neural Network Language Models: A Generative Approach to Sentiment Analysis



Abstract

Traditional learning-based approaches to sentiment analysis of written text use the concept of bag-of-words or bag-of-n-grams, where a document is viewed as a set of terms or short combinations of terms disregarding grammar rules or word order. Novel approaches de-emphasize this concept and view the problem as a sequence classification problem. In this context, recurrent neural networks (RNNs) have achieved significant success. The idea is to use RNNs as discriminative binary classifiers to predict a positive or negative sentiment label at every word position then perform a type of pooling to get a sentence-level polarity. Here, we investigate a novel generative approach in which a separate probability distribution is estimated for every sentiment using language models (LMs) based on long short-term memory (LSTM) RNNs. We introduce a novel type of LM using a modified version of bidirectional LSTM (BLSTM) called contextual BLSTM (cBLSTM), where the probability of a word is estimated based on its full left and right contexts. Our approach is compared with a BLSTM binary classifier. Significant improvements are observed in classifying the IMDB movie review dataset. Further improvements are achieved via model combination.
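The generative decision rule described in the abstract can be written out as follows; this is only a sketch reconstructed from the description above (the notation is mine, and the paper's exact scoring and normalization of the bidirectional probabilities are not given in this abstract):

$$\hat{c} \;=\; \arg\max_{c \in \{\text{pos},\,\text{neg}\}} P(c)\, P(w_1,\dots,w_T \mid c),$$

where $P(w_1,\dots,w_T \mid c)$ is assigned by the LM trained on reviews of sentiment $c$, and the cBLSTM estimates each word from its full left and right contexts,

$$P_{\text{cBLSTM}}(w_t \mid w_1,\dots,w_{t-1},\, w_{t+1},\dots,w_T).$$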
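A minimal PyTorch sketch of this generative set-up, under simplifying assumptions: it uses ordinary left-to-right LSTM LMs rather than the paper's cBLSTM, and all names here (LSTMLanguageModel, sentence_log_prob, classify, lm_pos, lm_neg) are hypothetical. It only illustrates the decision rule of training one LM per sentiment class and labelling a review with the class whose LM assigns it the higher prior-weighted log-likelihood.

```python
import torch
import torch.nn as nn


class LSTMLanguageModel(nn.Module):
    """Plain left-to-right LSTM language model; one copy per sentiment class."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def sentence_log_prob(self, token_ids):
        # token_ids: LongTensor of shape (1, T) holding one tokenized sentence.
        inputs = token_ids[:, :-1]                      # w_1 .. w_{T-1}
        targets = token_ids[:, 1:]                      # w_2 .. w_T
        h, _ = self.lstm(self.embed(inputs))
        log_probs = torch.log_softmax(self.out(h), dim=-1)
        token_lp = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
        return token_lp.sum().item()                    # log P(sentence | class)


def classify(sentence_ids, class_lms, log_priors):
    """Generative rule: pick the class whose LM gives the highest
    prior-weighted log-likelihood to the sentence."""
    scores = {c: lm.sentence_log_prob(sentence_ids) + log_priors[c]
              for c, lm in class_lms.items()}
    return max(scores, key=scores.get)


# Hypothetical usage: in practice lm_pos / lm_neg would first be trained on
# positive / negative reviews respectively.
vocab_size = 10_000
lm_pos, lm_neg = LSTMLanguageModel(vocab_size), LSTMLanguageModel(vocab_size)
review = torch.randint(0, vocab_size, (1, 12))          # stand-in for a tokenized review
print(classify(review, {"pos": lm_pos, "neg": lm_neg}, {"pos": 0.0, "neg": 0.0}))
```

With untrained weights the output is arbitrary; the point of the sketch is only the per-class likelihood comparison, which is where the cBLSTM LMs of the paper would slot in.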
