Conference on Empirical Methods in Natural Language Processing (EMNLP)

Attention-based LSTM Network for Cross-Lingual Sentiment Classification

Abstract

Most state-of-the-art sentiment classification methods are based on supervised learning algorithms, which require large amounts of manually labeled data. However, labeled resources are usually imbalanced across languages. Cross-lingual sentiment classification tackles this problem by adapting the sentiment resources of a resource-rich language to resource-poor languages. In this study, we propose an attention-based bilingual representation learning model that learns the distributed semantics of documents in both the source and the target languages. In each language, we use a Long Short-Term Memory (LSTM) network to model the documents, an architecture that has proven very effective for word sequences. Meanwhile, we propose a hierarchical attention mechanism for the bilingual LSTM network: the sentence-level attention model learns which sentences of a document are more important for determining the overall sentiment, while the word-level attention model learns which words in each sentence are decisive. The proposed model achieves good results on a benchmark dataset with English as the source language and Chinese as the target language.
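The hierarchical mechanism described above can be sketched as two stacked attention layers: word-level attention pools each sentence's hidden states into a sentence vector, and sentence-level attention pools those into a document vector. The following is a minimal numpy sketch of that idea only; the paper's actual model trains these components jointly with bilingual LSTMs, whereas here the LSTM hidden states are replaced by random matrices and the attention parameters (`w_word`, `w_sent`) are hypothetical stand-ins.

```python
import numpy as np

def attention(H, w):
    """Soft attention pooling: weight rows of H by softmax(H @ w).

    H: (n, d) hidden states; w: (d,) attention parameter vector.
    Returns the weighted sum (d,) and the attention weights (n,).
    """
    scores = H @ w                       # (n,) unnormalized scores
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                 # softmax weights, sum to 1
    return alpha @ H, alpha

rng = np.random.default_rng(0)
d = 8
# Stand-ins for LSTM hidden states: 3 sentences of 5 words each.
doc = [rng.standard_normal((5, d)) for _ in range(3)]

w_word = rng.standard_normal(d)   # word-level attention parameters (hypothetical)
w_sent = rng.standard_normal(d)   # sentence-level attention parameters (hypothetical)

# Word-level attention: each sentence -> one sentence vector.
sent_vecs = np.stack([attention(H, w_word)[0] for H in doc])
# Sentence-level attention: sentence vectors -> one document vector.
doc_vec, sent_alpha = attention(sent_vecs, w_sent)
print(doc_vec.shape, sent_alpha)
```

The document vector `doc_vec` would then feed a sentiment classifier; `sent_alpha` shows which sentences the model attends to, mirroring the interpretability the abstract claims for the sentence-level attention.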
