
Fine-Tuning Word Embeddings for Aspect-Based Sentiment Analysis


Abstract

Nowadays word embeddings, also known as word vectors, play an important role in many natural language processing (NLP) tasks. In general, these word embeddings are learned by unsupervised models (e.g. Word2Vec, GloVe) from a large unannotated corpus, and they are independent of the task in which they are applied. In this paper we aim to enrich word embeddings by adding information from a specific task, namely aspect-based sentiment analysis. We propose a convolutional neural network model that takes a labeled data set and the word embeddings learned by an unsupervised model (e.g. Word2Vec) as input, and fine-tunes the embeddings to capture aspect category and sentiment information. We conduct experiments on restaurant review data (http://spidr-ursa.rutgers.edu/datasets/). Experimental results show that the fine-tuned word embeddings outperform word embeddings learned without supervision.
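To illustrate the general idea described in the abstract, the following is a minimal sketch (not the authors' code) of fine-tuning pretrained word embeddings with a convolutional network for joint aspect-category and sentiment prediction. All names and hyperparameters (kernel sizes, filter counts, number of aspect and sentiment classes) are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: a CNN that fine-tunes pretrained embeddings for
# aspect category and sentiment classification (assumed architecture).
import torch
import torch.nn as nn

class FineTuneCNN(nn.Module):
    def __init__(self, pretrained_embeddings, n_aspects, n_sentiments,
                 kernel_sizes=(3, 4, 5), n_filters=100):
        super().__init__()
        # Initialise from unsupervised vectors (e.g. Word2Vec) and keep them
        # trainable so the supervised signal can fine-tune them.
        self.embedding = nn.Embedding.from_pretrained(
            pretrained_embeddings, freeze=False)
        dim = pretrained_embeddings.size(1)
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, n_filters, k) for k in kernel_sizes)
        hidden = n_filters * len(kernel_sizes)
        # Two output heads: aspect category and sentiment polarity.
        self.aspect_head = nn.Linear(hidden, n_aspects)
        self.sentiment_head = nn.Linear(hidden, n_sentiments)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) of word indices
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, dim, seq_len)
        feats = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        h = torch.cat(feats, dim=1)
        return self.aspect_head(h), self.sentiment_head(h)

# Toy usage with random "pretrained" vectors for a 1000-word vocabulary.
pretrained = torch.randn(1000, 300)
model = FineTuneCNN(pretrained, n_aspects=5, n_sentiments=3)
aspect_logits, sentiment_logits = model(torch.randint(0, 1000, (8, 20)))
# After training, model.embedding.weight holds the fine-tuned embeddings.
```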
