
Multi-modality weakly labeled sentiment learning based on Explicit Emotion Signal for Chinese microblog



Abstract

Understanding users' sentiments from cross-media content that contains both texts and images is an important task for many social network applications. However, due to the semantic gap between cross-media features and sentiments, machine learning methods need a large number of human-labeled samples. Furthermore, for each kind of media content, many new human-labeled samples must constantly be added because new expressions of sentiment keep emerging. Fortunately, there are emotion signals, such as emoticons, that denote users' emotions in cross-media content. In order to use these weak labels to build a unified multi-modality sentiment learning framework, we propose an Explicit Emotion Signal (EES) based multi-modality sentiment learning approach that exploits a huge number of weakly labeled samples. Our approach has three advantages. First, only a few human-labeled samples are needed to reach the same performance obtained by traditional machine-learning-based sentiment prediction approaches. Second, the approach is flexible and can easily combine text-based and vision-based sentiment learning through deep neural networks. Third, because many weakly labeled samples can be used in EES, the trained model is more robust under domain transfer. In this paper, we first investigate the correlation between sentiments and emoticons and choose emoticons as the Explicit Emotion Signals in our approach; second, we build a two-stage multi-modality sentiment learning framework based on Explicit Emotion Signals. Our experimental results show that our approach not only achieves the best performance but also needs only 3% and 43% of the training samples to match the performance of the Visual Geometry Group (VGG) model on images and the Long Short-Term Memory (LSTM) model on texts, respectively.
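To make the two-stage idea concrete, the following is a minimal sketch (not the authors' released code) in PyTorch: an LSTM text encoder and a VGG-16 image encoder are fused into a single sentiment classifier, pretrained on emoticon-derived weak labels and then fine-tuned on a small human-labeled set. All class names, hyperparameters, and data loaders are illustrative assumptions.

```python
# Sketch of two-stage weakly supervised multi-modality sentiment learning:
# stage 1 trains on a large emoticon-labeled ("weak") set, stage 2 fine-tunes
# on a small human-labeled set. Shapes and hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import models


class TextEncoder(nn.Module):
    """LSTM over word embeddings; returns the final hidden state."""

    def __init__(self, vocab_size=50000, embed_dim=300, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):              # token_ids: (batch, seq_len)
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        return h_n[-1]                          # (batch, hidden_dim)


class ImageEncoder(nn.Module):
    """VGG-16 convolutional backbone with a small projection head."""

    def __init__(self, out_dim=256):
        super().__init__()
        vgg = models.vgg16(weights=None)        # no pretrained weights, to keep the sketch self-contained
        self.features = vgg.features
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        self.proj = nn.Linear(512 * 7 * 7, out_dim)

    def forward(self, images):                  # images: (batch, 3, 224, 224)
        x = self.pool(self.features(images)).flatten(1)
        return self.proj(x)                     # (batch, out_dim)


class MultiModalSentiment(nn.Module):
    """Concatenates text and image features and predicts a sentiment class."""

    def __init__(self, n_classes=2):
        super().__init__()
        self.text_enc = TextEncoder()
        self.image_enc = ImageEncoder()
        self.head = nn.Linear(256 + 256, n_classes)

    def forward(self, token_ids, images):
        fused = torch.cat([self.text_enc(token_ids), self.image_enc(images)], dim=1)
        return self.head(fused)


def train(model, loader, epochs, lr):
    """Generic supervised loop, reused for both the weak and the human-labeled stage."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for token_ids, images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(token_ids, images), labels)
            loss.backward()
            opt.step()


# Stage 1: pretrain on emoticon-derived weak labels, e.g.
#   train(model, weak_loader, epochs=5, lr=1e-4)
# Stage 2: fine-tune on the small human-labeled set, e.g.
#   train(model, human_loader, epochs=10, lr=1e-5)
```

The point of the two stages is that the abundant emoticon labels shape the encoders, so the second stage needs only a small fraction of human-labeled samples.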

Bibliographic record

  • Source
    Neurocomputing | 2018, Issue 10 | pp. 258-269 | 12 pages
  • Author affiliations

    Cognitive Science Department, Xiamen University; Fujian Key Laboratory of Brain-inspired Computing Technique and Applications, Xiamen University; Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, Minjiang University;

    Cognitive Science Department, Xiamen University; Fujian Key Laboratory of Brain-inspired Computing Technique and Applications, Xiamen University;

    Cognitive Science Department, Xiamen University; Fujian Key Laboratory of Brain-inspired Computing Technique and Applications, Xiamen University; Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, Minjiang University;

    Cognitive Science Department, Xiamen University; Fujian Key Laboratory of Brain-inspired Computing Technique and Applications, Xiamen University;

    College of Mathematics and Computer Science, Fuzhou University;

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI)
  • Original format: PDF
  • Language: English
  • Keywords

    Explicit Emotion Signal; Multi-modality sentiment learning; Cross media; Weakly labeled sample; Domain transfer;

  • Date added to database: 2022-08-18 02:05:24
