IEEE International Conference on Acoustics, Speech and Signal Processing

Gated Mechanism for Attention Based Multi Modal Sentiment Analysis



Abstract

Multimodal sentiment analysis has recently gained popularity because of its relevance to social media posts, customer service calls and video blogs. In this paper, we address three aspects of multimodal sentiment analysis: 1. cross-modal interaction learning, i.e. how multiple modalities contribute to the sentiment; 2. learning long-term dependencies in multimodal interactions; and 3. fusion of unimodal and cross-modal cues. Of these three, we find that learning cross-modal interactions is the most beneficial for this problem. We perform experiments on two benchmark datasets, the CMU Multimodal Opinion-level Sentiment Intensity (CMU-MOSI) and CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) corpora. Our approach yields accuracies of 83.9% and 81.1% on these two tasks respectively, which are absolute improvements of 1.6% and 1.34% over the current state of the art.
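The gating idea behind the fusion of unimodal and cross-modal cues can be illustrated with a minimal sketch. This is not the paper's exact parameterization: the function names, the per-dimension scalar weights, and the toy feature values below are all hypothetical, chosen only to show the common pattern where a learned sigmoid gate decides, elementwise, how much of the unimodal cue versus the cross-modal cue enters the fused representation.

```python
import math

def sigmoid(x):
    """Standard logistic function, squashing any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(unimodal, crossmodal, gate_weights, gate_bias):
    """Elementwise gated fusion (illustrative, not the paper's exact form):
        g_i     = sigmoid(w_u * u_i + w_c * c_i + b_i)
        fused_i = g_i * u_i + (1 - g_i) * c_i
    Each fused value is a convex combination of the unimodal cue u_i
    and the cross-modal cue c_i, with the mix learned via the gate."""
    fused = []
    for i, (u, c) in enumerate(zip(unimodal, crossmodal)):
        wu, wc = gate_weights[i]
        g = sigmoid(wu * u + wc * c + gate_bias[i])
        fused.append(g * u + (1.0 - g) * c)
    return fused

# Toy example: hypothetical text (unimodal) and audio-text (cross-modal) features.
u = [0.8, -0.2, 0.5]
c = [0.1, 0.9, -0.4]
W = [(1.0, -1.0), (0.5, 0.5), (2.0, 0.0)]  # made-up gate weights per dimension
b = [0.0, 0.0, 0.0]
print(gated_fusion(u, c, W, b))
```

Because the gate output lies in (0, 1), each fused component is guaranteed to stay between the two modality values it combines; in a trained model the gate weights would be learned jointly with the attention layers rather than fixed as here.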
