IEEE International Conference on Data Mining (ICDM) 2020

Deep-HOSeq: Deep Higher Order Sequence Fusion for Multimodal Sentiment Analysis

Abstract

Multimodal sentiment analysis utilizes multiple heterogeneous modalities for sentiment classification. Recent multimodal fusion schemes customize LSTMs to discover intra-modal dynamics and design sophisticated attention mechanisms to discover inter-modal dynamics from multimodal sequences. Although powerful, these schemes rely entirely on attention mechanisms, which is problematic due to two major drawbacks: 1) deceptive attention masks, and 2) training dynamics. Moreover, strenuous effort is required to optimize the hyperparameters of these consolidated architectures, in particular their custom-designed LSTMs constrained by attention schemes. In this research, we first propose a common network that discovers both intra-modal and inter-modal dynamics by utilizing basic LSTMs and tensor-based convolution networks. We then propose unique networks to encapsulate the temporal granularity among the modalities, which is essential when extracting information from asynchronous sequences. We then integrate these two kinds of information via a fusion layer and call our novel multimodal fusion scheme Deep-HOSeq (Deep network with higher-order Common and Unique Sequence information). The proposed Deep-HOSeq efficiently discovers all essential information from multimodal sequences, and the effectiveness of utilizing both types of information is demonstrated empirically on the CMU-MOSEI and CMU-MOSI benchmark datasets. The source code of the proposed Deep-HOSeq is available at https://github.com/sverma88/Deep-HOSeq–ICDM-2020.
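For intuition only, below is a minimal, hypothetical PyTorch sketch of the kind of architecture the abstract describes: per-modality LSTMs, a "common" sub-network that fuses their summaries through an outer product followed by a convolution, "unique" per-modality sub-networks that retain temporal granularity, and a final fusion layer. This is not the authors' implementation (see the linked repository for that); all class names, layer sizes, and the two-modality restriction are illustrative assumptions.

```python
# Hypothetical sketch of a Deep-HOSeq-style fusion scheme; not the authors' code.
import torch
import torch.nn as nn

class CommonNet(nn.Module):
    """Intra-modal LSTMs whose final states are fused via an outer product
    (a higher-order tensor) and processed by a small convolution network."""
    def __init__(self, in_dims, hid=32):
        super().__init__()
        # Assumes exactly two modalities for the outer-product fusion below.
        self.lstms = nn.ModuleList(nn.LSTM(d, hid, batch_first=True) for d in in_dims)
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(4)
        self.proj = nn.Linear(8 * 4 * 4, hid)

    def forward(self, xs):                       # xs: list of (B, T, d_m) tensors
        finals = [lstm(x)[0][:, -1] for lstm, x in zip(self.lstms, xs)]   # (B, hid) each
        t = torch.einsum('bi,bj->bij', finals[0], finals[1]).unsqueeze(1)  # (B, 1, hid, hid)
        h = self.pool(torch.relu(self.conv(t))).flatten(1)
        return self.proj(h)                      # inter-modal summary, (B, hid)

class UniqueNet(nn.Module):
    """Per-modality LSTM whose step-wise outputs preserve temporal granularity."""
    def __init__(self, in_dim, hid=32):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hid, batch_first=True)

    def forward(self, x):
        out, _ = self.lstm(x)                    # (B, T, hid)
        return out.mean(dim=1)                   # temporal summary, (B, hid)

class DeepHOSeqSketch(nn.Module):
    """Concatenates common and unique features and fuses them with one layer."""
    def __init__(self, in_dims, hid=32, n_outputs=1):
        super().__init__()
        self.common = CommonNet(in_dims, hid)
        self.uniques = nn.ModuleList(UniqueNet(d, hid) for d in in_dims)
        self.fusion = nn.Linear(hid * (1 + len(in_dims)), n_outputs)

    def forward(self, xs):
        feats = [self.common(xs)] + [u(x) for u, x in zip(self.uniques, xs)]
        return self.fusion(torch.cat(feats, dim=-1))

# Usage with two synthetic modalities (e.g. text and audio feature sequences).
model = DeepHOSeqSketch(in_dims=[300, 74])
text = torch.randn(8, 20, 300)                   # batch of 8, 20 time steps
audio = torch.randn(8, 20, 74)
print(model([text, audio]).shape)                # torch.Size([8, 1])
```

The sketch keeps the division of labor from the abstract: the common network looks across modalities through a higher-order (outer-product) interaction, while each unique network keeps within-modality temporal detail, and a single fusion layer combines both types of information.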