Pattern Recognition Letters
Cross-modal context-gated convolution for multi-modal sentiment analysis


Abstract

When inferring sentiments, relying on verbal clues alone is problematic because of their ambiguity. Adding related vocal and visual contexts as complements to verbal clues can be helpful. To infer sentiments from multi-modal temporal sequences, we need to identify both sentiment-related clues and their cross-modal interactions. However, sentiment-related behaviors of different modalities may not occur at the same time. These behaviors and their interactions are also sparse in time, making it hard to infer the correct sentiments. Besides, unaligned sequences from sensors also have varying sampling rates, which amplify the misalignment and sparsity mentioned above. While most previous multi-modal sentiment analysis works focus only on word-aligned sequences, we propose cross-modal context-gated convolution for unaligned sequences. Cross-modal context-gated convolution captures local cross-modal interactions, dealing with the misalignment while reducing the effect of unrelated information. Cross-modal context-gated convolution introduces the concept of a cross-modal context gate, enabling it to catch useful cross-modal interactions more effectively. Cross-modal context-gated convolution also brings more possibilities to the layer design for multi-modal sequential modeling. Experiments on multi-modal sentiment analysis datasets under both word-aligned and unaligned conditions show the validity of our approach. © 2021 Elsevier B.V. All rights reserved.
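To make the gating idea concrete, below is a minimal PyTorch-style sketch of how a cross-modal context gate could modulate a temporal convolution over one modality using a summary of another, unaligned modality. The module name, the mean-pooling of the context sequence, and all hyperparameters are illustrative assumptions for this sketch; they are not taken from the paper's implementation.

```python
import torch
import torch.nn as nn

class CrossModalContextGatedConv1d(nn.Module):
    """Illustrative sketch (not the authors' code): a 1D convolution over a
    target modality whose output channels are modulated by a sigmoid gate
    computed from another modality's pooled context."""

    def __init__(self, target_dim, context_dim, out_dim, kernel_size=3):
        super().__init__()
        # Temporal convolution over the target modality (e.g., text features).
        self.conv = nn.Conv1d(target_dim, out_dim, kernel_size,
                              padding=kernel_size // 2)
        # Projects a pooled summary of the other modality (e.g., audio/vision)
        # into a per-channel gate; pooling sidesteps the length mismatch of
        # unaligned sequences in this simplified version.
        self.gate_proj = nn.Linear(context_dim, out_dim)

    def forward(self, target_seq, context_seq):
        # target_seq:  (batch, T_target, target_dim)
        # context_seq: (batch, T_context, context_dim) -- lengths may differ
        h = self.conv(target_seq.transpose(1, 2))       # (B, out_dim, T_target)
        context = context_seq.mean(dim=1)                # crude temporal pooling
        gate = torch.sigmoid(self.gate_proj(context))    # (B, out_dim)
        h = h * gate.unsqueeze(-1)                        # gate each channel
        return h.transpose(1, 2)                          # (B, T_target, out_dim)

# Usage with unaligned lengths (50 text steps vs. 80 audio steps).
text = torch.randn(2, 50, 300)
audio = torch.randn(2, 80, 74)
layer = CrossModalContextGatedConv1d(target_dim=300, context_dim=74, out_dim=128)
out = layer(text, audio)
print(out.shape)  # torch.Size([2, 50, 128])
```

This sketch only shows the gating pattern; the paper's layer operates on local cross-modal interactions rather than a single pooled context vector, so a faithful implementation would compute gates from temporally local windows of the other modality.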
