Annual Meeting of the Association for Computational Linguistics

Divide, Conquer and Combine: Hierarchical Feature Fusion Network with Local and Global Perspectives for Multimodal Affective Computing



Abstract

We propose a general strategy named 'divide, conquer and combine' for multimodal fusion. Instead of directly fusing features at the holistic level, we conduct fusion hierarchically so that both local and global interactions are considered for a comprehensive interpretation of multimodal embeddings. In the 'divide' and 'conquer' stages, we conduct local fusion by exploring the interactions among the portions of the aligned feature vectors from the various modalities that lie within a sliding window, which ensures that each part of the multimodal embeddings is explored sufficiently. On this basis, global fusion is conducted in the 'combine' stage to explore the interconnections across local interactions, via an Attentive Bi-directional Skip-connected LSTM that directly connects distant local interactions and integrates two levels of attention mechanism. In this way, local interactions can exchange information sufficiently and thus yield an overall view of the multimodal information. Our method achieves state-of-the-art performance on multimodal affective computing with higher efficiency.
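The three-stage strategy in the abstract can be illustrated with a minimal pure-Python sketch. All function and variable names below are illustrative assumptions, not the authors' implementation: the real 'conquer' and 'combine' stages use learned attention and the Attentive Bi-directional Skip-connected LSTM, for which simple averaging stands in here.

```python
def divide(modalities, window, stride):
    """Slice aligned per-modality feature sequences into local windows."""
    length = len(modalities[0])
    for start in range(0, length - window + 1, stride):
        yield [m[start:start + window] for m in modalities]

def conquer(local_chunk):
    """Toy local fusion: concatenate the aligned vectors at each time
    position across modalities, then average (a stand-in for the
    paper's learned local-interaction module)."""
    fused = []
    for vectors in zip(*local_chunk):  # one tuple of vectors per time step
        concat = [x for vec in vectors for x in vec]
        fused.append(sum(concat) / len(concat))
    return fused

def combine(local_fusions):
    """Toy global fusion: mean over all local interactions (a stand-in
    for the Attentive Bi-directional Skip-connected LSTM)."""
    flat = [x for chunk in local_fusions for x in chunk]
    return sum(flat) / len(flat)

# Three aligned "modalities" (e.g., text/audio/video features), 4 steps each.
text  = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0], [1.0, 1.0]]
audio = [[0.2, 0.2], [0.4, 0.4], [0.6, 0.6], [0.8, 0.8]]
video = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5], [0.3, 0.7]]

local_interactions = [conquer(chunk)
                      for chunk in divide([text, audio, video],
                                          window=2, stride=1)]
global_view = combine(local_interactions)
```

With a window of 2 and stride of 1 over length-4 sequences, the 'divide' stage yields three overlapping local windows, so every part of the embeddings participates in at least one local interaction before the global stage aggregates them.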


