Divide, Conquer and Combine: Hierarchical Feature Fusion Network with Local and Global Perspectives for Multimodal Affective Computing

Abstract

We propose a general strategy named 'divide, conquer and combine' for multimodal fusion. Instead of directly fusing features at the holistic level, we conduct fusion hierarchically so that both local and global interactions are considered for a comprehensive interpretation of multimodal embeddings. In the 'divide' and 'conquer' stages, we perform local fusion by exploring the interactions among the portions of the aligned feature vectors from different modalities that fall within a sliding window, which ensures that each part of the multimodal embeddings is explored sufficiently. On this basis, global fusion is conducted in the 'combine' stage to explore the interconnections across local interactions, via an Attentive Bi-directional Skip-connected LSTM that directly connects distant local interactions and integrates two levels of attention mechanisms. In this way, local interactions can exchange information sufficiently and thus obtain an overall view of the multimodal information. Our method achieves state-of-the-art performance on multimodal affective computing with higher efficiency.
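As a rough illustration of the idea described above (not the authors' implementation), the following PyTorch sketch divides word-aligned multimodal feature sequences into windows, fuses each window locally, and then performs global fusion over the local representations with a bidirectional LSTM and a single attention-pooling step. All names, dimensions, and hyperparameters (HierarchicalFusion, dims, window, hidden) are illustrative assumptions, and the paper's skip connections and second attention level are omitted for brevity.

```python
import torch
import torch.nn as nn


class HierarchicalFusion(nn.Module):
    """Divide aligned multimodal features into windows, fuse each window locally,
    then fuse the local representations globally with a BiLSTM and attention pooling."""

    def __init__(self, dims=(300, 74, 35), window=4, hidden=128):
        super().__init__()
        self.window = window
        step_dim = sum(dims)  # concatenated text/audio/vision features per time step
        # "conquer": fuse the feature vectors inside one window into a single vector
        self.local_fusion = nn.Sequential(
            nn.Linear(step_dim * window, hidden),
            nn.ReLU(),
        )
        # "combine": global fusion across the sequence of local interactions
        self.global_rnn = nn.LSTM(hidden, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)  # attention weights over windows
        self.head = nn.Linear(2 * hidden, 1)  # e.g. a sentiment-intensity regressor

    def forward(self, text, audio, vision):
        # each input: (batch, seq_len, dim), aligned across modalities
        x = torch.cat([text, audio, vision], dim=-1)
        b, t, d = x.shape
        # "divide": split the sequence into non-overlapping windows
        # (the paper uses a sliding window; stride-1 windows would work similarly)
        t_trim = (t // self.window) * self.window
        windows = x[:, :t_trim].reshape(b, -1, self.window * d)
        local = self.local_fusion(windows)      # (batch, n_windows, hidden)
        states, _ = self.global_rnn(local)      # (batch, n_windows, 2*hidden)
        weights = torch.softmax(self.attn(states), dim=1)
        pooled = (weights * states).sum(dim=1)  # attention-pooled global view
        return self.head(pooled)


if __name__ == "__main__":
    model = HierarchicalFusion()
    text = torch.randn(2, 20, 300)   # e.g. word embeddings
    audio = torch.randn(2, 20, 74)   # e.g. acoustic features
    vision = torch.randn(2, 20, 35)  # e.g. facial features
    print(model(text, audio, vision).shape)  # torch.Size([2, 1])
```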
