
Context-dependent emotion recognition

Abstract

Most previous methods for emotion recognition focus on facial expressions and ignore the rich contextual information that conveys important emotional states. To make full use of contextual information to complement facial cues, we propose the Context-Dependent Net (CD-Net) for robust context-aware human emotion recognition. Inspired by the long-range dependency modeling of the transformer, we introduce the tubal transformer, which forms a shared feature representation space to facilitate interactions among the face, body, and context features. In addition, we introduce hierarchical feature fusion to recombine the enhanced multi-scale face, body, and context features for emotion classification. Experimentally, we verify the effectiveness of the proposed CD-Net on two large-scale emotion datasets, CAER-S and EMOTIC. On the one hand, quantitative evaluation results demonstrate the superiority of the proposed CD-Net over other state-of-the-art methods. On the other hand, visualization results show that CD-Net captures the dependencies among the face, body, and context components and focuses on the features most relevant to emotion.
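As a rough illustration of the shared-space idea described above (a minimal sketch, not the authors' implementation — all vectors, dimensions, and the averaging fusion are hypothetical stand-ins), the face, body, and context feature vectors can be treated as tokens in one sequence, updated jointly by scaled dot-product self-attention so each stream exchanges information with the others, and then fused for classification:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(tokens):
    """Scaled dot-product self-attention over a shared token space.

    Queries, keys, and values are all the tokens themselves, so the
    face, body, and context features each attend to all three streams.
    """
    d = len(tokens[0])
    out = []
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in tokens]
        w = softmax(scores)
        # Each output token is a convex combination of the input tokens.
        out.append([sum(wj * tok[i] for wj, tok in zip(w, tokens))
                    for i in range(d)])
    return out

# Hypothetical 4-d feature vectors for the three streams.
face = [1.0, 0.0, 0.5, 0.2]
body = [0.3, 1.0, 0.1, 0.0]
context = [0.2, 0.4, 1.0, 0.7]

# Shared space: every stream sees the other two through attention.
shared = attend([face, body, context])

# Toy late fusion by averaging; the paper's hierarchical fusion
# recombines multi-scale features instead.
fused = [sum(col) / 3 for col in zip(*shared)]
```

The sketch only shows why a shared attention space lets the context and body streams compensate for weak facial evidence; the actual CD-Net operates on multi-scale convolutional feature maps rather than single vectors.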
