Conference on Multimedia Information Processing and Retrieval

Feature-Level and Model-Level Audiovisual Fusion for Emotion Recognition in the Wild



Abstract

Emotion recognition plays an important role in human-computer interaction (HCI) and has been studied extensively for decades. Although tremendous improvements have been achieved for posed expressions, recognizing human emotions in "close-to-real-world" environments remains a challenge. In this paper, we propose two strategies to fuse information extracted from different modalities, i.e., audio and visual. Specifically, we utilize LBP-TOP, an ensemble of CNNs, and a bi-directional LSTM (BLSTM) to extract features from the visual channel, and the OpenSmile toolkit to extract features from the audio channel. Two kinds of fusion methods, i.e., feature-level fusion and model-level fusion, are developed to exploit the information extracted from the two channels. Experimental results on the EmotiW2018 AFEW dataset show that the proposed fusion methods significantly outperform the baseline methods and achieve performance comparable to state-of-the-art methods, with model-level fusion performing better when one of the channels fails entirely.
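The two fusion strategies in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the feature dimensions, the linear classifiers, the combination weight `alpha`, and the class count are all illustrative assumptions; only the overall pattern (concatenate-then-classify vs. classify-then-combine) follows the paper's description.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample features (dimensions are illustrative only).
visual_feat = rng.normal(size=128)  # e.g., from a CNN/BLSTM visual pipeline
audio_feat = rng.normal(size=64)    # e.g., from an OpenSmile-style descriptor

NUM_CLASSES = 7  # AFEW labels seven emotion categories


def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()


# Feature-level fusion: concatenate modality features, classify once.
# W_fused stands in for any trained classifier over the joint feature.
W_fused = rng.normal(size=(NUM_CLASSES, 128 + 64))
fused = np.concatenate([visual_feat, audio_feat])
p_feature_level = softmax(W_fused @ fused)

# Model-level fusion: classify each modality separately, then combine
# the class posteriors (a weighted average here; alpha is a guess, not
# a value from the paper).
W_vis = rng.normal(size=(NUM_CLASSES, 128))
W_aud = rng.normal(size=(NUM_CLASSES, 64))
p_vis = softmax(W_vis @ visual_feat)
p_aud = softmax(W_aud @ audio_feat)
alpha = 0.6
p_model_level = alpha * p_vis + (1 - alpha) * p_aud

# If one channel fails entirely (e.g., silent audio), model-level fusion
# can fall back to the surviving modality's posterior, which is one way
# to read the abstract's robustness claim.
p_audio_failed = p_vis
```

Feature-level fusion lets a single model learn cross-modal interactions, while model-level fusion keeps the modalities independent until the decision stage, which is why losing one channel degrades it more gracefully.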
