IEEE Transactions on Multimedia

Face Expression Recognition by Cross Modal Data Association



Abstract

We present a novel facial expression recognition framework based on audio-visual information analysis. We propose to model the correlation between the two modalities while allowing them to be treated as asynchronous streams. We also show that, by incorporating auditory information, our framework improves recognition performance while significantly reducing computational cost, since redundant or insignificant frames need not be processed. In particular, we design a single representative image for each image sequence as a weighted sum of registered face images, where the weights are derived from auditory features. We then apply a still-image-based technique to the expression recognition task; the framework, however, can be generalized to work with dynamic features as well. We performed experiments on the eNTERFACE'05 audio-visual emotional database, which contains six archetypal emotion classes: Happy, Sad, Surprise, Fear, Anger, and Disgust. We report one-to-one binary classification as well as multi-class classification performance, evaluated using both subject-dependent and subject-independent strategies. Furthermore, we compare our multi-class classification accuracies with those of previously published work that uses the same database. Our analyses show promising results.
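The weighted-sum fusion described in the abstract can be illustrated with a minimal sketch. The snippet below assumes per-frame audio energy as a stand-in for the auditory features used in the paper and assumes the face images are already registered; the function name and weighting scheme are illustrative, not the authors' implementation.

```python
import numpy as np

def fuse_sequence(frames, audio_energy):
    """Collapse a registered face-image sequence into one representative image.

    frames: array of shape (T, H, W), face images already registered (aligned).
    audio_energy: array of shape (T,), a per-frame auditory saliency score
        (a stand-in assumption for the paper's auditory features).
    Returns a single (H, W) image: the weighted sum of the frames, with
    weights normalized to sum to one.
    """
    frames = np.asarray(frames, dtype=np.float64)
    w = np.asarray(audio_energy, dtype=np.float64)
    w = np.clip(w, 0.0, None)               # keep weights non-negative
    if w.sum() == 0:                        # fall back to a plain average
        w = np.ones_like(w)
    w /= w.sum()                            # normalize weights
    return np.tensordot(w, frames, axes=1)  # sum_t w[t] * frames[t]

# Example: 30 aligned 64x64 frames with synthetic per-frame audio energy.
rng = np.random.default_rng(0)
frames = rng.random((30, 64, 64))
energy = rng.random(30)
representation = fuse_sequence(frames, energy)  # single still image fed to the classifier
```

The resulting single image can then be passed to any still-image expression classifier, which is the key computational saving the abstract points to: only one fused frame per sequence is processed instead of every video frame.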
