IEEE Transactions on Affective Computing

Multi-Objective Based Spatio-Temporal Feature Representation Learning Robust to Expression Intensity Variations for Facial Expression Recognition



Abstract

Facial expression recognition (FER) is increasingly gaining importance in various emerging affective computing applications. In practice, achieving accurate FER is challenging due to large inter-personal variations, such as variations in expression intensity. In this paper, we propose a new spatio-temporal feature representation learning method for FER that is robust to expression intensity variations. The proposed method utilizes representative expression-states (e.g., onset, apex, and offset of expressions), which can be identified in facial sequences regardless of expression intensity. The characteristics of facial expressions are encoded in two parts. In the first part, spatial image characteristics of the representative expression-state frames are learned via a convolutional neural network. Five objective terms are proposed to improve the expression-class separability of the spatial feature representation. In the second part, temporal characteristics of the spatial feature representation from the first part are learned with a long short-term memory (LSTM) network. Comprehensive experiments have been conducted on a deliberate expression dataset (MMI) and a spontaneous micro-expression dataset (CASME II). Experimental results show that the proposed method achieved higher recognition rates on both datasets compared to state-of-the-art methods.
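The two-part design described above, per-frame spatial features fed into a recurrent temporal model, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the network sizes, kernels, and random inputs are hypothetical placeholders, the five proposed objective terms and all training are omitted, and the three random arrays merely stand in for the onset, apex, and offset frames.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid-mode 2-D convolution (single channel, single filter)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def cnn_features(frame, kernels):
    """Toy spatial encoder: conv -> ReLU -> global average pool per filter."""
    return np.array([np.maximum(conv2d(frame, k), 0).mean() for k in kernels])

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; gate pre-activations stacked as [i, f, o, g]."""
    z = W @ x + U @ h + b
    n = h.size
    i = 1 / (1 + np.exp(-z[:n]))          # input gate
    f = 1 / (1 + np.exp(-z[n:2 * n]))     # forget gate
    o = 1 / (1 + np.exp(-z[2 * n:3 * n])) # output gate
    g = np.tanh(z[3 * n:])                # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Hypothetical sizes: 16x16 grayscale frames, 4 conv filters,
# 8 LSTM hidden units, 6 expression classes.
n_filters, hidden, n_classes = 4, 8, 6
kernels = [rng.standard_normal((3, 3)) for _ in range(n_filters)]
W = rng.standard_normal((4 * hidden, n_filters))
U = rng.standard_normal((4 * hidden, hidden))
b = np.zeros(4 * hidden)
W_out = rng.standard_normal((n_classes, hidden))

# Stand-ins for the three representative expression-state frames:
# onset, apex, offset.
sequence = [rng.standard_normal((16, 16)) for _ in range(3)]

h, c = np.zeros(hidden), np.zeros(hidden)
for frame in sequence:                # temporal part: LSTM over frame features
    x = cnn_features(frame, kernels)  # spatial part: CNN feature per frame
    h, c = lstm_step(x, h, c, W, U, b)

logits = W_out @ h                    # classify from the final hidden state
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)  # → (6,)
```

Because the sequence is indexed by expression-state rather than by raw frame position, the same three-step pipeline applies whether the underlying expression is subtle or exaggerated, which is the intuition behind the method's robustness to intensity variation.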

