IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics

Understanding Discrete Facial Expressions in Video Using an Emotion Avatar Image

Abstract

Existing video-based facial expression recognition techniques analyze the geometry-based and appearance-based information in every frame as well as explore the temporal relation among frames. In contrast, we present a new image-based representation and an associated reference image, called the emotion avatar image (EAI) and the avatar reference, respectively. This representation leverages the out-of-plane head rotation. It is not only robust to outliers but also provides a method to aggregate dynamic information from expressions of various lengths. The approach to facial expression analysis consists of the following steps: 1) face detection; 2) face registration of video frames with the avatar reference to form the EAI representation; 3) computation of features from EAIs using both local binary patterns and local phase quantization; and 4) classification of the features as one of the emotion types using a linear support vector machine classifier. Our system is tested on the Facial Expression Recognition and Analysis Challenge (FERA2011) data, i.e., the Geneva Multimodal Emotion Portrayal-Facial Expression Recognition and Analysis Challenge (GEMEP-FERA) data set. The experimental results demonstrate that the information captured in an EAI for a facial expression is a very strong cue for emotion inference. Moreover, our method suppresses person-specific information for emotion and performs well on unseen data.
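The four-step pipeline above can be illustrated with a short sketch. The Python code below is a hypothetical re-creation using off-the-shelf components (an OpenCV Haar cascade for detection, scikit-image for LBP, a simplified LPQ filter bank, and scikit-learn's LinearSVC), not the authors' implementation; in particular, the paper registers every frame to the avatar reference with SIFT flow, whereas here a plain crop-and-resize followed by frame averaging stands in for the EAI construction.

# Hypothetical sketch of the EAI-style pipeline described in the abstract:
# detect the face in each frame, map it to a canonical reference, average the
# aligned faces into a single image, describe that image with LBP + LPQ
# histograms, and classify the emotion with a linear SVM.
import cv2
import numpy as np
from scipy.signal import convolve2d
from skimage.feature import local_binary_pattern
from sklearn.svm import LinearSVC

FACE_SIZE = (64, 64)  # canonical resolution of the aligned face (assumption)
_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_and_align(frame):
    """Steps 1-2 (simplified): detect the largest face and rescale it to the
    canonical size; the paper instead aligns frames to an avatar reference
    via SIFT flow."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if frame.ndim == 3 else frame
    faces = _cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        gray = gray[y:y + h, x:x + w]
    return cv2.resize(gray, FACE_SIZE)

def emotion_avatar_image(frames):
    """Aggregate the aligned faces of a whole clip into one averaged image, so
    clips of different lengths yield a single fixed-size representation."""
    aligned = [detect_and_align(f).astype(np.float64) for f in frames]
    return np.mean(aligned, axis=0)

def lbp_histogram(img, points=8, radius=1):
    """Uniform LBP histogram (points + 2 bins) of the averaged face."""
    codes = local_binary_pattern(img, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2))
    return hist / max(hist.sum(), 1)

def lpq_histogram(img, win=7):
    """Simplified local phase quantization: quantize the signs of the real and
    imaginary STFT responses at four low frequencies into 8-bit codes."""
    x = np.arange(win) - (win - 1) / 2.0
    w0 = np.ones(win)                    # zero-frequency (averaging) filter
    w1 = np.exp(-2j * np.pi * x / win)   # lowest non-zero frequency 1/win
    def stft(row_f, col_f):              # separable 2-D filtering
        resp = convolve2d(img, row_f[np.newaxis, :], mode="valid")
        return convolve2d(resp, col_f[:, np.newaxis], mode="valid")
    responses = [stft(w1, w0), stft(w0, w1), stft(w1, w1), stft(w1, np.conj(w1))]
    codes = np.zeros(responses[0].shape, dtype=np.int32)
    bit = 0
    for resp in responses:
        for part in (np.real(resp), np.imag(resp)):
            codes += (part >= 0).astype(np.int32) << bit
            bit += 1
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)

def eai_features(frames):
    """Step 3: concatenate LBP and LPQ histograms computed on the EAI."""
    eai = emotion_avatar_image(frames)
    return np.concatenate([lbp_histogram(eai), lpq_histogram(eai)])

# Step 4 (usage, assuming `train_clips` is a list of frame lists and `labels`
# holds the corresponding emotion categories, e.g. from GEMEP-FERA):
# clf = LinearSVC().fit(np.stack([eai_features(c) for c in train_clips]), labels)
# predicted = clf.predict(eai_features(test_clip)[np.newaxis, :])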
