Frontiers of Computer Science in China

Pose-robust feature learning for facial expression recognition

Abstract

Automatic facial expression recognition (FER) from non-frontal views is a challenging research topic which has recently started to attract the attention of the research community. Pose variations are difficult to tackle, and many face analysis methods require sophisticated normalization and initialization procedures. Head-pose-invariant facial expression recognition therefore remains an open issue for traditional methods. In this paper, we propose a novel approach for pose-invariant FER based on pose-robust features learned with two deep learning methods: a principal component analysis network (PCANet) and convolutional neural networks (CNN), together referred to as PRP-CNN. In the first stage, PCANet learns features from unlabeled frontal face images. In the second stage, these features serve as the targets of a CNN that learns a feature mapping between frontal and non-frontal faces. Non-frontal face images are then described by the learned mapping, yielding unified descriptors for face images of arbitrary pose. Finally, the pose-robust features are used to train a single classifier for FER instead of training a separate model for each specific pose. Our method does not require pose or landmark annotation and can recognize facial expressions across a wide range of head orientations. Extensive experiments on two public databases show that our framework yields dramatic improvements in facial expression analysis.
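The two-stage pipeline sketched in the abstract can be illustrated in code. The following is a minimal sketch, not the authors' implementation: it uses toy random arrays in place of face images, a single PCANet stage learned with scikit-learn's PCA, a small PyTorch CNN that regresses the frontal PCANet descriptors from non-frontal views, and a linear SVM standing in for the single expression classifier. All sizes (32x32 images, 7x7 patches, 8 filters, network shape) and the choice of classifier are assumptions for illustration only.

# Minimal sketch of the PRP-CNN idea described above -- illustrative only.
# Toy random arrays stand in for face images; patch size, filter count,
# network shape and the SVM classifier are assumptions, not the paper's setup.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

PATCH, N_FILTERS, IMG = 7, 8, 32   # hypothetical patch size, filter count, image size

def learn_pcanet_filters(frontal_imgs):
    """Stage 1: learn one stage of PCANet filters from unlabeled frontal faces."""
    patches = []
    for img in frontal_imgs:
        for i in range(0, IMG - PATCH, 4):
            for j in range(0, IMG - PATCH, 4):
                p = img[i:i + PATCH, j:j + PATCH].ravel()
                patches.append(p - p.mean())           # patch-mean removal, as in PCANet
    pca = PCA(n_components=N_FILTERS).fit(np.array(patches))
    return pca.components_.reshape(N_FILTERS, 1, PATCH, PATCH)  # PCA filters as conv kernels

def pcanet_features(imgs, filters):
    """Describe images by their responses to the learned PCA filters."""
    x = torch.tensor(np.asarray(imgs), dtype=torch.float32).unsqueeze(1)
    w = torch.tensor(filters, dtype=torch.float32)
    resp = nn.functional.conv2d(x, w, padding=PATCH // 2)
    return resp.flatten(1)                             # one descriptor per image

class MappingCNN(nn.Module):
    """Stage 2: CNN mapping a face of arbitrary pose to the frontal PCANet descriptor."""
    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * (IMG // 4) ** 2, out_dim))

    def forward(self, x):
        return self.net(x)

# Toy data in place of real face databases.
rng = np.random.default_rng(0)
frontal = rng.random((64, IMG, IMG)).astype(np.float32)      # unlabeled frontal faces
nonfrontal = rng.random((64, IMG, IMG)).astype(np.float32)   # non-frontal views
labels = rng.integers(0, 6, 64)                              # six expression classes

filters = learn_pcanet_filters(frontal)
targets = pcanet_features(frontal, filters)                  # regression targets for the CNN

cnn = MappingCNN(out_dim=targets.shape[1])
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
x = torch.tensor(nonfrontal).unsqueeze(1)
for _ in range(50):                                          # fit the frontal/non-frontal feature mapping
    opt.zero_grad()
    loss = nn.functional.mse_loss(cnn(x), targets)
    loss.backward()
    opt.step()

# Pose-robust descriptors feed a single classifier, instead of one model per pose.
pose_robust = cnn(x).detach().numpy()
clf = LinearSVC().fit(pose_robust, labels)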