12th IEEE International Conference on Automatic Face and Gesture Recognition

AUMPNet: Simultaneous Action Units Detection and Intensity Estimation on Multipose Facial Images Using a Single Convolutional Neural Network



Abstract

This paper presents a unified convolutional neural network (CNN), named AUMPNet, that performs both Action Unit (AU) detection and intensity estimation on facial images across multiple poses. Although the literature offers a variety of methods for facial expression analysis, only a few of them can handle head pose variations. It is therefore essential to develop models that work on non-frontal face images, for instance those obtained from unconstrained environments. To cope with the problems raised by pose variations, a single CNN based on region and multitask learning is proposed for both the AU detection and intensity estimation tasks. In addition, the available head pose information is added to the multitask loss as a constraint on the network optimization, pushing the network towards learning better representations. As opposed to current approaches that require ad hoc models for every single AU in each task, the proposed network simultaneously learns occurrence and intensity levels for all AUs. AUMPNet was evaluated on an extended version of the BP4D-Spontaneous database, synthesized into nine different head poses and made available to participants of the FG 2017 Facial Expression Recognition and Analysis Challenge (FERA 2017). Using the challenge metrics, the results surpass the FERA 2017 baseline by 0.054 in F1-score for AU detection and by 0.182 in ICC(3, 1) for intensity estimation.
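
To illustrate the idea of a single network handling both tasks under a pose constraint, the sketch below shows a minimal multitask CNN in PyTorch. It is not the authors' architecture: the backbone, layer sizes, number of AUs and poses, and the loss weights (lambda_*) are illustrative assumptions. It only demonstrates the general pattern described in the abstract, namely one shared feature extractor with separate heads for AU occurrence (multi-label detection), AU intensity (regression), and head pose (classification), combined in one multitask loss.

    # Minimal sketch, not the paper's exact model: one shared CNN backbone,
    # three task heads, and a combined multitask loss in which the head-pose
    # term acts as an additional constraint on the shared representation.
    import torch
    import torch.nn as nn

    class MultitaskAUNet(nn.Module):
        def __init__(self, num_aus=12, num_poses=9):  # counts are assumptions
            super().__init__()
            # Small convolutional backbone shared by all tasks.
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.detect = nn.Linear(64, num_aus)     # AU occurrence logits
            self.intensity = nn.Linear(64, num_aus)  # AU intensity estimates
            self.pose = nn.Linear(64, num_poses)     # head-pose logits (constraint)

        def forward(self, x):
            feats = self.backbone(x)
            return self.detect(feats), self.intensity(feats), self.pose(feats)

    def multitask_loss(outputs, targets,
                       lambda_det=1.0, lambda_int=1.0, lambda_pose=0.5):
        # Weighted sum of detection, intensity, and pose terms; the weights
        # here are placeholders, not values from the paper.
        det_logits, intensities, pose_logits = outputs
        au_occ, au_int, pose_labels = targets
        loss_det = nn.functional.binary_cross_entropy_with_logits(det_logits, au_occ)
        loss_int = nn.functional.mse_loss(intensities, au_int)
        loss_pose = nn.functional.cross_entropy(pose_logits, pose_labels)
        return lambda_det * loss_det + lambda_int * loss_int + lambda_pose * loss_pose

Because the heads share one backbone and one loss, a single training run covers occurrence and intensity for all AUs at once, in contrast to approaches that fit a separate model per AU and per task.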


