Published in: 12th IEEE International Conference on Automatic Face and Gesture Recognition

Multi View Facial Action Unit Detection Based on CNN and BLSTM-RNN



Abstract

This paper presents our work in the FG 2017 Facial Expression Recognition and Analysis challenge (FERA 2017), where we participated in the AU occurrence sub-challenge. Our AU occurrence recognition is based on deep learning: we design convolutional neural network (CNN) models for two tasks, facial view recognition and AU occurrence recognition. For facial view recognition, our model achieves 97.7% accuracy on the validation set across the 9 facial views. For AU occurrence recognition, we use both visual features and the temporal information of the dataset: CNN models extract deep visual features, and a BLSTM-RNN then learns high-level features in the time domain. During training, we split the dataset into 9 parts according to the 9 facial views, and each model is trained on a specific view. At recognition time, we first recognize the facial view and then choose the corresponding model for AU occurrence recognition. Our method shows good performance: the F1 score on the test data is 0.507 and the accuracy is 0.735.
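The two-stage inference the abstract describes — recognize the facial view first, then route the sequence to the model trained for that view — can be sketched in plain Python. This is an illustrative outline only: all function and model names here are hypothetical stand-ins, whereas in the paper the view classifier is a CNN and each per-view AU model is a CNN feature extractor followed by a BLSTM-RNN.

```python
# Sketch of the view-routing inference pipeline (names are illustrative,
# not from the paper; the trained networks are stubbed out below).

NUM_VIEWS = 9  # the dataset covers 9 facial views

def route_and_detect(frame_sequence, view_classifier, au_models):
    """Classify the facial view of a frame sequence, then run the
    AU-occurrence model trained specifically for that view."""
    view = view_classifier(frame_sequence)        # integer in [0, NUM_VIEWS)
    assert 0 <= view < NUM_VIEWS, "unknown facial view"
    return au_models[view](frame_sequence)        # per-AU occurrence labels

# Stub models standing in for the trained CNN / CNN+BLSTM networks:
def stub_view_classifier(seq):
    return len(seq) % NUM_VIEWS                   # placeholder "prediction"

def make_stub_au_model(view):
    # Pretend each view-specific model flags AU12 only for its own view.
    return lambda seq: {"AU12": view == 3}

au_models = [make_stub_au_model(v) for v in range(NUM_VIEWS)]
result = route_and_detect([0] * 3, stub_view_classifier, au_models)
```

The design choice this illustrates is that no single AU model sees all poses; each of the 9 models only ever trains and predicts on its own view, with the view classifier acting as a dispatcher.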

