Journal of Information Science and Engineering

A Novel Dual CNN Architecture with LogicMax for Facial Expression Recognition

Abstract

Facial expressions convey important features for recognizing human emotions. Accurately classifying facial expressions is a challenging task due to high intra-class correlation. Conventional methods depend on classifying handcrafted features, such as the scale-invariant feature transform and local binary patterns, to predict the emotion. In recent years, deep learning techniques have been used to boost the accuracy of facial expression recognition (FER) models. Although this has improved accuracy on standard datasets, FER models must still contend with problems such as face occlusion and intra-class variance. In this paper, we use two convolutional neural networks that adopt the VGG16 architecture as a base network via transfer learning. This paper explains how the difficulty of classifying highly intra-class-correlated facial expressions is tackled through an in-depth investigation of Facial Action Coding System (FACS) action units. We use a novel LogicMax layer at the end of the model to boost the accuracy of the FER model. Classification metrics, namely accuracy, precision, recall, and F1 score, are calculated to evaluate model performance on the CK+ and JAFFE datasets. The model is tested using 10-fold cross-validation, obtaining classification accuracy rates of 98.62% and 94.86% on the CK+ and JAFFE datasets, respectively. The experimental results also include a feature-map visualization of 64 convolutional filters from the two convolutional neural networks.
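The abstract does not give implementation details of the dual-branch network or the LogicMax layer, so the following is only a minimal sketch: it assumes two VGG16 feature extractors with frozen ImageNet weights (transfer learning) whose pooled features are concatenated before a small classification head. The plain logits/softmax output stands in for the unspecified LogicMax layer, and the input size, seven emotion classes, and concatenation-based fusion are all assumptions, not the paper's stated design.

```python
# Minimal sketch of a two-branch VGG16 transfer-learning classifier for FER.
# Assumptions (not taken from the paper): 7 emotion classes, identical branches,
# feature fusion by concatenation, softmax head in place of the LogicMax layer.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 7  # assumed: 7 basic emotion classes (CK+/JAFFE-style labels)

class DualVGG16(nn.Module):
    """Two frozen VGG16 convolutional bases whose pooled features are
    concatenated and fed to a small classification head."""
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        weights = models.VGG16_Weights.IMAGENET1K_V1
        self.branch_a = models.vgg16(weights=weights).features
        self.branch_b = models.vgg16(weights=weights).features
        for p in list(self.branch_a.parameters()) + list(self.branch_b.parameters()):
            p.requires_grad = False          # transfer learning: freeze conv bases
        self.pool = nn.AdaptiveAvgPool2d(1)  # 512-channel descriptor per branch
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 2, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),     # logits; LogicMax/softmax would go here
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fa = self.pool(self.branch_a(x))
        fb = self.pool(self.branch_b(x))
        return self.head(torch.cat([fa, fb], dim=1))

model = DualVGG16()
logits = model(torch.randn(1, 3, 224, 224))  # assumed 224x224 RGB face crops
```

In this sketch the two branches are identical; the paper's dual architecture may instead differ per branch (for example, different crops, preprocessing, or feature levels), which the abstract does not specify.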
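The evaluation protocol described in the abstract (10-fold cross-validation with accuracy, precision, recall, and F1 score) can be sketched with scikit-learn. The hypothetical model_factory below stands in for any estimator with fit/predict, not the paper's actual training loop, and macro averaging of the per-class metrics is an assumption.

```python
# Sketch of 10-fold cross-validated accuracy/precision/recall/F1 evaluation.
# model_factory is a hypothetical callable returning a fresh sklearn-style estimator.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def evaluate_10_fold(model_factory, X, y):
    """Train a fresh model per fold and return mean (accuracy, precision, recall, F1)."""
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        model = model_factory()                    # fresh, untrained model per fold
        model.fit(X[train_idx], y[train_idx])
        y_pred = model.predict(X[test_idx])
        acc = accuracy_score(y[test_idx], y_pred)
        prec, rec, f1, _ = precision_recall_fscore_support(
            y[test_idx], y_pred, average="macro", zero_division=0)  # macro avg assumed
        scores.append((acc, prec, rec, f1))
    return np.mean(scores, axis=0)
```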
