European Conference on Computer Vision

Deep Multi-task Learning to Recognise Subtle Facial Expressions of Mental States



Abstract

Facial expression recognition is a topical task. However, very little research investigates subtle expression recognition, which is important for mental activity analysis, deception detection, etc. We address subtle expression recognition through convolutional neural networks (CNNs) by developing multi-task learning (MTL) methods to effectively leverage a side task: facial landmark detection. Existing MTL methods follow a design pattern of shared bottom CNN layers and task-specific top layers. However, the sharing architecture is usually heuristically chosen, as it is difficult to decide which layers should be shared. Our approach is composed of (1) a novel MTL framework that automatically learns which layers to share through optimisation under tensor trace norm regularisation and (2) an invariant representation learning approach that allows the CNN to leverage tasks defined on disjoint datasets without suffering from dataset distribution shift. To advance subtle expression recognition, we contribute a Large-scale Subtle Emotions and Mental States in the Wild database (LSEMSW). LSEMSW includes a variety of cognitive states as well as basic emotions. It contains 176K images, manually annotated with 13 emotions, and thus provides the first subtle expression dataset large enough for training deep CNNs. Evaluations on the LSEMSW and 300-W (landmark) databases show the effectiveness of the proposed methods. In addition, we investigate transferring knowledge learned from the LSEMSW database to traditional (non-subtle) expression recognition, achieving very competitive performance on the Oulu-Casia NIR&Vis and CK+ databases via transfer learning.
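The abstract's key idea is to let optimisation decide which layers tasks share by stacking corresponding per-task layer weights into one tensor and penalising its trace (nuclear) norm, so that shared low-rank structure is encouraged rather than hand-picked. The sketch below illustrates that penalty only; the function names, shapes, and the choice of summing over mode unfoldings are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def trace_norm(matrix):
    """Trace (nuclear) norm: the sum of a matrix's singular values."""
    return np.linalg.svd(matrix, compute_uv=False).sum()

def tensor_trace_norm(weights_per_task):
    """Stack each task's weight matrix for one layer into a 3-way tensor
    and sum the trace norms of its mode unfoldings, a common convex
    surrogate for low tensor rank (illustrative assumption)."""
    tensor = np.stack(weights_per_task)  # shape: (n_tasks, out_dim, in_dim)
    total = 0.0
    for mode in range(tensor.ndim):
        # Unfold the tensor along `mode` into a matrix and penalise it.
        unfolded = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        total += trace_norm(unfolded)
    return total

# Identical per-task weights form a rank-1 stack, so the penalty stays
# small; adding this term to the task losses nudges layers toward sharing.
w = np.ones((4, 4))
penalty = tensor_trace_norm([w, w])
```

In training, a term like `lambda * tensor_trace_norm(...)` would be added to the sum of the per-task losses for each nominally shared layer, with `lambda` trading off sharing strength against task fit.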


