Journal of Machine Learning Research

A Model of the Perception of Facial Expressions of Emotion by Humans: Research Overview and Perspectives

Abstract

In cognitive science and neuroscience, there have been two leading models describing how humans perceive and classify facial expressions of emotion---the continuous and the categorical model. The continuous model defines each facial expression of emotion as a feature vector in a face space. This model explains, for example, how expressions of emotion can be seen at different intensities. In contrast, the categorical model consists of C classifiers, each tuned to a specific emotion category. This model explains, among other findings, why the images in a morphing sequence between a happy and a surprise face are perceived as either happy or surprise but not something in between. While the continuous model has a more difficult time justifying this latter finding, the categorical model is not as good when it comes to explaining how expressions are recognized at different intensities or modes. Most importantly, both models have problems explaining how one can recognize combinations of emotion categories such as happily surprised versus angrily surprised versus surprise. To resolve these issues, in the past several years, we have worked on a revised model that justifies the results reported in the cognitive science and neuroscience literature. This model consists of C distinct continuous spaces. Multiple (compound) emotion categories can be recognized by linearly combining these C face spaces. The dimensions of these spaces are shown to be mostly configural. According to this model, the major task for the classification of facial expressions of emotion is precise, detailed detection of facial landmarks rather than recognition. We provide an overview of the literature justifying the model, show how the resulting model can be employed to build algorithms for the recognition of facial expressions of emotion, and propose research directions for machine learning and computer vision researchers to keep pushing the state of the art in these areas. We also discuss how the model can aid in studies of human perception, social interactions and disorders.
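
The abstract describes a model built from C distinct continuous face spaces whose linear combination yields compound emotion categories. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the prototype vectors, feature values, and function names are hypothetical, and the configural features are assumed to come from some prior facial-landmark detection step.

```python
import numpy as np

# Hypothetical prototypes: one continuous space per emotion category,
# each represented by a direction in a space of configural features
# (e.g., normalized distances between detected facial landmarks).
C_PROTOTYPES = {
    "happy":    np.array([0.9, 0.1, 0.2]),
    "surprise": np.array([0.2, 0.8, 0.7]),
    "angry":    np.array([0.1, 0.3, 0.9]),
}

def category_scores(features: np.ndarray) -> dict:
    """Score an expression in each category's continuous space.

    The score grows with the expression's intensity along that
    category's direction, which is how a continuous space can
    account for different intensities of the same emotion.
    """
    return {name: float(features @ proto) / float(proto @ proto)
            for name, proto in C_PROTOTYPES.items()}

def compound_score(features: np.ndarray, categories: list, weights=None) -> float:
    """Linearly combine per-category scores to rate a compound emotion,
    e.g. 'happily surprised' as a weighted sum of happy and surprise."""
    scores = category_scores(features)
    if weights is None:
        weights = [1.0 / len(categories)] * len(categories)
    return sum(w * scores[c] for w, c in zip(weights, categories))

# Example with made-up configural features from a detected face.
x = np.array([0.6, 0.6, 0.4])
print(category_scores(x))                      # per-category intensities
print(compound_score(x, ["happy", "surprise"]))  # 'happily surprised'
```

The sketch only illustrates the abstract's point that, once landmarks are detected precisely, classification reduces to projecting configural features onto category-specific continuous spaces and combining them linearly for compound categories.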
