Pattern Recognition: The Journal of the Pattern Recognition Society

Facial expression recognition in dynamic sequences: An integrated approach

Abstract

Automatic facial expression analysis aims to analyse human facial expressions and classify them into discrete categories. Existing methods rely on extracting information from video sequences and either apply some form of subjective thresholding of dynamic information or attempt to identify the particular individual frames in which the expected behaviour occurs. These methods are inefficient: they require additional subjective information or tedious manual work, or they fail to exploit the information contained in the dynamic signature of facial movements for the task of expression recognition. In this paper, a novel framework is proposed for automatic facial expression analysis which extracts salient information from video sequences but does not rely on any subjective preprocessing or additional user-supplied information to select frames with peak expressions. Experiments demonstrate that the proposed method outperforms static expression recognition systems in terms of recognition rate. The approach does not rely on action units (AUs) and therefore eliminates errors that would otherwise propagate to the final result from incorrect initial identification of AUs. The proposed framework explores a parametric space of over 300 dimensions and is tested with six state-of-the-art machine learning techniques. Such robust and extensive experimentation provides an important foundation for assessing the performance of future work. A further contribution of the paper is a user study, conducted to investigate the correlation between human cognitive systems and the proposed framework, in order to better understand human emotion classification and the reliability of public databases.
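The abstract contrasts using the whole dynamic sequence with selecting individual "peak" frames. As a minimal illustrative sketch only (the paper's actual features and classifiers are not specified here), a sequence-level "dynamic signature" could pool per-frame features together with their frame-to-frame changes, so no single frame has to be chosen by hand; the function name and feature layout below are hypothetical:

```python
import numpy as np

def dynamic_signature(frames: np.ndarray) -> np.ndarray:
    """Summarise a whole sequence (T frames x D per-frame features) into one
    fixed-length vector: per-feature means plus the mean absolute
    frame-to-frame change, capturing both appearance and motion."""
    means = frames.mean(axis=0)                       # static component, shape (D,)
    deltas = np.abs(np.diff(frames, axis=0)).mean(axis=0)  # temporal component, shape (D,)
    return np.concatenate([means, deltas])            # shape (2*D,)

# Toy sequence: 10 frames, 4 facial-landmark features per frame.
rng = np.random.default_rng(0)
seq = rng.normal(size=(10, 4))
sig = dynamic_signature(seq)
print(sig.shape)  # (8,)
```

The resulting vector could then be fed to any standard classifier, which mirrors the abstract's point that the dynamic information itself, rather than a manually thresholded peak frame, drives recognition.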
