International Conference on Advanced Concepts for Intelligent Vision Systems (ACIVS 2010)

Dynamic Facial Expression Recognition Using Boosted Component-Based Spatiotemporal Features and Multi-classifier Fusion



Abstract

Feature extraction and representation are critical in facial expression recognition. Facial features can be extracted from either static images or dynamic image sequences; however, static images may not provide as much discriminative information as dynamic image sequences. From the feature-extraction point of view, geometric features are often sensitive to shape and resolution variations, whereas appearance-based features may contain redundant information. In this paper, we propose a component-based facial expression recognition method that uses spatiotemporal features extracted from dynamic image sequences, where the features are computed over facial areas centered at 38 detected fiducial interest points. Since not all features are equally important for expression recognition, we use the AdaBoost algorithm to select the most discriminative ones. Moreover, we present a multi-classifier fusion framework based on the median, mean, and product rules of classifier fusion to improve expression classification accuracy. Experimental studies on the Cohn-Kanade database show that combining boosted component-based spatiotemporal features with the multi-classifier fusion strategy yields better expression recognition performance than earlier approaches.
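As an illustration of the boosted feature-selection step described above, the sketch below ranks precomputed component-based descriptors with AdaBoost and keeps the highest-scoring ones. The feature matrix, the labels, the number of retained features, and the use of scikit-learn's AdaBoostClassifier (whose default weak learner is a depth-1 decision stump) are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: AdaBoost-based ranking of precomputed spatiotemporal descriptors.
# All shapes, labels, and the choice of scikit-learn are illustrative assumptions.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 38 * 16))   # hypothetical: 38 facial components, a 16-bin descriptor each
y = rng.integers(0, 6, size=200)      # hypothetical labels for six basic expressions

# The default weak learner is a depth-1 decision stump, so each boosting round
# concentrates on a single feature and feature_importances_ gives a usable ranking.
booster = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)

top_k = 60                             # illustrative number of features to keep
selected = np.argsort(booster.feature_importances_)[::-1][:top_k]
X_boosted = X[:, selected]             # reduced feature set fed to the classifiers
```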
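The median, mean, and product fusion rules named in the abstract can also be stated compactly. The sketch below combines per-classifier class posteriors under each rule; the array shapes and toy scores are hypothetical, intended only to show how the rules differ.

```python
# Hedged sketch of the three fusion rules (mean, median, product) applied to
# per-classifier posterior probabilities; shapes and scores are made up.
import numpy as np

def fuse_posteriors(posteriors, rule="mean"):
    """posteriors: (n_classifiers, n_classes) array, each row a probability vector.
    Returns the index of the fused winning class."""
    p = np.asarray(posteriors, dtype=float)
    if rule == "mean":
        fused = p.mean(axis=0)
    elif rule == "median":
        fused = np.median(p, axis=0)
    elif rule == "product":
        fused = p.prod(axis=0)
    else:
        raise ValueError(f"unknown fusion rule: {rule}")
    return int(np.argmax(fused))

# Three hypothetical classifiers scoring six expression classes.
scores = [
    [0.10, 0.05, 0.60, 0.10, 0.10, 0.05],
    [0.20, 0.05, 0.50, 0.10, 0.10, 0.05],
    [0.15, 0.10, 0.40, 0.15, 0.10, 0.10],
]
for rule in ("mean", "median", "product"):
    print(rule, "->", fuse_posteriors(scores, rule))
```

As a general note, the product rule heavily penalizes any classifier that assigns a class near-zero probability, whereas the median rule is more robust to a single outlier classifier.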
