IEEE Conference on Applications of Computer Vision

A Practical Approach to Real-Time Neutral Feature Subtraction for Facial Expression Recognition



Abstract

Methods for automated facial expression recognition - identifying faces as happy, sad, angry, etc. - typically rely on the classification of features extracted from images. These features, designed to encode shape and texture information, depend on both (1) the expression an individual is making, and (2) the individual's physical characteristics and the lighting conditions of the image. To reduce the effect of (2), a common strategy is to establish a "baseline" for an individual and subtract out this individual's baseline neutral feature. This extra neutral feature information is often not available - in particular for in-the-wild, real-time classification of a previously unseen subject. Thus, in order to implement "neutral subtraction," one must estimate the individual's neutral feature. Existing methods for doing this are susceptible to class imbalance at test time (e.g., averaging over all observed facial features), require a more complex model specific to the individual to be trained, or are restricted to features computed entirely from tracked landmark points (taking advantage of a subset of "stable points" which move little as an individual emotes). We extend neutral subtraction to different computer vision feature spaces as a method to correct for inter-face and lighting variance. We further propose a simple, real-time method which is robust to class imbalance and in principle works over a wide class of feature choices. We test this method on feature extraction techniques that lead to high baseline accuracy without neutral subtraction (97% on the Extended Cohn-Kanade Dataset). We find that on difficult classification tasks our method recovers almost 2/3 of the ~8% gain shown by a "cheating" neutral-subtracted feature classifier, which uses examples that have been labeled as neutral, validating with both HOG and SIFT features.
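To make the subtraction step concrete, here is a minimal sketch in Python/NumPy. It uses a running mean over all observed frames as the neutral estimate - this is the naive, class-imbalance-sensitive baseline the abstract critiques, not the paper's proposed robust estimator - and shows how the neutral-subtracted feature would be formed before classification. The class name and interface are illustrative, not from the paper.

```python
import numpy as np

class RunningNeutralEstimator:
    """Illustrative neutral-subtraction sketch (NOT the paper's method).

    Maintains a running mean of every observed feature vector (e.g. a HOG
    or SIFT descriptor of the face) as a stand-in for the subject's
    unknown neutral feature, then subtracts it from incoming features.
    Averaging over all frames is biased toward whichever expressions
    dominate the stream - the class-imbalance weakness the paper targets.
    """

    def __init__(self, dim):
        self.mean = np.zeros(dim)  # current neutral-feature estimate
        self.n = 0                 # number of frames observed so far

    def update(self, feature):
        # Incremental mean update: mean += (x - mean) / n
        self.n += 1
        self.mean += (np.asarray(feature, dtype=float) - self.mean) / self.n

    def subtract(self, feature):
        # Neutral-subtracted feature passed on to the expression classifier.
        return np.asarray(feature, dtype=float) - self.mean
```

In use, each incoming frame's feature both updates the estimate and is classified after subtraction, which is what makes the scheme real-time and applicable to previously unseen subjects.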
