IEEE Transactions on Pattern Analysis and Machine Intelligence

Features versus Context: An Approach for Precise and Detailed Detection and Delineation of Faces and Facial Features

Abstract

The appearance-based approach to face detection has seen great advances in the last several years. In this approach, we learn the image statistics describing the texture pattern (appearance) of the object class we want to detect, e.g., the face. However, this approach has had limited success in providing an accurate and detailed description of the internal facial features, i.e., eyes, brows, nose, and mouth. In general, this is due to the limited information carried by the learned statistical model. While the face template is relatively rich in texture, facial features (e.g., eyes, nose, and mouth) do not carry enough discriminative information to tell them apart from all possible background images. We resolve this problem by adding the context information of each facial feature in the design of the statistical model. In the proposed approach, the context information defines the image statistics most correlated with the surroundings of each facial component. This means that when we search for a face or facial feature, we look for those locations which most resemble the feature yet are most dissimilar to its context. This dissimilarity with the context features forces the detector to gravitate toward an accurate estimate of the position of the facial feature. Learning to discriminate between feature and context templates is difficult, however, because the context and the texture of the facial features vary widely under changing expression, pose, and illumination, and may even resemble one another. We address this problem with the use of subclass divisions. We derive two algorithms to automatically divide the training samples of each facial feature into a set of subclasses, each representing a distinct construction of the same facial component (e.g., closed versus open eyes) or its context (e.g., different hairstyles). The first algorithm is based on a discriminant analysis formulation. The second algorithm is an extension of the AdaBoost approach. 
We provide extensive experimental results using still images and video sequences for a total of 3,930 images. We show that the results are almost as good as those obtained with manual detection.
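The core idea above — score a candidate location by how much it resembles some subclass of the feature while differing from every subclass of its context — can be illustrated with a minimal toy sketch. This is not the authors' algorithm: plain k-means stands in for their discriminant-analysis/AdaBoost subclass division, squared-error similarity stands in for the learned statistical model, and all function names and shapes are illustrative assumptions.

```python
import numpy as np

def subclass_templates(samples, n_subclasses, rng):
    """Split training patches into subclasses and return one mean template
    per subclass.  A simple k-means stand-in for the paper's
    discriminant-analysis / AdaBoost subclass-division algorithms."""
    centers = samples[rng.choice(len(samples), n_subclasses, replace=False)]
    for _ in range(10):
        # assign each sample to its nearest current center
        d = ((samples[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # recompute each center as the mean of its assigned samples
        centers = np.stack([
            samples[labels == k].mean(0) if (labels == k).any() else centers[k]
            for k in range(n_subclasses)
        ])
    return centers

def feature_vs_context_score(patch, feature_templates, context_templates):
    """Score a candidate location: high when the patch resembles some
    feature subclass yet is dissimilar to every context subclass."""
    sim_feature = max(-((patch - t) ** 2).sum() for t in feature_templates)
    sim_context = max(-((patch - t) ** 2).sum() for t in context_templates)
    return sim_feature - sim_context
```

Scanning an image with this score and keeping the maximum mimics how the dissimilarity term pulls the detector away from context regions (e.g., hair around an eye) toward the feature itself.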
