IEEE International Conference on Computer Vision Workshops

Localizing Facial Keypoints with Global Descriptor Search, Neighbour Alignment and Locally Linear Models



Abstract

We present our technique for facial key point localization in the wild, submitted to the 300-W challenge. Our approach begins with a nearest neighbour search using global descriptors. We then employ an alignment of local neighbours and dynamically fit a locally linear model to the global key point configurations of the returned neighbours. Neighbours are also used to define restricted areas of the input image in which we apply local discriminative classifiers. We then employ an energy-function-based minimization approach to combine local classifier predictions with the dynamically estimated joint key point configuration model. Our method is able to place 68 key points on in-the-wild facial imagery with an average localization error of less than 10% of the inter-ocular distance for almost 50% of the challenge test examples. It thereby increased the yield of low-error images over the baseline AAM result provided by the challenge organizers by a factor of 2.2 for the 68 key point challenge, and improves the 51 key point baseline result by a factor of 1.95, yielding key points for more than 50% of the test examples with error of less than 10% of the inter-ocular distance.
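The pipeline the abstract describes — global-descriptor nearest-neighbour search, a locally linear model fit to the neighbours' keypoint configurations, and an energy minimization that reconciles local detections with that shape model — can be sketched as follows. This is a minimal illustration under assumed simplifications (Euclidean descriptor distance, a PCA shape basis, a ridge-regularized quadratic energy); all function names and parameters are hypothetical, not the authors' implementation.

```python
import numpy as np

def nearest_neighbours(query_desc, train_descs, k=5):
    """Global-descriptor search: indices of the k training faces whose
    global descriptors are closest (Euclidean) to the query's."""
    dists = np.linalg.norm(train_descs - query_desc, axis=1)
    return np.argsort(dists)[:k]

def local_linear_shape_model(neigh_shapes, n_modes=3):
    """Dynamically fit a locally linear model to the returned neighbours'
    keypoint configurations: mean shape plus the leading PCA modes.
    neigh_shapes: (k, D) array, each row a flattened keypoint vector."""
    mean = neigh_shapes.mean(axis=0)
    centred = neigh_shapes - mean
    # SVD of the centred neighbour shapes gives the linear basis.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:n_modes]          # mean: (D,), basis: (n_modes, D)

def fit_shape(observed, mean, basis, lam=0.1):
    """Combine local classifier predictions ('observed' candidate keypoint
    locations) with the shape prior by minimizing the quadratic energy
        E(b) = ||observed - (mean + b @ basis)||^2 + lam * ||b||^2,
    i.e. ridge-regularized least squares in the mode coefficients b."""
    residual = observed - mean
    A = basis @ basis.T + lam * np.eye(basis.shape[0])
    b = np.linalg.solve(A, basis @ residual)
    return mean + b @ basis
```

Because the shape basis is rebuilt from each query's own neighbours, the prior adapts to the retrieved pose/expression cluster rather than using one global model; the `lam` term plays the role of the energy function's regularizer, pulling the fit toward the neighbour mean when local detections are noisy.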


