Pattern Recognition Letters

A robust framework for tracking simultaneously rigid and non-rigid face using synthesized data


Abstract

This paper presents a robust framework for simultaneously tracking the rigid pose and non-rigid animation of a single face with a monocular camera. Our proposed method consists of two phases: training and tracking. In the training phase, using automatically detected landmarks and the three-dimensional face model Candide-3, we build a cohort of synthetic face examples covering a large range of the three axial rotations. A face's appearance is represented as a set of local patches around the landmarks, each characterized by a Scale Invariant Feature Transform (SIFT) descriptor. In the tracking phase, we propose an original approach combining geometric and appearance models. The geometric model provides SIFT-based matching between the current frame and an adaptive set of keyframes for rigid parameter estimation. The appearance model uses the nearest synthetic examples in the training set to re-estimate both rigid and non-rigid parameters. We observed tracking capability up to 90 degrees of vertical axial rotation, and our method remains robust in the presence of fast movements, illumination changes, and tracking losses. Numerical results on the rigid and non-rigid parameter sets are reported on several annotated public databases. Compared to other published algorithms, our method provides an excellent compromise between rigid and non-rigid parameter accuracy. The approach shows strong potential, providing good pose estimation (average error less than 4 on the Boston University Face Tracking dataset) and landmark tracking precision (6.3-pixel error versus 6.8 for a state-of-the-art method on the Talking Face video). (C) 2015 Elsevier B.V. All rights reserved.
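The abstract does not give implementation details for the geometric model's SIFT matching step, so the following is only a minimal sketch of the generic descriptor-matching idea it relies on: given SIFT-like descriptor vectors from the current frame and from a keyframe, accept a correspondence only when the best match is clearly better than the second best (Lowe's ratio test). The function name, the `ratio` threshold, and the toy descriptors are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def match_descriptors(query, keyframe, ratio=0.75):
    """Match descriptor vectors between two images with a ratio test.

    query:    (N, D) array of descriptors from the current frame.
    keyframe: (M, D) array of descriptors from a keyframe (M >= 2).
    Returns a list of (query_index, keyframe_index) pairs for matches
    whose best distance is below `ratio` times the second-best distance.
    """
    matches = []
    for i, q in enumerate(query):
        # Euclidean distance from this query descriptor to every keyframe descriptor.
        d = np.linalg.norm(keyframe - q, axis=1)
        order = np.argsort(d)
        best, second = order[0], order[1]
        # Accept only unambiguous matches (Lowe's ratio test).
        if d[best] < ratio * d[second]:
            matches.append((i, int(best)))
    return matches
```

In a tracking pipeline like the one described, such correspondences between frame and keyframe landmarks would then feed a rigid pose estimator; an ambiguous descriptor (nearly equidistant from two keyframe descriptors) is simply discarded rather than risking a wrong match.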
