ACM Transactions on Graphics > Displaced Dynamic Expression Regression for Real-time Facial Tracking and Animation

Displaced Dynamic Expression Regression for Real-time Facial Tracking and Animation



Abstract

We present a fully automatic approach to real-time facial tracking and animation with a single video camera. Our approach does not need any calibration for each individual user. It learns a generic regressor from public image datasets, which can be applied to any user and arbitrary video cameras to infer accurate 2D facial landmarks as well as the 3D facial shape from 2D video frames. The inferred 2D landmarks are then used to adapt the camera matrix and the user identity to better match the facial expressions of the current user. The regression and adaptation are performed in an alternating manner. With more and more facial expressions observed in the video, the whole process converges quickly with accurate facial tracking and animation. In experiments, our approach demonstrates a level of robustness and accuracy on par with state-of-the-art techniques that require a time-consuming calibration step for each individual user, while running at 28 fps on average. We consider our approach to be an attractive solution for wide deployment in consumer-level applications.
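The alternating regression-and-adaptation scheme the abstract describes can be illustrated with a toy sketch (an assumed linear formulation with hypothetical names; the paper's actual DDE regressor and 3D face model are far richer): hold the camera and user identity fixed to estimate per-frame expressions, then hold the expressions fixed to re-fit the identity and a scalar camera parameter by least squares, and repeat as more frames accumulate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the pipeline (an assumed linear model, not the paper's
# actual one): frame t's landmark vector is y_t = s*(u + B @ w_t) + noise,
# where s is a scalar camera scale, u the user identity shape, B a fixed
# generic expression basis, and w_t per-frame expression weights.
L, K, T = 12, 3, 60                      # landmark dims, basis size, frames
B = rng.normal(size=(L, K))              # generic expression basis
s_true, u_true = 1.6, rng.normal(size=L)
frames = [s_true * (u_true + B @ (0.4 * rng.normal(size=K)))
          + 0.01 * rng.normal(size=L) for _ in range(T)]

# Alternate: regress per-frame expressions, then adapt identity and camera.
s, u = 1.0, np.zeros(L)
for _ in range(30):
    # Regression step: expression weights per frame, camera/identity fixed.
    ws = [np.linalg.lstsq(B, y / s - u, rcond=None)[0] for y in frames]
    # Adaptation step 1: identity = mean residual shape across frames.
    u = np.mean([y / s - B @ w for y, w in zip(frames, ws)], axis=0)
    # Adaptation step 2: camera scale by 1-D least squares over all frames.
    shapes = [u + B @ w for w in ws]
    s = sum(y @ x for y, x in zip(frames, shapes)) / sum(x @ x for x in shapes)

# Final per-frame fit with the adapted camera and identity.
ws = [np.linalg.lstsq(B, y / s - u, rcond=None)[0] for y in frames]
rmse = float(np.sqrt(np.mean(
    [np.mean((y - s * (u + B @ w)) ** 2) for y, w in zip(frames, ws)])))
```

Because each block update is the exact least-squares minimizer for its variables, the reconstruction error decreases monotonically, mirroring how the full system converges as more expressions are observed; the real method regresses landmarks directly from video frames and adapts a full camera matrix and identity blendshapes rather than a single scale.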


