International Conference on 3D Vision

Synthetic Prior Design for Real-Time Face Tracking



Abstract

Real-time facial performance capture has recently been gaining popularity in virtual film production, driven by advances in machine learning that allow fast inference of facial geometry from video streams. These learning-based approaches are significantly influenced by the quality and amount of labelled training data. Tedious construction of training sets from real imagery can be replaced by rendering a facial animation rig under the on-set conditions expected at runtime. We learn a synthetic actor-specific prior by adapting a state-of-the-art facial tracking method. Synthetic training significantly reduces the capture and annotation burden and in theory allows generation of an arbitrary amount of data, but practical realities such as training time and compute resources still limit the size of any training set. We construct better and smaller training sets by investigating which facial image appearances are crucial for tracking accuracy, covering the dimensions of expression, viewpoint and illumination. A reduction of training data by one to two orders of magnitude is demonstrated whilst tracking accuracy is retained on challenging on-set footage.
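The abstract gives no implementation details, but the core idea of spending a limited rendering budget across the expression, viewpoint and illumination dimensions can be sketched as below. This is a hypothetical illustration only: render_rig, the expression names, yaw angles and lighting presets are assumed placeholders, not the authors' rig or pipeline, and the random subsampling stands in for the paper's more careful selection of which appearances matter for tracking accuracy.

    # Hypothetical sketch: sample a synthetic training set over the
    # expression / viewpoint / illumination dimensions named in the abstract.
    # All names here (render_rig, EXPRESSIONS, ...) are illustrative assumptions.
    import itertools
    import random

    # Coarse grids over the three appearance dimensions.
    EXPRESSIONS = ["neutral", "smile", "jaw_open", "brow_raise", "pucker"]
    YAW_ANGLES = [-40, -20, 0, 20, 40]            # head viewpoint, degrees
    ILLUMINATIONS = ["key_left", "key_right", "ambient", "hard_top"]

    def render_rig(expression: str, yaw: float, lighting: str) -> dict:
        """Placeholder for rendering the actor-specific animation rig.
        A real pipeline would return an image plus ground-truth facial
        geometry; here we only record the label combination."""
        return {"expression": expression, "yaw": yaw, "lighting": lighting}

    def build_training_set(budget: int, seed: int = 0) -> list[dict]:
        """Subsample the full appearance grid to a fixed rendering budget,
        mimicking a one-to-two order-of-magnitude reduction in training data."""
        full_grid = list(itertools.product(EXPRESSIONS, YAW_ANGLES, ILLUMINATIONS))
        random.Random(seed).shuffle(full_grid)
        return [render_rig(e, y, l) for e, y, l in full_grid[:budget]]

    if __name__ == "__main__":
        samples = build_training_set(budget=20)
        total = len(EXPRESSIONS) * len(YAW_ANGLES) * len(ILLUMINATIONS)
        print(f"rendered {len(samples)} of {total} grid points")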
