IEEE Transactions on Pattern Analysis and Machine Intelligence

Visibility Constrained Generative Model for Depth-Based 3D Facial Pose Tracking

Abstract

In this paper, we propose a generative framework that unifies depth-based 3D facial pose tracking and on-the-fly face model adaptation in unconstrained scenarios with heavy occlusions and arbitrary facial expression variations. Specifically, we introduce a statistical 3D morphable model that flexibly describes the distribution of points on the surface of the face model, with an efficient switchable online adaptation that gradually captures the identity of the tracked subject and rapidly constructs a suitable face model when the subject changes. Moreover, unlike prior art that employed ICP-based facial pose estimation, to improve robustness to occlusions we propose a ray visibility constraint that regularizes the pose based on the face model's visibility with respect to the input point cloud. Ablation studies and experimental results on the Biwi and ICT-3DHP datasets demonstrate that the proposed framework is effective and outperforms competing state-of-the-art depth-based methods.
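The ray visibility idea can be illustrated with a minimal sketch (our own assumption of how such a constraint may be evaluated, not the paper's implementation): a model point is treated as visible when the observed depth map contains no closer measurement along that point's camera ray. The function name `visible_model_points`, the pinhole intrinsics `fx, fy, cx, cy`, and the tolerance `tol` are hypothetical, illustrative choices.

```python
import numpy as np

def visible_model_points(model_pts, depth_map, fx, fy, cx, cy, tol=0.01):
    """Boolean mask of model points not occluded by the observed depth map.

    model_pts: (N, 3) points in the camera frame (metres), z > 0 in front.
    depth_map: (H, W) observed depth image (metres); 0 = no measurement.
    """
    H, W = depth_map.shape
    x, y, z = model_pts[:, 0], model_pts[:, 1], model_pts[:, 2]
    # Pinhole projection of each model point onto the depth image.
    u = np.round(fx * x / z + cx).astype(int)
    v = np.round(fy * y / z + cy).astype(int)
    mask = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    vis = np.zeros(len(model_pts), dtype=bool)
    d = depth_map[v[mask], u[mask]]
    # Visible if the sensor saw nothing closer along the same ray
    # (or saw nothing at all at that pixel).
    vis[mask] = (d == 0) | (z[mask] <= d + tol)
    return vis
```

A pose estimator could then down-weight or penalize model points flagged as occluded, rather than letting them attract the model surface toward occluder geometry as plain ICP would.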
