Multimedia Tools and Applications

From 2D to 3D real-time expression transfer for facial animation

Abstract

In this paper, we present a three-stage approach that creates realistic facial animations by tracking the expressions of a human face in 2D and transferring them to a human-like 3D model in real time. Our calibration-free method, which is based on an average human face, does not require training. Tracking is performed with a single camera to enable several practical applications, for example on tablets and mobile devices, and the expressions are transferred with a joint-based system to improve the quality and persuasiveness of the animations. In the first stage of the method, a joint-based facial rig providing mobility to pseudo-muscles is attached to the 3D model. The second stage covers tracking the 2D positions of facial landmarks from a single camera view and transferring the 3D relative movement data to move the respective joints on the model. The last stage records the animation using a partially automated key-framing technique. Experiments on the extended Cohn-Kanade dataset using peak frames of frontal-view videos have shown that the presented method produces visually satisfying facial animations.
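
To make the second stage more concrete, the sketch below shows one way relative 2D landmark movement, measured against a neutral baseline standing in for the average-face model, could be turned into per-joint offsets on a rig. This is a minimal illustration under stated assumptions, not the authors' implementation: the landmark indices, joint names, normalization, and handling of the unobserved depth component are all placeholders chosen for the example.

```python
# Illustrative sketch of relative-movement transfer from tracked 2D landmarks
# to joints on a facial rig. All names, indices, and the mapping below are
# assumptions for demonstration purposes only.

import numpy as np

# Hypothetical mapping from landmark indices (68-point-style layout assumed)
# to joint names on the rig.
LANDMARK_TO_JOINT = {
    48: "mouth_corner_L",
    54: "mouth_corner_R",
    19: "brow_L",
    24: "brow_R",
}

def expression_offsets(neutral_2d: np.ndarray,
                       current_2d: np.ndarray,
                       depth: float = 0.0) -> dict:
    """Convert 2D landmark displacements into per-joint 3D offsets.

    neutral_2d, current_2d: (N, 2) arrays of landmark positions in pixels.
    The neutral frame stands in for the average-face baseline that replaces
    per-user calibration in the described approach.
    """
    # Normalize by inter-ocular distance so the transfer does not depend on
    # the subject's distance to the camera (outer eye corners assumed at
    # indices 36 and 45).
    iod = np.linalg.norm(neutral_2d[45] - neutral_2d[36])
    disp = (current_2d - neutral_2d) / iod

    offsets = {}
    for idx, joint in LANDMARK_TO_JOINT.items():
        dx, dy = disp[idx]
        # Image y grows downward; flip it so positive means "up" on the rig.
        # Depth is not observable from a single camera view, so it stays at
        # a constant unless the rig derives it from other cues.
        offsets[joint] = np.array([dx, -dy, depth])
    return offsets

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    neutral = rng.uniform(0, 640, size=(68, 2))
    smiling = neutral.copy()
    smiling[48] += [-4.0, -6.0]   # mouth corners move outward and up
    smiling[54] += [4.0, -6.0]
    print(expression_offsets(neutral, smiling))
```

In practice the resulting offsets would drive the joints attached to the pseudo-muscles of the rig each frame, with the partially automated key-framing stage recording the animation.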