Video Based Reconstruction of 3D People Models

Abstract

This paper describes a method to obtain accurate 3D body models and texture of arbitrary people from a single, monocular video in which a person is moving. Based on a parametric body model, we present a robust processing pipeline to infer 3D model shapes including clothed people with 4.5mm reconstruction accuracy. At the core of our approach is the transformation of dynamic body pose into a canonical frame of reference. Our main contribution is a method to transform the silhouette cones corresponding to dynamic human silhouettes to obtain a visual hull in a common reference frame. This enables efficient estimation of a consensus 3D shape, texture and implanted animation skeleton based on a large number of frames. Results on 4 different datasets demonstrate the effectiveness of our approach to produce accurate 3D models. Requiring only an RGB camera, our method enables everyone to create their own fully animatable digital double, e.g., for social VR applications or virtual try-on for online fashion shopping.
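The abstract's core idea is to transform per-frame silhouette cones into a common canonical frame and intersect them into a visual hull. As an illustrative sketch only (the camera setup, voxel grid, and `carve_visual_hull` helper below are hypothetical, not the paper's implementation), visual-hull carving from silhouettes already expressed in one reference frame can look like this:

```python
# Sketch of visual-hull carving: a voxel survives only if it projects inside
# every silhouette mask. Assumes silhouettes and projection matrices have
# already been mapped to the shared canonical frame.
import numpy as np

def carve_visual_hull(silhouettes, projections, grid):
    """Return a boolean occupancy mask over voxel centers.

    silhouettes: list of (H, W) boolean foreground masks
    projections: list of 3x4 camera projection matrices, one per mask
    grid: (N, 3) array of voxel centers in the canonical frame
    """
    occupied = np.ones(len(grid), dtype=bool)
    homog = np.hstack([grid, np.ones((len(grid), 1))])      # (N, 4) homogeneous coords
    for mask, P in zip(silhouettes, projections):
        uvw = homog @ P.T                                   # project voxels to the image
        uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int) # perspective divide
        h, w = mask.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        hit = np.zeros(len(grid), dtype=bool)
        hit[inside] = mask[uv[inside, 1], uv[inside, 0]]
        occupied &= hit                                     # carve voxels outside this cone
    return occupied
```

Intersecting many such cones over a long monocular sequence is what makes the consensus shape estimate robust: each frame only carves away space, so errors in a single silhouette cannot add spurious geometry.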
