
3D real human reconstruction via multiple low-cost depth cameras



Abstract

In traditional human-centered games and virtual reality applications, a skeleton is commonly tracked using consumer-level cameras or professional motion-capture devices to animate an avatar. In this paper, we propose a novel application that automatically reconstructs a real moving human, captured by multiple RGB-D cameras, as a 3D polygonal mesh, which may help users actually enter a virtual world or even a collaborative immersive environment. Compared with 3D point clouds, 3D polygonal meshes are the representation commonly adopted for objects and characters in games and virtual reality applications, and a vivid 3D human mesh can greatly enhance the sense of immersion when interacting with a computer. The proposed method comprises three key steps for dynamic 3D human reconstruction from RGB images and noisy depth data captured at a distance. First, we remove the static background to obtain a 3D partial view of the human from each camera's depth data using the calibration parameters, and register pairs of neighboring partial views. Next, the whole 3D human is globally registered from all partial views to obtain a relatively clean 3D human point cloud. A complete 3D mesh model is then constructed from the point cloud using Delaunay triangulation and Poisson surface reconstruction. Finally, a series of experiments demonstrates the reconstruction quality of the reconstructed 3D human meshes. Dynamic meshes with different poses are placed in a virtual environment, which can be used to provide personalized avatars for everyday users and to enhance the interactive experience in games and virtual reality environments.
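
To make the registration-and-meshing part of the pipeline concrete, the following is a minimal illustrative sketch built on the open-source Open3D library. It is not the authors' implementation: the input file names, the number of views, the ICP correspondence threshold, and the Poisson octree depth are all placeholder assumptions, and the simple view-by-view alignment below stands in for the paper's pairwise plus global registration; the Delaunay triangulation step is not shown.

```python
# Illustrative sketch of multi-view point-cloud fusion and Poisson meshing
# with Open3D. NOT the paper's code: file names, thresholds, and the Poisson
# octree depth are placeholder assumptions.
import numpy as np
import open3d as o3d

def load_partial_view(path):
    """Load one background-removed partial view (hypothetical .ply file)."""
    pcd = o3d.io.read_point_cloud(path)
    # Normals are needed for point-to-plane ICP and Poisson reconstruction.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
    return pcd

def pairwise_register(source, target, threshold=0.02):
    """Estimate the rigid transform aligning one partial view to its neighbor."""
    result = o3d.pipelines.registration.registration_icp(
        source, target, threshold, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation

# Hypothetical background-removed partial views from the calibrated RGB-D cameras.
views = [load_partial_view(f"view_{i}.ply") for i in range(4)]

# Align each view into a common frame and fuse the points
# (a stand-in for the global registration step described above).
merged = views[0]
for view in views[1:]:
    T = pairwise_register(view, merged)
    view.transform(T)
    merged += view

# Reconstruct a complete mesh from the fused human point cloud.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    merged, depth=9)
o3d.io.write_triangle_mesh("human_mesh.ply", mesh)
```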
