IEEE Conference on Computer Vision and Pattern Recognition

Reconstructing Non-stationary Articulated Objects in Monocular Video using Silhouette Information


Abstract

This paper presents an approach to reconstruct non-stationary, articulated objects from silhouettes obtained with a monocular video sequence. We introduce the concept of motion-blurred scene occupancies, a direct analogy of motion-blurred images but in a 3D object scene occupancy space, resulting from the motion/deformation of the object. Our approach starts with an image-based fusion step that combines color and silhouette information from multiple views. To this end we propose a novel construct: the temporal occupancy point (TOP), which is the estimated 3D scene location of a silhouette pixel and contains information about the duration of time it is occupied. Instead of explicitly computing the TOP in 3D space, we directly obtain its imaged (projected) locations in each view. This enables us to handle monocular video and arbitrary camera motion in scenarios where complete camera calibration information may not be available. The result is a set of blurred scene occupancy images in the corresponding views, where the value at each pixel corresponds to the fraction of the total time duration during which the pixel observed an occupied scene location. We then use a motion de-blurring approach to de-blur the occupancy images. The de-blurred occupancy images correspond to silhouettes of the mean/motion-compensated object shape and are used to obtain a visual hull reconstruction of the object. We show promising results on challenging monocular datasets of deforming objects where traditional visual hull intersection approaches fail to reconstruct the object correctly.
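The blurred occupancy image described in the abstract can be illustrated with a minimal sketch: given a stack of binary silhouette masks over time, each output pixel holds the fraction of frames in which it was inside the silhouette. This is a hypothetical simplification for intuition only, assuming the frames are already registered in a common view and ignoring the paper's TOP construct and camera motion handling.

```python
import numpy as np

def blurred_occupancy(silhouettes):
    """Time-average a stack of binary silhouette masks.

    Each pixel of the result is the fraction of frames in which that
    pixel observed an occupied scene location -- a 2D analogue of the
    paper's motion-blurred scene occupancy. Simplifying assumption:
    all frames are pre-registered, so no per-view projection is needed.
    """
    stack = np.asarray(silhouettes, dtype=float)  # shape (T, H, W)
    return stack.mean(axis=0)

# Toy example: a 1-pixel bar sliding right across a 1x4 image over 4 frames.
frames = [
    [[1, 0, 0, 0]],
    [[0, 1, 0, 0]],
    [[0, 0, 1, 0]],
    [[0, 0, 0, 1]],
]
occ = blurred_occupancy(frames)
# Each column was occupied in exactly 1 of the 4 frames, so occ is 0.25 everywhere.
```

De-blurring this image (the paper's next step) would recover a sharp silhouette of the mean object shape; that step is not sketched here.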
