International Journal of Computer Vision

Towards Unrestrained Depth Inference with Coherent Occlusion Filling



Abstract

Traditional depth estimation methods typically exploit the effect of either the variations in internal parameters such as aperture and focus (as in depth from defocus), or variations in extrinsic parameters such as position and orientation of the camera (as in stereo). When operating off-the-shelf (OTS) cameras in a general setting, these parameters influence the depth of field (DOF) and field of view (FOV). While DOF mandates one to deal with defocus blur, a larger FOV necessitates camera motion during image acquisition. As a result, for unfettered operation of an OTS camera, it becomes inevitable to account for pixel motion as well as optical defocus blur in the captured images. We propose a depth estimation framework using calibrated images captured under general camera motion and lens parameter variations. Our formulation seeks to generalize the constrained areas of stereo and shape from defocus (SFD)/focus (SFF) by handling, in tandem, various effects such as focus variation, zoom, parallax and stereo occlusions, all under one roof. One of the associated challenges in such an unrestrained scenario is the problem of removing user-defined foreground occluders in the reference depth map and image (termed inpainting of depth and image). Inpainting is achieved by exploiting the cue from motion parallax to discover (in other images) the correspondence/color information missing in the reference image. Moreover, considering the fact that the observations could be differently blurred, it is important to ensure that the degree of defocus in the missing regions (in the reference image) is coherent with the local neighbours (defocus inpainting).
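The parallax-driven occlusion filling described in the abstract can be sketched as follows. This is a minimal illustrative implementation under simplifying assumptions (rectified views, a known per-pixel horizontal disparity), not the paper's actual formulation; the function and variable names are hypothetical.

```python
import numpy as np

def inpaint_from_parallax(ref, mask, other, disparity):
    """Fill masked (occluded) pixels of the reference view `ref` by
    copying intensities from a second view `other`, shifted by the
    per-pixel horizontal disparity implied by motion parallax.

    ref, other : (H, W) float arrays (intensity images)
    mask       : (H, W) bool array, True where ref is occluded
    disparity  : (H, W) int array, horizontal shift into `other`
    """
    out = ref.copy()
    h, w = ref.shape
    ys, xs = np.nonzero(mask)                 # occluded pixel coordinates
    # Source columns in the other view, clamped to the image border.
    src_x = np.clip(xs + disparity[ys, xs], 0, w - 1)
    out[ys, xs] = other[ys, src_x]            # transfer colour/intensity
    return out
```

In the paper's unrestrained setting the transferred patches may also carry a different amount of defocus blur than the reference neighbourhood, so a subsequent defocus-inpainting step would re-blur (or deblur) the filled region to match the local blur level.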
