IEEE Transactions on Image Processing

Fast and Accurate Depth Estimation From Sparse Light Fields



Abstract

We present a fast and accurate method for dense depth reconstruction, specifically tailored to process sparse, wide-baseline light field data captured with camera arrays. In our method, the source images are over-segmented into non-overlapping compact superpixels. We model superpixels as planar patches in image space and use them as the basic primitives for depth estimation. This superpixel-based representation yields the desired reduction in both memory and computation requirements while preserving image geometry with respect to object contours. The initial depth maps, obtained independently for each view by plane sweeping, are jointly refined via iterative belief-propagation-like optimization in the superpixel domain. During the optimization, smoothness between neighboring superpixels and geometric consistency between views are enforced. To ensure rapid information propagation into textureless and occluded regions, candidates from larger neighborhoods are sampled in addition to the immediate superpixel neighbors. Additionally, to make full use of parallel graphics hardware, a synchronous message-update schedule is employed, allowing all the superpixels of all the images to be processed at once. This way, the distribution of the scene geometry becomes distinctive after only the first few iterations, facilitating stability and fast convergence of the refinement procedure. We demonstrate that a few refinement iterations yield globally consistent dense depth maps even in the presence of wide textureless regions and occlusions. The experiments show that while the depth reconstruction takes about a second per full high-definition view, the accuracy of the obtained depth maps is comparable with state-of-the-art results that otherwise require much longer processing times.
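The per-view initialization described above (plane sweeping with superpixels as primitives) can be illustrated with a short sketch. The following is a minimal Python version, assuming a rectified camera pair, for which sweeping fronto-parallel depth planes reduces to testing integer disparity hypotheses. The grid "superpixels", image sizes, and mean-absolute-difference cost are toy assumptions for illustration, not the paper's actual implementation.

```python
# Minimal, illustrative plane-sweep initialization (not the authors' code).
# Assumption: a rectified pair, so each depth plane maps to one disparity.
import numpy as np

rng = np.random.default_rng(1)

H, W = 60, 80
ref = rng.random((H, W))           # reference view (toy grayscale image)
disparities = np.arange(1, 10)     # depth hypotheses as integer disparities

# Synthesize a second view by shifting ref with a known disparity of 5,
# so the sweep has a recoverable ground truth.
true_d = 5
other = np.roll(ref, -true_d, axis=1)

# Toy "superpixels": a regular grid of 10x10 blocks (real superpixels would
# come from an over-segmentation such as SLIC).
sp = (np.arange(H)[:, None] // 10) * (W // 10) + (np.arange(W)[None, :] // 10)
n_sp = int(sp.max()) + 1

costs = np.zeros((n_sp, len(disparities)))
for di, d in enumerate(disparities):
    shifted = np.roll(other, d, axis=1)   # undo a hypothesized disparity d
    diff = np.abs(ref - shifted)
    diff[:, :d] = np.nan                  # wrap-around columns are invalid
    for s in range(n_sp):                 # aggregate pixel costs per superpixel
        costs[s, di] = np.nanmean(diff[sp == s])

init_labels = costs.argmin(axis=1)        # initial depth label per superpixel
print("estimated disparities:", disparities[init_labels][:8])  # prints 5s
```

Aggregating the matching cost over each superpixel rather than per pixel is what gives the memory and computation savings the abstract mentions: the label space stays the same, but the number of primitives drops by orders of magnitude.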
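The refinement stage combines min-sum belief propagation over a superpixel graph, sampled long-range edges for fast propagation across textureless regions, and a synchronous message-update schedule. The sketch below is a minimal single-view CPU rendition of that combination, with an assumed truncated-linear smoothness term and toy sizes; the paper's optimizer additionally enforces cross-view geometric consistency and runs on graphics hardware.

```python
# Minimal, illustrative synchronous min-sum BP over a superpixel graph
# (not the authors' implementation); all sizes and weights are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_SP = 200      # superpixels in one view (toy size)
N_LABELS = 32   # discretized depth hypotheses
LAMBDA = 0.5    # smoothness weight (assumed)
TRUNC = 8       # truncation of the pairwise penalty (assumed)

# Unary data terms: per-superpixel plane-sweep matching costs (random here).
unary = rng.random((N_SP, N_LABELS))

# Graph: immediate neighbors (a ring stands in for spatial adjacency)
# plus a few sampled long-range edges per superpixel.
edges = set()
for i in range(N_SP):
    for d in (1, 2):                          # immediate neighbors
        edges.add((i, (i + d) % N_SP))
        edges.add(((i + d) % N_SP, i))
    for j in rng.integers(0, N_SP, size=2):   # long-range samples
        if int(j) != i:
            edges.add((i, int(j)))
            edges.add((int(j), i))

in_edges = {i: [k for (k, t) in edges if t == i] for i in range(N_SP)}

# Truncated-linear pairwise term on depth-label differences.
lab = np.arange(N_LABELS)
pairwise = LAMBDA * np.minimum(np.abs(lab[:, None] - lab[None, :]), TRUNC)

messages = {e: np.zeros(N_LABELS) for e in edges}

for _ in range(5):  # a handful of iterations, per the abstract's claim
    beliefs = {i: unary[i] + sum(messages[(k, i)] for k in in_edges[i])
               for i in range(N_SP)}
    new_messages = {}
    for (i, j) in edges:
        base = beliefs[i] - messages[(j, i)]         # exclude recipient's msg
        m = (base[:, None] + pairwise).min(axis=0)   # min-sum update
        new_messages[(i, j)] = m - m.min()           # normalize for stability
    messages = new_messages                          # synchronous swap

final = np.stack([unary[i] + sum(messages[(k, i)] for k in in_edges[i])
                  for i in range(N_SP)])
depth_labels = final.argmin(axis=1)  # refined depth label per superpixel
print(depth_labels[:10])
```

Because each new message depends only on the previous iteration's messages, the synchronous schedule lets every edge be updated independently, which is what makes it possible to process all the superpixels of all the views at once on parallel hardware.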


