International Journal of Advanced Robotic Systems

Fast depth extraction from a single image:


Abstract

Predicting depth from a single image is an important problem for understanding the 3-D geometry of a scene. Recently, nonparametric depth sampling (DepthTransfer) has shown great potential in solving this problem; its two key components are a Scale Invariant Feature Transform (SIFT) flow-based depth warping between the input image and its retrieved similar images, and a pixel-wise depth fusion from all warped depth maps. In addition to the inherently heavy computational load of the SIFT flow computation, even under a coarse-to-fine scheme, the fusion reliability is also low due to the low discriminativeness of the pixel-wise descriptors. This article aims at solving these two problems. First, a novel sparse SIFT flow algorithm is proposed to reduce the complexity from subquadratic to sublinear. Then, a reweighting technique is introduced in which the variance of the SIFT flow descriptor is computed at every pixel and used to reweight the data term in the conditional Markov random field. Our proposed depth transfer method is tested on the Make3D Range Image Data and the NYU Depth Dataset V2. It is shown that, with comparable depth estimation accuracy, our method is 2-3 times faster than DepthTransfer.
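The variance-based reweighting of the data term can be illustrated with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes the per-pixel weight decays exponentially with the variance of the dense SIFT descriptor, that the data term penalizes deviation from a fused target built from the warped depth candidates, and that the smoothness term is a simple first-difference penalty; the names `data_term_weights`, `mrf_energy` and the parameters `lam`, `mu` are hypothetical.

```python
import numpy as np

def data_term_weights(dense_sift, lam=1.0):
    """Per-pixel data-term weights from dense SIFT descriptors.

    dense_sift: H x W x D array of SIFT descriptors for the input image.
    Assumption: pixels whose descriptor channels vary strongly are treated
    as unreliable matches and receive a smaller weight.
    """
    desc_var = dense_sift.var(axis=2)          # H x W descriptor variance
    return np.exp(-lam * desc_var)             # weights in (0, 1]

def mrf_energy(depth, warped_depths, weights, mu=0.5):
    """Illustrative MRF energy with a variance-reweighted data term.

    depth        : H x W candidate depth map being evaluated.
    warped_depths: list of H x W depth maps warped from retrieved images.
    weights      : H x W data-term weights from data_term_weights().
    """
    stack = np.stack(warped_depths, axis=0)    # K x H x W
    target = np.median(stack, axis=0)          # robust per-pixel fusion target

    # Data term: reweighted agreement with the warped depth candidates.
    data = (weights * np.abs(depth - target)).sum()

    # Smoothness term: penalize depth differences between neighboring pixels.
    smooth = (np.abs(np.diff(depth, axis=0)).sum()
              + np.abs(np.diff(depth, axis=1)).sum())
    return data + mu * smooth
```

Under these assumptions, pixels with unstable descriptors contribute little to the data term, so the smoothness prior dominates the estimate there, which is one plausible reading of how the reweighting improves fusion reliability.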


