Robotics and Automation (ICRA), 2012 IEEE International Conference on

High-resolution depth maps based on TOF-stereo fusion


Abstract

The combination of range sensors with color cameras can be very useful for robot navigation, semantic perception, manipulation, and telepresence. Several methods of combining range and color data have been investigated and successfully used in various robotic applications. Most of these systems suffer from noise in the range data and from a resolution mismatch between the range sensor and the color cameras, since the resolution of current range sensors is much lower than that of color cameras. High-resolution depth maps can be obtained using stereo matching, but this often fails to construct accurate depth maps of weakly or repetitively textured scenes, or of scenes that exhibit complex self-occlusions. Range sensors provide coarse depth information regardless of the presence or absence of texture. The use of a calibrated system composed of a time-of-flight (TOF) camera and a stereoscopic camera pair allows data fusion, thus overcoming the weaknesses of both individual sensors. We propose a novel TOF-stereo fusion method based on an efficient seed-growing algorithm which uses the TOF data projected onto the stereo image pair as an initial set of correspondences. These initial “seeds” are then propagated based on a Bayesian model which combines an image-similarity score with rough depth priors computed from the low-resolution range data. The overall result is a dense and accurate depth map at the resolution of the color cameras at hand. We show that the proposed algorithm outperforms 2D image-based stereo algorithms and that the results are of higher resolution than those of off-the-shelf color-range sensors, e.g., the Kinect. Moreover, the algorithm potentially exhibits real-time performance on a single CPU.
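The abstract's core idea can be illustrated with a toy sketch: sparse disparities (as a TOF camera projected onto the stereo pair would provide) are used as seeds, and a best-first propagation grows them to neighboring pixels, scoring each candidate disparity by a window-based image-similarity term plus a Gaussian prior around the seed's depth. This is only a minimal illustration of the seed-growing principle, not the paper's actual algorithm; the SAD similarity measure, the prior width `prior_sigma`, the acceptance threshold `tau`, and the ±1 disparity search are all simplifying assumptions.

```python
import heapq
import numpy as np

def sad_score(left, right, x, y, d, w=2):
    """Similarity of left pixel (x, y) with right pixel (x - d, y):
    negative sum of absolute differences over a (2w+1)^2 window."""
    h, wd = left.shape
    if not (w <= y < h - w and w <= x < wd - w and w <= x - d < wd - w):
        return -np.inf  # window falls outside the images
    pl = left[y - w:y + w + 1, x - w:x + w + 1]
    pr = right[y - w:y + w + 1, x - d - w:x - d + w + 1]
    return -float(np.abs(pl - pr).sum())

def grow_disparity(left, right, seeds, prior_sigma=2.0, tau=-40.0):
    """Grow sparse (x, y, d) disparity seeds into a dense disparity map.

    A best-first queue pops the currently most reliable candidate, fixes
    its disparity, and proposes its 4-neighbors with disparities within
    +/-1 of the parent; each proposal is scored by image similarity plus
    a Gaussian log-prior keeping it near the originating seed's depth."""
    h, w = left.shape
    disp = np.full((h, w), -1, dtype=int)  # -1 marks "not yet matched"
    heap = []
    for x, y, d in seeds:
        s = sad_score(left, right, x, y, d)
        heapq.heappush(heap, (-s, x, y, d, d))  # (cost, x, y, d, seed_d)
    while heap:
        _, x, y, d, d0 = heapq.heappop(heap)
        if disp[y, x] != -1:
            continue  # already matched by a better candidate
        disp[y, x] = d
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nx < w and 0 <= ny < h) or disp[ny, nx] != -1:
                continue
            best, best_d = -np.inf, None
            for cd in (d - 1, d, d + 1):  # search near the parent disparity
                s = sad_score(left, right, nx, ny, cd)
                s += -((cd - d0) ** 2) / (2 * prior_sigma ** 2)  # depth prior
                if s > best:
                    best, best_d = s, cd
            if best > tau:  # reject weak matches instead of forcing growth
                heapq.heappush(heap, (-best, nx, ny, best_d, d0))
    return disp
```

Because the queue is ordered by score, reliable regions are filled first and ambiguous or occluded pixels are either reached later with a strong prior or rejected by the threshold, which is what lets a handful of coarse TOF seeds densify into a map at the color cameras' resolution.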
