IEEE/RSJ International Conference on Intelligent Robots and Systems

Online depth calibration for RGB-D cameras using visual SLAM



Abstract

Modern consumer RGB-D cameras are affordable and provide dense depth estimates at high frame rates. Hence, they are popular for building dense environment representations. Yet, the sensors often do not provide accurate depth estimates, since the factory calibration exhibits a static deformation. We present a novel approach to online depth calibration that uses a visual SLAM system as a reference for the measured depth. A sparse map is generated, the visual information is used to correct the static deformation of the measured depth, and missing data is extrapolated using a small number of thin plate splines (TPS). The corrected depth can then be used to improve the accuracy of the sparse RGB-D map and the 3D environment reconstruction. As more data becomes available, the depth calibration is updated on the fly. Our method relies neither on planar geometry such as walls nor on a one-to-one pixel correspondence between the color and depth cameras. We evaluate our approach in real-world scenarios and against ground-truth data, compare it against two popular self-calibration methods, and show a clear visual improvement on aggregated point clouds.
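The abstract's core idea of correcting a static depth deformation from sparse reference observations and extrapolating with thin plate splines can be illustrated with a small sketch. The paper's actual formulation and parameters are not given here; this uses SciPy's `RBFInterpolator` with a thin-plate-spline kernel, and the simulated deformation, sample counts, and all variable names are illustrative assumptions only.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Sparse calibration samples: pixel locations where a SLAM landmark
# supplied a reference depth, and the multiplicative correction factor
# (reference depth / measured depth) observed at each of them.
rng = np.random.default_rng(0)
pixels = rng.uniform(0, [640, 480], size=(200, 2))

# Simulated static deformation: depth off by up to ~5% across the image.
def true_factor(p):
    return 1.0 + 0.05 * np.sin(p[:, 0] / 640 * np.pi) * (p[:, 1] / 480)

factors = true_factor(pixels)

# Thin plate spline over the image plane; a small smoothing term keeps
# the surface from chasing noise in individual SLAM observations.
tps = RBFInterpolator(pixels, factors,
                      kernel='thin_plate_spline', smoothing=1e-6)

# Correct a (subsampled) dense depth map by evaluating the spline
# everywhere, i.e. extrapolating the sparse samples to all pixels.
u, v = np.meshgrid(np.arange(0, 640, 8), np.arange(0, 480, 8))
grid = np.column_stack([u.ravel(), v.ravel()]).astype(float)
correction = tps(grid)
measured_depth = np.full(grid.shape[0], 2.0)   # metres, flat for the demo
corrected_depth = measured_depth * correction
```

In an online setting, the spline would be refit as new SLAM landmarks accumulate, which matches the abstract's "updated on the fly" behavior; the fixed sample set above stands in for one such refit.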

