IEEE Conference on Computer Vision and Pattern Recognition

Fusion of Time-of-Flight Depth and Stereo for High Accuracy Depth Maps


Abstract

Time-of-flight range sensors have error characteristics which are complementary to passive stereo. They provide real time depth estimates in conditions where passive stereo does not work well, such as on white walls. In contrast, these sensors are noisy and often perform poorly on the textured scenes for which stereo excels. We introduce a method for combining the results from both methods that performs better than either alone. A depth probability distribution function from each method is calculated and then merged. In addition, stereo methods have long used global methods such as belief propagation and graph cuts to improve results, and we apply these methods to this sensor. Since time-of-flight devices have primarily been used as individual sensors, they are typically poorly calibrated. We introduce a method that substantially improves upon the manufacturer's calibration. We show that these techniques lead to improved accuracy and robustness.
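The abstract's core idea is to compute a per-pixel depth probability distribution from each sensor and then merge them before any global optimization. Below is a minimal single-pixel sketch of that fusion step, not the authors' implementation: the depth sampling range, the Gaussian noise model for the time-of-flight measurement, the cost-to-probability mapping for stereo, and all function names are illustrative assumptions. In the full method, the merged distributions would additionally be regularized with a global method such as belief propagation or graph cuts.

import numpy as np

# Candidate depth hypotheses (metres); range and resolution are assumptions.
DEPTH_SAMPLES = np.linspace(0.5, 5.0, 200)

def tof_depth_pdf(tof_depth, tof_sigma=0.05):
    # Gaussian likelihood around the ToF measurement (assumed noise model).
    pdf = np.exp(-0.5 * ((DEPTH_SAMPLES - tof_depth) / tof_sigma) ** 2)
    return pdf / pdf.sum()

def stereo_depth_pdf(matching_costs):
    # Convert per-depth stereo matching costs into a probability distribution:
    # lower cost -> higher probability.
    pdf = np.exp(-matching_costs)
    return pdf / pdf.sum()

def fuse_pdfs(p_tof, p_stereo):
    # Merge the two distributions with a per-pixel product, then renormalize.
    fused = p_tof * p_stereo
    return fused / fused.sum()

# Example for one pixel: ToF reports 2.0 m; stereo costs favour roughly 2.1 m.
costs = np.abs(DEPTH_SAMPLES - 2.1) * 5.0
fused = fuse_pdfs(tof_depth_pdf(2.0), stereo_depth_pdf(costs))
print("fused depth estimate: %.3f m" % DEPTH_SAMPLES[np.argmax(fused)])

In this toy example the fused maximum lies between the two single-sensor peaks, which is the behaviour the abstract relies on: each sensor's distribution constrains the other in the regions where it is unreliable.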
