International Conference on Robotics and Automation

Anytime Stereo Image Depth Estimation on Mobile Devices



Abstract

Many applications of stereo depth estimation in robotics require the generation of accurate disparity maps in real time under significant computational constraints. Current state-of-the-art algorithms force a choice between either generating accurate mappings at a slow pace, or quickly generating inaccurate ones, and additionally these methods typically require far too many parameters to be usable on power- or memory-constrained devices. Motivated by these shortcomings, we propose a novel approach for disparity prediction in the anytime setting. In contrast to prior work, our end-to-end learned approach can trade off computation and accuracy at inference time. Depth estimation is performed in stages, during which the model can be queried at any time to output its current best estimate. Our final model can process 1242×375 resolution images within a range of 10-35 FPS on an NVIDIA Jetson TX2 module with only marginal increases in error - using two orders of magnitude fewer parameters than the most competitive baseline. The source code is available at https://github.com/mileyan/AnyNet.
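The core idea of the abstract, staged inference that can be interrupted at any point with the current best disparity map, can be sketched as follows. This is a toy illustration of the anytime pattern, not the paper's actual AnyNet architecture: the class name, the coarse-to-fine scales, and the placeholder matching stage are all assumptions for demonstration.

```python
import time
import numpy as np

class AnytimeDisparity:
    """Toy sketch of anytime coarse-to-fine disparity estimation.

    A cheap low-resolution estimate is produced first; each later
    stage refines it at a finer scale. The caller may stop after
    any stage and take the current best estimate.
    """

    def __init__(self, left, right, scales=(8, 4, 2, 1)):
        self.left = left      # left image, 2-D grayscale array
        self.right = right    # right image, same shape
        self.scales = scales  # coarse-to-fine downsampling factors
        self.current = None   # best disparity estimate so far

    def _stage(self, scale):
        # Placeholder for a real cost-volume + regression stage:
        # produce a disparity map at 1/scale resolution, then
        # upsample it back to full size by pixel repetition.
        h, w = self.left.shape
        small = np.zeros((h // scale, w // scale))
        # ... real stereo matching at this scale would go here ...
        return np.kron(small, np.ones((scale, scale)))[:h, :w]

    def run(self, deadline=None):
        """Yield (scale, disparity) after each stage; honor a time budget."""
        start = time.monotonic()
        for scale in self.scales:
            self.current = self._stage(scale)
            yield scale, self.current
            if deadline is not None and time.monotonic() - start > deadline:
                break  # budget exhausted: stop with the current estimate
```

A caller with a hard real-time budget passes `deadline` in seconds and simply takes `self.current` (or the last yielded map) when the generator stops; with no deadline, all stages run and accuracy improves monotonically as in the paper's staged design.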
