Towards visual ego-motion learning in robots

Abstract

Many model-based Visual Odometry (VO) algorithms have been proposed in the past decade, often restricted to the type of camera optics or the underlying motion manifold observed. We envision robots being able to learn and perform these tasks in a minimally supervised setting as they gain more experience. To this end, we propose a fully trainable solution to visual ego-motion estimation for varied camera optics: a learning architecture that maps observed optical flow vectors to an ego-motion density estimate via a Mixture Density Network (MDN). By modeling the architecture as a Conditional Variational Autoencoder (C-VAE), our model is able to provide introspective reasoning and prediction for ego-motion-induced scene flow. Additionally, the proposed model is especially amenable to bootstrapped ego-motion learning in robots, where supervision for a particular camera sensor can be obtained from standard navigation-based sensor-fusion strategies (GPS/INS and wheel-odometry fusion). Through experiments, we show the utility of the proposed approach in enabling self-supervised learning of visual ego-motion estimation in autonomous robots.
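To make the density-estimation step concrete, below is a minimal sketch of the MDN component, assuming a PyTorch implementation. It maps a 4-D optical-flow feature (pixel location plus flow displacement) to a K-component diagonal Gaussian mixture over a 6-DoF ego-motion vector, trained with a negative log-likelihood loss against ego-motion labels such as those obtained from GPS/INS and wheel-odometry fusion. The layer sizes, the number of mixture components, and the flow parameterization are illustrative assumptions, not the paper's reported configuration.

    # Hypothetical MDN sketch for flow-to-ego-motion density estimation.
    # Dimensions and component count are illustrative, not from the paper.
    import torch
    import torch.nn as nn

    class EgoMotionMDN(nn.Module):
        def __init__(self, in_dim=4, out_dim=6, n_components=5, hidden=64):
            super().__init__()
            self.out_dim = out_dim
            self.n_components = n_components
            self.backbone = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            # Heads for mixture weights, means, and per-dimension std-devs.
            self.pi = nn.Linear(hidden, n_components)
            self.mu = nn.Linear(hidden, n_components * out_dim)
            self.log_sigma = nn.Linear(hidden, n_components * out_dim)

        def forward(self, flow):
            h = self.backbone(flow)
            log_pi = torch.log_softmax(self.pi(h), dim=-1)
            mu = self.mu(h).view(-1, self.n_components, self.out_dim)
            sigma = torch.exp(self.log_sigma(h)).view(-1, self.n_components, self.out_dim)
            return log_pi, mu, sigma

    def mdn_nll(log_pi, mu, sigma, target):
        # Negative log-likelihood of the target ego-motion under the mixture:
        # sum per-dimension log-probs (diagonal covariance), then mix components.
        log_prob = torch.distributions.Normal(mu, sigma).log_prob(target.unsqueeze(1)).sum(-1)
        return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

    # Usage: (flow, ego-motion) supervision pairs, e.g. from GPS/INS fusion.
    model = EgoMotionMDN()
    flow = torch.randn(32, 4)   # (x, y, dx, dy) per sampled pixel
    ego = torch.randn(32, 6)    # 6-DoF ego-motion target (illustrative)
    log_pi, mu, sigma = model(flow)
    loss = mdn_nll(log_pi, mu, sigma, ego)
    loss.backward()

The point of a mixture output rather than a single regressed pose is that the model can represent multimodal or uncertain ego-motion hypotheses, which is what makes a density estimate (rather than a point estimate) the natural output for introspective reasoning.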