IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2012)

Full scaled 3D visual odometry from a single wearable omnidirectional camera



Abstract

In recent years, monocular SLAM has been widely used to obtain highly accurate maps and trajectory estimates from a moving camera. One limitation of this approach is that, since depth cannot be measured from a single image, the global scale is not observable, and scene structure and camera motion can only be recovered up to scale. The problem worsens in larger scenes, where scale drift is more likely to arise between different map portions and their corresponding motion estimates. To compute the absolute scale, we need some known dimension of the scene (e.g., the actual size of a scene element, the velocity of the camera, or the baseline between two frames) and a way to integrate it into the SLAM estimation. In this paper, we present a method to recover the scale of the scene using an omnidirectional camera mounted on a helmet. The high precision of visual SLAM allows the vertical oscillation of the head during walking to be perceived in the estimated trajectory. By performing a spectral analysis of the camera's vertical displacement, we can measure the step frequency. We relate the step frequency to the speed of the camera through an empirical formula based on biomedical experiments on human walking. This speed measurement is integrated into a particle filter to estimate the current scale factor and recover the 3D motion estimate at its true scale. We evaluated our approach on image sequences acquired while a person walks. Our experiments show that the proposed approach is able to cope with scale drift.
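The spectral step-frequency measurement described in the abstract can be sketched as follows. This is a minimal illustration on a synthetic vertical-displacement signal: the band limits and the linear model in `walking_speed` (coefficients `a`, `b`) are hypothetical placeholders, not the paper's actual empirical formula from the biomedical literature.

```python
import numpy as np

def step_frequency(vertical_disp, fs):
    """Estimate the dominant step frequency (Hz) from the camera's
    vertical displacement by locating the spectral peak."""
    disp = vertical_disp - np.mean(vertical_disp)  # remove DC offset
    spectrum = np.abs(np.fft.rfft(disp))
    freqs = np.fft.rfftfreq(len(disp), d=1.0 / fs)
    # Restrict the search to a plausible walking band (~1-3 steps/s);
    # these limits are assumptions for the sketch.
    band = (freqs >= 1.0) & (freqs <= 3.0)
    return freqs[band][np.argmax(spectrum[band])]

def walking_speed(f_step, a=0.4, b=0.0):
    """Hypothetical linear model speed = a * f_step + b (m/s), standing in
    for the empirical step-frequency-to-speed formula used in the paper."""
    return a * f_step + b

# Synthetic head bob: 2 steps/s sampled at 30 Hz for 10 s.
fs = 30.0
t = np.arange(0, 10, 1 / fs)
disp = 0.02 * np.sin(2 * np.pi * 2.0 * t)  # 2 cm vertical oscillation
f = step_frequency(disp, fs)
v = walking_speed(f)
```

In the full system, the resulting speed would serve as the measurement that a particle filter fuses with the up-to-scale SLAM trajectory to track the scale factor over time.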

