The determination of one's movement through the environment (visual odometry or self-motion estimation) from monocular sources such as video is an important research problem because of its relevance to robotics and autonomous vehicles. The traditional computer vision approach to this problem tracks visual features across frames to obtain 2-D image motion estimates from which the camera motion can be derived. We present an alternative scheme that uses the properties of motion-sensitive cells in the primate brain to derive the image motion and the camera heading vector. We tested heading estimation using a camera mounted on a linear translation table, with the line of sight of the camera set at a range of angles relative to straight ahead (0° to 50° in 10° steps). The camera velocity was also varied (0.2, 0.4, 0.8, 1.2, 1.6, and 2.0 m/s). Our biologically-based method produced accurate heading estimates over a wide range of test angles and camera speeds. Our approach has the advantage of being a one-shot estimator that does not require iterative search techniques to find the heading.
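The abstract does not spell out the estimator, but the description (population of motion-sensitive cells, one-shot readout, no iterative search) is consistent with a template-matching scheme in which each candidate heading is represented by an MST-style unit tuned to the radial flow pattern expected for that heading. The sketch below illustrates that general idea under strong simplifying assumptions (pure yaw heading, pinhole camera, flow direction only); the function names, grid, and readout rule are illustrative and not taken from the paper.

```python
import numpy as np

def heading_templates(candidate_headings_deg, grid_x, grid_y, f=1.0):
    """For each candidate heading (yaw, degrees), build the expected direction
    of translational optic flow at every image location. Under pure translation
    the flow radiates from the focus of expansion; scene depth scales the flow
    speed but not its direction, so direction-only templates suffice here."""
    templates = []
    for h in np.deg2rad(candidate_headings_deg):
        foe_x = f * np.tan(h)                  # focus of expansion for a yaw-only heading
        dx = grid_x - foe_x                    # flow points radially away from the FOE
        dy = grid_y
        norm = np.hypot(dx, dy) + 1e-9
        templates.append(np.stack([dx / norm, dy / norm], axis=-1))
    return np.stack(templates)                 # shape: (n_headings, H, W, 2)

def estimate_heading(flow, candidate_headings_deg, templates):
    """One-shot readout: each template acts like a heading-tuned cell whose
    activation is the mean cosine similarity between the observed flow
    directions and the template's radial pattern; the heading estimate is the
    activation-weighted mean of the candidate headings (no iterative search)."""
    speed = np.linalg.norm(flow, axis=-1, keepdims=True) + 1e-9
    unit_flow = flow / speed
    act = np.einsum('hijk,ijk->h', templates, unit_flow) / unit_flow[..., 0].size
    act = np.maximum(act, 0.0)                 # half-wave rectification, a crude neuron analogy
    return np.sum(act * candidate_headings_deg) / np.sum(act)

# Hypothetical usage with synthetic flow for a 20-degree heading:
ys, xs = np.mgrid[-0.5:0.5:64j, -0.5:0.5:64j]
cands = np.arange(0.0, 51.0, 10.0)             # 0 to 50 degrees in 10-degree steps, as in the tests
tmpl = heading_templates(cands, xs, ys)
observed_flow = tmpl[2] * 0.01                 # reuse the 20-degree template as "observed" flow
print(estimate_heading(observed_flow, cands, tmpl))   # approximately 20.0
```

Because every template is evaluated in a single pass over the flow field, the estimate is obtained in one shot, which mirrors the advantage the abstract claims over iterative-search heading methods.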