Computer Vision and Image Understanding

A review and evaluation of methods estimating ego-motion

Abstract

If a visual observer moves through an environment, the patterns of light that impinge on its retina vary, leading to changes in sensed brightness. Spatial shifts of brightness patterns in the 2D image over time are called optic flow. In contrast to optic flow, visual motion fields denote the displacement of 3D scene points projected onto the camera's sensor surface. For translational and rotational movement through a rigid scene, parametric models of visual motion fields have been defined. Besides ego-motion, these models provide access to relative depth, and both ego-motion and depth information are useful for visual navigation. In the past 30 years, methods for ego-motion estimation based on models of visual motion fields have been developed. In this review we identify five core optimization constraints which are used by 13 methods together with different optimization techniques. In the literature, methods for ego-motion estimation have typically been evaluated using an error measure that tests only a specific ego-motion. Furthermore, most simulation studies used only a Gaussian noise model. Unlike these studies, we test multiple types and instances of ego-motion. One type is a fixating ego-motion, another type is a curvilinear ego-motion. Based on simulations we study properties such as statistical bias, consistency, variability of depths, and the robustness of the methods with respect to a Gaussian or outlier noise model. In order to improve estimates for noisy visual motion fields, some of the 13 methods are combined with techniques for robust estimation such as M-functions or RANSAC. Furthermore, a realistic scenario of a stereo image sequence has been generated and used to evaluate methods of ego-motion estimation given estimated optic flow and depth information.
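To make the parametric model referred to above concrete, one common instantiation (the standard instantaneous motion field of a pinhole camera, not necessarily the exact notation used in the review) assumes a camera with focal length f translating with t = (t_x, t_y, t_z) and rotating with \omega = (\omega_x, \omega_y, \omega_z) through a rigid scene. A scene point at image position (x, y) with depth Z then induces the visual motion

u(x,y) = \frac{x\,t_z - f\,t_x}{Z} + \frac{xy}{f}\,\omega_x - \left(f + \frac{x^2}{f}\right)\omega_y + y\,\omega_z,
\qquad
v(x,y) = \frac{y\,t_z - f\,t_y}{Z} + \left(f + \frac{y^2}{f}\right)\omega_x - \frac{xy}{f}\,\omega_y - x\,\omega_z.

The following minimal sketch shows how such a model can be combined with RANSAC for robust ego-motion estimation from a noisy flow field. It assumes inverse depth is available (e.g., from stereo), so the flow is linear in (t, \omega) and can be solved by least squares; the function names (flow_design_matrix, estimate_ego_motion_ransac), the inlier threshold, and the minimal sample size are illustrative assumptions and do not reproduce any of the 13 reviewed methods.

import numpy as np

def flow_design_matrix(x, y, inv_depth, f=1.0):
    # Linear system of the instantaneous motion-field model: for each image
    # point, [u, v] = A_i @ [t_x, t_y, t_z, w_x, w_y, w_z].
    x, y, inv_depth = map(np.asarray, (x, y, inv_depth))
    A = np.zeros((2 * len(x), 6))
    # translational columns, scaled by inverse depth 1/Z
    A[0::2, 0] = -f * inv_depth
    A[0::2, 2] = x * inv_depth
    A[1::2, 1] = -f * inv_depth
    A[1::2, 2] = y * inv_depth
    # rotational columns (independent of depth)
    A[0::2, 3] = x * y / f
    A[0::2, 4] = -(f + x ** 2 / f)
    A[0::2, 5] = y
    A[1::2, 3] = f + y ** 2 / f
    A[1::2, 4] = -x * y / f
    A[1::2, 5] = -x
    return A

def estimate_ego_motion_ransac(x, y, u, v, inv_depth, f=1.0,
                               n_iter=200, thresh=0.05, seed=None):
    # RANSAC wrapper around the linear solver: draw minimal samples of three
    # flow vectors (six equations for six unknowns), keep the hypothesis with
    # the most inliers, and refit (t, w) on the final inlier set.
    rng = np.random.default_rng(seed)
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    flow = np.empty(2 * len(u))
    flow[0::2], flow[1::2] = u, v
    A = flow_design_matrix(x, y, inv_depth, f)
    best_inliers = None
    for _ in range(n_iter):
        idx = rng.choice(len(u), size=3, replace=False)
        rows = np.concatenate([2 * idx, 2 * idx + 1])
        params, *_ = np.linalg.lstsq(A[rows], flow[rows], rcond=None)
        residual = (A @ params).reshape(-1, 2) - np.column_stack([u, v])
        inliers = np.linalg.norm(residual, axis=1) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    rows = np.flatnonzero(np.repeat(best_inliers, 2))
    params, *_ = np.linalg.lstsq(A[rows], flow[rows], rcond=None)
    return params[:3], params[3:]   # translation and rotation estimates

Note that when absolute depth is unknown, translation and depth share a common scale, so only the direction of the translation can be recovered; evaluations therefore typically compare translation directions rather than magnitudes.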
