
Autonomous vision-based terrain-relative navigation for planetary exploration


Abstract

Interest among the world's major space agencies in vision sensors for their mission designs has been increasing over the years. Indeed, cameras offer an efficient solution to ever-increasing performance requirements. In addition, these sensors are multipurpose, lightweight, proven, and low-cost. Several researchers in vision sensing for space applications currently focus on navigation systems for autonomous pin-point planetary landing and for sample-return missions to small bodies. Without a Global Positioning System (GPS) or radio beacons around celestial bodies, high-accuracy navigation near them is a complex task. Most navigation systems rely solely on accurate initialization of the states and on integration of the acceleration and angular-rate measurements from an Inertial Measurement Unit (IMU). This strategy can track sudden motions of short duration very accurately, but its estimate diverges over time and normally leads to large landing errors. To improve navigation accuracy, many authors have proposed fusing these IMU measurements with vision measurements using state estimators, such as Kalman filters. The first proposed vision-based navigation approach relies on feature tracking between sequences of images taken in real time during orbiting and/or landing operations. In that case, image features are image pixels that have a high probability of being recognized between images taken from different camera locations. By detecting and tracking these features through a sequence of images, the relative motion of the spacecraft can be determined. This technique, referred to as Terrain-Relative Relative Navigation (TRRN), relies on relatively simple, robust, and well-developed image-processing techniques, and it allows determination of the relative motion (velocity) of the spacecraft.
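The IMU/vision fusion strategy described above can be sketched with a minimal linear Kalman filter: the IMU acceleration drives the prediction step, and a vision-derived position fix corrects the drifting dead-reckoned estimate. All dimensions, noise levels, and trajectory values below are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

# 1-D sketch of IMU/vision fusion. State x = [position, velocity];
# the IMU supplies acceleration for the prediction step, and a
# vision-based fix supplies a position-only update.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
B = np.array([[0.5 * dt**2], [dt]])     # acceleration (control) input
H = np.array([[1.0, 0.0]])              # vision measures position only
Q = 1e-3 * np.eye(2)                    # process (IMU) noise, assumed
R = np.array([[0.5]])                   # vision measurement noise, assumed

def predict(x, P, accel):
    """Propagate the state with the IMU acceleration measurement."""
    x = F @ x + B @ np.array([accel])
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Correct the drifting IMU estimate with a vision position fix."""
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Constant-acceleration trajectory; vision fixes arrive every 10 steps,
# bounding the covariance growth of IMU-only dead reckoning.
x, P = np.zeros(2), np.eye(2)
for k in range(50):
    x, P = predict(x, P, accel=0.2)
    if k % 10 == 9:
        t = (k + 1) * dt
        x, P = update(x, P, z=np.array([0.5 * 0.2 * t**2]))
```

The periodic `update` calls are what keep the estimate bounded: between fixes, the covariance `P` grows with every IMU integration step, exactly the divergence the abstract describes for IMU-only navigation.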
Although this technology has been demonstrated with space-qualified hardware, its gain in accuracy remains limited, since the spacecraft's absolute position is not observable from the vision measurements. The vision-based navigation techniques currently under study consist of identifying features and mapping them into an on-board cartographic database indexed by an absolute coordinate system, thereby providing absolute position determination. This technique, referred to as Terrain-Relative Absolute Navigation (TRAN), relies on very complex Image Processing Software (IPS) that clearly lacks robustness. Such software often depends on the spacecraft's attitude and position; it is sensitive to illumination conditions (the elevation and azimuth of the Sun when the geo-referenced database is built must be similar to those present during the mission); it is strongly affected by image noise; and it struggles to handle the variety of terrain seen during a single mission (the spacecraft may fly over plains as well as mountainous regions, and the images may contain old craters with noisy rims as well as young craters with clean rims). To date, no real-time hardware-in-the-loop experiment has been conducted to demonstrate the applicability of this technology to space missions. The main objective of the current study is to develop autonomous vision-based navigation algorithms that provide absolute position and surface-relative velocity during the proximity operations of a planetary mission (orbiting phase and landing phase), using a combined approach of TRRN and TRAN technologies. The contributions of the study are: (1) definition of a reference mission, (2) advancements in TRAN theory (image processing as well as state estimation), and (3) practical implementation of vision-based navigation.
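The core TRAN idea of matching detected features against a geo-referenced on-board database can be illustrated with a toy example: given known correspondences between landmark positions observed in the camera frame and their absolute map coordinates, the spacecraft's absolute position is the translation that best aligns the two sets. The landmark coordinates and the simple least-squares translation fit below are illustrative stand-ins for the full image-processing pipeline discussed above.

```python
import numpy as np

# Geo-referenced landmark (e.g. crater-center) coordinates from the
# on-board cartographic database, in an absolute ground frame.
# Values are made up for illustration.
map_landmarks = np.array([[10.0, 20.0],
                          [35.0,  5.0],
                          [50.0, 40.0]])

true_position = np.array([12.0, 8.0])     # unknown to the navigator

# The same landmarks as observed relative to the spacecraft
# (camera-frame measurements, noise-free for this sketch).
observed = map_landmarks - true_position

def estimate_absolute_position(map_pts, obs_pts):
    """Least-squares translation aligning observations to the map.

    With known landmark correspondences, the optimal translation is
    simply the mean offset between matched point pairs.
    """
    return np.mean(map_pts - obs_pts, axis=0)

pos = estimate_absolute_position(map_landmarks, observed)
```

In a real TRAN system, establishing those correspondences under varying illumination, noise, and terrain types is precisely the hard, robustness-critical part the abstract highlights; the alignment step itself is comparatively simple.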

Bibliographic record

  • Author

    Simard Bilodeau Vincent;

  • Affiliation
  • Year 2015
  • Total pages
  • Format PDF
  • Language eng
  • CLC classification
