
Cooperative Relative Navigation for Space Rendezvous and Proximity Operations using Controlled Active Vision



Abstract

This work aims to solve the problem of relative navigation for space rendezvous and proximity operations using a monocular camera in a numerically efficient manner. It is assumed that the target spacecraft carries a special pattern to aid the task of relative pose estimation, and that the chaser spacecraft uses a monocular camera as its primary visual sensor. In this sense, the problem falls under the category of cooperative relative navigation in orbit. While existing systems for cooperative localization with fiducial markers allow full six-degrees-of-freedom pose estimation, most of them are unsuitable for in-space cooperative navigation (especially when a small chaser spacecraft is involved) due to their computational cost. Moreover, most existing fiducial-based localization methods are designed for ground-based applications with limited range (e.g., ground robotics, augmented reality), and their performance deteriorates under large changes in scale, such as those encountered in space applications. Using an adaptive visual algorithm, we propose an accurate and numerically efficient approach for real-time vision-based relative navigation, designed specifically for space robotics applications. The proposed method achieves low computational cost, high accuracy, and robustness via the following innovations: first, an adaptive visual pattern detection scheme based on the estimated relative pose, which improves both the efficiency of detection and the accuracy of the pose estimates; second, a computationally efficient parametric blob detector called Box-LoG; and third, a fast and robust algorithm that jointly solves the data association and pose estimation problems. In addition to achieving accuracy comparable to state-of-the-art cooperative localization algorithms, our method demonstrates a significant improvement in speed and robustness in scenarios with large range changes. A vision-based closed-loop experiment on the Autonomous Spacecraft Testing of Robotic Operations in Space (ASTROS) testbed demonstrates the performance benefits of the proposed approach.
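The Box-LoG detector is only named in the abstract, but the underlying idea (approximating the Laplacian-of-Gaussian blob response with box filters, which run in constant time per pixel thanks to integral-image summation) can be sketched briefly. The Python fragment below is a minimal illustration under that assumption; the function names, filter sizes, and threshold are hypothetical choices for the sketch, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): a difference of two normalized box
# filters as a cheap surrogate for the Laplacian-of-Gaussian blob response.
# Filter sizes and the detection threshold are illustrative assumptions.
import cv2
import numpy as np

def box_log_response(gray: np.ndarray, inner: int = 9, outer: int = 27) -> np.ndarray:
    """Band-pass response that peaks on blobs near the chosen scale.

    cv2.boxFilter uses running sums internally, so the cost per pixel is
    constant regardless of kernel size -- the property that makes box
    approximations of the LoG attractive for onboard, real-time use.
    """
    img = gray.astype(np.float32)
    small = cv2.boxFilter(img, -1, (inner, inner))  # local mean at blob scale
    large = cv2.boxFilter(img, -1, (outer, outer))  # mean of the surround
    return small - large

def detect_blobs(gray: np.ndarray, thresh: float = 8.0) -> np.ndarray:
    """Return centroids of bright blobs whose response exceeds the threshold."""
    resp = box_log_response(gray)
    mask = (resp > thresh).astype(np.uint8)
    _, _, _, centroids = cv2.connectedComponentsWithStats(mask)
    return centroids[1:]  # row 0 is the background component
```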
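Likewise, the joint data association and pose estimation step can be illustrated, again only as an assumption-laden sketch rather than the paper's actual algorithm: project the known 3D pattern geometry through the previous pose estimate, gate each projected point to its nearest detected blob, and refine with a robust PnP solve. All names and parameters below are hypothetical.

```python
# Hypothetical sketch of pose-prior-guided data association followed by a
# robust PnP refinement; the paper's joint solver is more sophisticated.
import cv2
import numpy as np

def associate_and_estimate(pattern_3d, blobs_2d, K, rvec0, tvec0,
                           gate_px=15.0):
    """pattern_3d: (N,3) known marker coordinates in the target frame.
    blobs_2d: (M,2) detected blob centroids. K: 3x3 camera intrinsics.
    rvec0/tvec0: pose prior from the previous frame (assumed available).
    """
    if len(blobs_2d) == 0:
        return None

    # 1. Predict where each pattern point should appear under the prior pose.
    proj, _ = cv2.projectPoints(np.float32(pattern_3d), rvec0, tvec0,
                                np.float32(K), None)
    proj = proj.reshape(-1, 2)

    # 2. Nearest-neighbor association within a pixel gate around each
    #    predicted location.
    obj_pts, img_pts = [], []
    for p3, p2 in zip(pattern_3d, proj):
        d = np.linalg.norm(blobs_2d - p2, axis=1)
        j = int(np.argmin(d))
        if d[j] < gate_px:
            obj_pts.append(p3)
            img_pts.append(blobs_2d[j])
    if len(obj_pts) < 4:
        return None  # too few correspondences for a PnP solve

    # 3. Robust pose refinement from the associated correspondences,
    #    seeded with the prior pose.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.float32(obj_pts), np.float32(img_pts), np.float32(K), None,
        rvec=np.float32(rvec0), tvec=np.float32(tvec0),
        useExtrinsicGuess=True)
    return (rvec, tvec) if ok else None
```

This also mirrors the adaptive detection idea described in the abstract: the better the pose prior, the tighter the association gate can be made.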

Bibliographic Information

  • Source
    Journal of Field Robotics | 2016, Issue 2 | pp. 205-228 | 24 pages
  • Author Affiliations

    School of Electrical & Computer Engineering, Institute for Robotics & Intelligent Machines, Georgia Institute of Technology, Atlanta, Georgia 30332;

    School of Aerospace Engineering, Institute for Robotics & Intelligent Machines, Georgia Institute of Technology, Atlanta, Georgia 30332;

    School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332;

    School of Aerospace Engineering, Institute for Robotics & Intelligent Machines, Georgia Institute of Technology, Atlanta, Georgia 30332;

    School of Electrical & Computer Engineering, Institute for Robotics & Intelligent Machines, Georgia Institute of Technology, Atlanta, Georgia 30332;

  • Indexing Information
  • Original Format: PDF
  • Language: English (eng)
  • CLC Classification
  • Keywords
