IEEE Transactions on Aerospace and Electronic Systems

Neural Network-Based Pose Estimation for Noncooperative Spacecraft Rendezvous



Abstract

This article presents the Spacecraft Pose Network (SPN), the first neural network-based method for on-board estimation of the pose, i.e., the relative position and attitude, of a known noncooperative spacecraft using monocular vision. In contrast to other state-of-the-art pose estimation approaches for spaceborne applications, the SPN method does not require the formulation of hand-engineered features and only requires a single grayscale image to determine the pose of the spacecraft relative to the camera. The SPN method uses a convolutional neural network (CNN) with three branches to solve the problem of relative attitude estimation. The first branch of the CNN bootstraps a state-of-the-art object detection algorithm to detect a 2-D bounding box around the target spacecraft in the input image. The region inside the 2-D bounding box is then used by the other two branches of the CNN to determine the relative attitude, first classifying the input region into discrete coarse attitude labels and then regressing to a finer estimate. The SPN method then estimates the relative position by using the constraints imposed by the detected 2-D bounding box and the estimated relative attitude. Further, by detecting 2-D bounding boxes of subcomponents of the target spacecraft, the SPN method generalizes easily to estimate the pose of multiple target geometries. Finally, to facilitate integration with navigation filters and perform continuous pose tracking, the SPN method estimates the uncertainty associated with the estimated pose. The secondary contribution of this article is the generation of the Spacecraft PosE Estimation Dataset (SPEED), which is used to train and evaluate the performance of the SPN method. SPEED consists of synthetic as well as actual camera images of a mock-up of the Tango spacecraft from the PRISMA mission. The synthetic images are created by fusing OpenGL-based renderings of the spacecraft's 3-D model with actual images of the Earth captured by the Himawari-8 meteorological satellite. The actual camera images are created using a seven degrees-of-freedom robotic arm, which positions and orients a vision-based sensor with respect to a full-scale mock-up of the Tango spacecraft with submillimeter and submillidegree accuracy. The SPN method, trained only on synthetic images, produces degree-level relative attitude errors and centimeter-level relative position errors when evaluated on actual camera images drawn from a distribution not seen during training.
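To make the three-branch architecture concrete, the sketch below shows one way such a network could be laid out in PyTorch. It is an illustrative reconstruction, not the authors' implementation: the backbone layers, feature dimensions, and the number of coarse attitude bins (n_coarse) are assumptions, and the paper's first branch builds on a state-of-the-art object detector rather than the toy bounding-box regressor used here.

```python
# Minimal sketch of a three-branch pose network in the spirit of SPN.
# All layer sizes and the bin count are illustrative assumptions.
import torch
import torch.nn as nn

class SPNSketch(nn.Module):
    def __init__(self, n_coarse: int = 64):
        super().__init__()
        # Shared convolutional feature extractor over a grayscale image.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        feat = 64 * 8 * 8
        # Branch 1: regress a 2-D bounding box (x_min, y_min, x_max, y_max).
        self.bbox_head = nn.Linear(feat, 4)
        # Branch 2: classify into discrete coarse attitude labels.
        self.coarse_head = nn.Linear(feat, n_coarse)
        # Branch 3: regress a finer attitude estimate as a unit quaternion.
        self.fine_head = nn.Linear(feat, 4)

    def forward(self, img: torch.Tensor):
        f = self.backbone(img).flatten(1)
        bbox = self.bbox_head(f)
        coarse_logits = self.coarse_head(f)
        quat = self.fine_head(f)
        quat = quat / quat.norm(dim=1, keepdim=True)  # normalize to unit norm
        return bbox, coarse_logits, quat

# A single grayscale image in, pose-related outputs out.
net = SPNSketch()
bbox, coarse, q = net(torch.randn(1, 1, 128, 128))
```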
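The relative position step, which the abstract describes as constrained by the detected bounding box and the estimated attitude, can likewise be sketched under a pinhole camera model. The version below is a deliberately simplified closed form: depth follows from similar triangles between the rotated model's metric span and the bounding-box width in pixels, and the lateral offsets come from back-projecting the box center. The intrinsics, model geometry, and this shortcut are all illustrative assumptions; the paper's actual constrained formulation is richer.

```python
# Simplified recovery of relative position from a 2-D bounding box and an
# estimated attitude under a pinhole camera. Illustrative, not the paper's
# exact procedure.
import numpy as np

def position_from_bbox(bbox, R, model_pts, fx, fy, cx, cy):
    """bbox = (u_min, v_min, u_max, v_max) in pixels.
    R rotates model points into the camera frame.
    model_pts: (N, 3) vertices of the known 3-D model, in meters."""
    u_min, v_min, u_max, v_max = bbox
    pts_cam = model_pts @ R.T  # apply the estimated attitude to the model
    # Metric span of the rotated model along the camera x axis versus the
    # bounding-box width in pixels gives depth by similar triangles.
    span_x = pts_cam[:, 0].max() - pts_cam[:, 0].min()
    z = fx * span_x / (u_max - u_min)
    # Back-project the box center to get the lateral offsets.
    u_c, v_c = 0.5 * (u_min + u_max), 0.5 * (v_min + v_max)
    x = (u_c - cx) * z / fx
    y = (v_c - cy) * z / fy
    return np.array([x, y, z])

# Toy example: a 1 m cube, identity attitude, 1000 px focal length.
cube = np.array([[sx, sy, sz] for sx in (-.5, .5)
                 for sy in (-.5, .5) for sz in (-.5, .5)])
t = position_from_bbox((300, 220, 500, 420), np.eye(3),
                       cube, 1000.0, 1000.0, 400.0, 320.0)
print(t)  # depth of 5 m for a 1 m object spanning a 200 px-wide box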

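The synthetic half of SPEED fuses OpenGL renderings of the Tango model with real Himawari-8 Earth imagery. At its core this is standard "over" alpha compositing, sketched below with Pillow under the assumption that a render has been saved with a transparent background; the file names are placeholders, not paths from the dataset.

```python
# Alpha-blend a rendered spacecraft (RGBA, transparent background) over an
# Earth image, then collapse to grayscale as in SPEED. Paths are placeholders.
from PIL import Image

render = Image.open("tango_render.png").convert("RGBA")    # placeholder
earth = Image.open("earth_himawari8.png").convert("RGBA")  # placeholder
earth = earth.resize(render.size)

# Standard "over" compositing: render in front, Earth behind.
composite = Image.alpha_composite(earth, render)

# SPEED images are grayscale, so keep a single channel.
composite.convert("L").save("composite_gray.png")
```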