...
Journal: Aerospace

Vision-Based Spacecraft Pose Estimation via a Deep Convolutional Neural Network for Noncooperative Docking Operations


Abstract

The capture of a target spacecraft by a chaser is an on-orbit docking operation that requires an accurate, reliable, and robust object recognition algorithm. Vision-based guidance of spacecraft relative motion during close-proximity maneuvers has been successively applied, using dynamic modeling, as part of spacecraft on-orbit servicing systems. This research constructs a vision-based pose estimation model that performs image processing via a deep convolutional neural network. The pose estimation model was built by repurposing a modified pretrained GoogLeNet model with the available Unreal Engine 4-rendered dataset of the Soyuz spacecraft. In the implementation, the convolutional neural network learns from the data samples to create correlations between the images and the spacecraft’s six degrees-of-freedom parameters. The experiment compared an exponential-based loss function and a weighted Euclidean-based loss function. Using the weighted Euclidean-based loss function, the implemented pose estimation model achieved moderately high performance, with a position accuracy of 92.53 percent and an error of 1.2 m. The attitude prediction accuracy can reach 87.93 percent, and the errors in the three Euler angles do not exceed 7.6 degrees. This research can contribute to spacecraft detection and tracking problems. Although the finished vision-based model is specific to the environment of the synthetic dataset, the model could be trained further to address actual docking operations in the future.
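
To make the described setup concrete, the sketch below shows one way to repurpose a pretrained GoogLeNet for six degrees-of-freedom pose regression and to compute a weighted Euclidean pose loss in PyTorch. This is a minimal illustration under stated assumptions, not the authors' implementation: the position-plus-quaternion output layout, the weighting factor beta, and the names build_pose_net and WeightedEuclideanPoseLoss are hypothetical, and the pretrained backbone comes from torchvision (version 0.13 or later for the weights argument).

```python
# Minimal sketch (assumptions): PyTorch + torchvision, pose encoded as a
# 3-D position plus unit quaternion (7 values), PoseNet-style weighted loss.
import torch
import torch.nn as nn
from torchvision import models


def build_pose_net() -> nn.Module:
    """Repurpose a pretrained GoogLeNet: replace the 1000-class classifier
    head with a 7-unit regression head (x, y, z, qw, qx, qy, qz)."""
    net = models.googlenet(weights="DEFAULT")   # ImageNet-pretrained backbone
    net.fc = nn.Linear(net.fc.in_features, 7)   # 1024 features -> 7 pose parameters
    return net


class WeightedEuclideanPoseLoss(nn.Module):
    """L = ||t_pred - t_true|| + beta * ||q_pred/|q_pred| - q_true||.
    beta balances metre-scale translation error against quaternion error."""

    def __init__(self, beta: float = 500.0):    # beta value is an assumption
        super().__init__()
        self.beta = beta

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        t_pred, q_pred = pred[:, :3], pred[:, 3:]
        t_true, q_true = target[:, :3], target[:, 3:]
        q_pred = q_pred / q_pred.norm(dim=1, keepdim=True)  # normalise prediction
        pos_err = (t_pred - t_true).norm(dim=1)
        rot_err = (q_pred - q_true).norm(dim=1)
        return (pos_err + self.beta * rot_err).mean()


if __name__ == "__main__":
    net = build_pose_net()
    loss_fn = WeightedEuclideanPoseLoss(beta=500.0)
    images = torch.randn(4, 3, 224, 224)        # stand-in for rendered Soyuz images
    poses = torch.randn(4, 7)                   # stand-in ground-truth poses
    poses[:, 3:] /= poses[:, 3:].norm(dim=1, keepdim=True)
    loss = loss_fn(net(images), poses)
    loss.backward()
    print(float(loss))
```

The beta term reflects the usual PoseNet-style trade-off between metre-scale position error and unit-quaternion orientation error; the exponential-based loss mentioned in the abstract would replace this fixed weighting with a different formulation.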
