International Conference on Unmanned Aircraft Systems

RCPNet: Deep-Learning based Relative Camera Pose Estimation for UAVs



Abstract

In this paper, we propose a deep neural-network based regression approach, combined with a 3D structure based computer vision method, to solve the relative camera pose estimation problem for autonomous navigation of UAVs. Different from existing learning-based methods that train and test camera pose estimation in the same scene, our method succeeds in estimating relative camera poses across various urban scenes via a single trained model. We also built a Tuebingen Buildings database of RGB images collected by a drone in eight urban scenes. Over 10,000 images with corresponding 6DoF poses as well as 300,000 image pairs with their relative translational and rotational information are included in the dataset. We evaluate the accuracy of our method in the same scene and across scenes, using the Cambridge Landmarks dataset and the Tuebingen Buildings dataset. We compare the performance with existing learning-based pose regression methods PoseNet and RPNet on these two benchmark datasets.
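The dataset pairs images with relative translational and rotational labels. Such labels can be derived from two absolute 6DoF (world-to-camera) poses; the sketch below illustrates this with rotation matrices in NumPy. It is an illustration of the standard geometry, not the paper's own pipeline, and the function name is hypothetical.

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Pose taking camera 1's frame to camera 2's frame.

    Assumes world-to-camera convention: x_cam = R @ x_world + t.
    Derivation: x_world = R1.T @ (x_c1 - t1), so
    x_c2 = R2 @ R1.T @ x_c1 + (t2 - R2 @ R1.T @ t1).
    """
    R_rel = R2 @ R1.T
    t_rel = t2 - R_rel @ t1
    return R_rel, t_rel

# Example: camera 2 at the world origin, camera 1 shifted by one unit.
R1, t1 = np.eye(3), np.array([1.0, 0.0, 0.0])
R2, t2 = np.eye(3), np.zeros(3)
R_rel, t_rel = relative_pose(R1, t1, R2, t2)
```

In practice the rotation component is often regressed as a unit quaternion rather than a matrix; the conversion does not change the relative-pose relation above.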
