Position Location and Navigation Symposium (PLANS), 2012 IEEE/ION

Inertial and imaging sensor fusion for image-aided navigation with affine distortion prediction



Abstract

The Air Force Institute of Technology's Advanced Navigation Technology center has invested significant research time and effort into alternative precision navigation methods, in an effort to counteract the increasing dependence on the Global Positioning System (GPS) for precision navigation. The use of visual sensors has since emerged as a valuable and feasible precision navigation alternative which, when coupled with inertial navigation sensors, can reduce navigation estimation errors by approximately two orders of magnitude [1] compared to inertial-only solutions. A key component of many image-aided navigation algorithms is the requirement to detect and track salient features over many frames of an image sequence. However, feature matching accuracy is drastically reduced when the image sets differ in 3-D pose, due to the affine distortions induced on feature descriptors [2]. In this research, this is counteracted by digitally simulating affine distortions on input images in order to calculate more accurate feature descriptors, which provide improved matching across large changes in viewpoint. These techniques are experimentally demonstrated in an outdoor environment with a consumer-grade inertial sensor and three imaging sensors, one of which is oriented orthogonally to the others. False matches generated by the orthogonal camera are shown to degrade the navigation solution if the change in 3-D pose is not accounted for. Using a tactical-grade inertial sensor coupled with GPS position data as the truth source, the improved image-aided navigation algorithm, which accounts for changes in 3-D pose, is shown to reduce navigation errors by 24% in position, 16% in velocity, and 35% in attitude compared to the standard two-camera image-aided navigation setup.
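The abstract's core mechanism is simulating affine distortions of the input image before descriptor extraction, so that descriptors computed on the warped image match features seen from a very different viewpoint. The paper's exact pipeline is not given in the abstract; the sketch below is a minimal, hypothetical NumPy illustration of the ASIFT-style tilt/rotation parameterization of such warps [2] (the function names `affine_warp` and `simulated_affine` are this sketch's own, not the authors'). With an inertial attitude prediction available, a single predicted warp of this form could be applied instead of sampling many.

```python
import numpy as np

def affine_warp(img, A):
    """Warp a grayscale image by a 2x2 affine matrix about its center,
    using inverse mapping with nearest-neighbour sampling."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    Ainv = np.linalg.inv(A)
    ys, xs = np.mgrid[0:h, 0:w]
    # Map each output pixel back into the source image.
    coords = Ainv @ np.stack([xs.ravel() - cx, ys.ravel() - cy])
    sx = np.rint(coords[0] + cx).astype(int)
    sy = np.rint(coords[1] + cy).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(img)
    out.ravel()[valid] = img[sy[valid], sx[valid]]
    return out

def simulated_affine(tilt, phi):
    """ASIFT-style camera-tilt model: in-plane rotation by phi followed
    by a 1/tilt compression along x, approximating an off-axis view."""
    c, s = np.cos(phi), np.sin(phi)
    R = np.array([[c, -s], [s, c]])
    T = np.array([[1.0 / tilt, 0.0], [0.0, 1.0]])
    return T @ R

# Descriptors (e.g. SIFT) would then be computed on each warped image,
# giving matches that survive large viewpoint changes.
img = (np.arange(64).reshape(8, 8) % 7).astype(float)
warped = affine_warp(img, simulated_affine(tilt=2.0, phi=np.pi / 6))
```

In a full system, a small grid of (tilt, phi) pairs is sampled when no pose prediction exists; the inertially aided variant described here can instead predict the one warp implied by the change in 3-D pose, avoiding the false matches the abstract attributes to the orthogonal camera.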
