IEEE Transactions on Cybernetics

A Robust Vision-Based Sensor Fusion Approach for Real-Time Pose Estimation



Abstract

Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
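The abstract describes fusing pose measurements from multiple cameras with a Kalman-based scheme. As an illustration only, the sketch below shows a generic Kalman filter that fuses noisy per-camera position measurements and simply skips cameras that report nothing in a given frame (e.g., due to occlusion or failure). The constant-velocity state model, the `MultiCameraPoseFusion` class name, and all noise parameters are assumptions made for this sketch; they are not the paper's actual formulation.

```python
# Minimal sketch of Kalman-style multi-camera fusion (NOT the paper's method):
# constant-velocity state, position-only measurements, one update per camera.
import numpy as np

class MultiCameraPoseFusion:
    def __init__(self, dt=1/30, process_var=1e-3, meas_var=1e-2):
        # State: [x, y, z, vx, vy, vz]; each camera measures 3-D position.
        self.x = np.zeros(6)
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)                     # constant-velocity transition
        self.Q = process_var * np.eye(6)                    # process noise
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])   # observe position only
        self.R = meas_var * np.eye(3)                       # per-camera measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        # Standard Kalman update for one camera's position measurement z (shape (3,)).
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

    def step(self, camera_measurements):
        # Fuse every camera that produced a measurement this frame;
        # occluded or failed cameras (None) contribute nothing.
        self.predict()
        for z in camera_measurements:
            if z is not None:
                self.update(np.asarray(z, dtype=float))
        return self.x[:3]

if __name__ == "__main__":
    fusion = MultiCameraPoseFusion()
    true_pos = np.array([0.5, 0.2, 1.0])
    rng = np.random.default_rng(0)
    for _ in range(100):
        cams = [true_pos + 0.1 * rng.standard_normal(3),   # camera 1
                true_pos + 0.1 * rng.standard_normal(3),   # camera 2
                None]                                       # camera 3 occluded
        est = fusion.step(cams)
    print("fused position estimate:", est)
```

Sequentially applying one Kalman update per available camera is one simple way to make the filter degrade gracefully when a sensor drops out, which is the kind of robustness to occlusion and sensor failure the abstract refers to.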

