ASME International Mechanical Engineering Congress and Exposition

VISION-BASED TRAJECTORY TRACKING APPROACH FOR MOBILE PLATFORMS IN 3D WORLD USING 2D IMAGE SPACE


Abstract

Vision-based target following with a camera system mounted on a mobile platform is a challenging problem. We consider a scenario in which the platform/camera system must follow a desired trajectory of relative position and orientation (pose) with respect to a target object. It is assumed that the actual pose of the mobile platform relative to the target is not measured by a Global Positioning System (GPS) and/or an Inertial Measurement Unit (IMU). For trajectory-tracking feedback control, the error in the relative pose of the mobile platform with respect to the target must be computed, and in the absence of GPS/IMU signals this pose error must be calculated using a vision-based approach. In this paper, we introduce a fast alternative vision-based approach for real-time calculation of the error in the relative pose between the mobile platform and the target. The proposed approach is called PIVOT: Positioning and Orienting using Vision-based Object Tracking. PIVOT calculates the 3D pose errors of the mobile platform from the 2D image space; the only information it requires for proper tracking is the desired 3D pose and the coordinates of selected feature points on the target. PIVOT forms a desired target image, compares it with the current target image, and outputs the 3D translation and rotation of the platform/camera required to correct the image error. This required correction is fed to a feedback controller that drives the mobile platform in the direction that reduces the image error; when the image error vanishes, the mobile platform is moving on its desired trajectory. We have performed a set of experiments with the proposed PIVOT approach to demonstrate the effectiveness of the theoretical framework. According to the simulation results, PIVOT provides accurate pose errors for all test cases.
The formulation of the approach is general, so it can be applied to mobile platforms that move in 3D as well as in 2D. Our first simulated and experimental tests will be on a mobile robot.
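The abstract does not give PIVOT's exact formulation, but the loop it describes — compare desired and current feature-point images, then output a 3D platform/camera correction — can be illustrated with a classical image-based visual servoing sketch. Everything below (the point-feature interaction matrix, the `pose_correction` helper, normalized image coordinates, known depths) is an assumption for illustration, not the paper's actual method.

```python
import numpy as np

def interaction_matrix(points, depths, f=1.0):
    """Stack the classical 2x6 point-feature image Jacobian for each
    normalized image point (x, y) at depth Z. Each block maps a camera
    twist [vx, vy, vz, wx, wy, wz] to the image-point velocity (xdot, ydot).
    (Hypothetical helper; feature depths Z are assumed known.)"""
    rows = []
    for (x, y), Z in zip(points, depths):
        rows.append([-f / Z, 0.0, x / Z, x * y / f, -(f + x * x / f), y])
        rows.append([0.0, -f / Z, y / Z, f + y * y / f, -x * y / f, -x])
    return np.array(rows)

def pose_correction(current_pts, desired_pts, depths, gain=0.5):
    """Compare the current feature image against the desired one and
    return a 6-DOF camera velocity command that reduces the image error
    (standard IBVS law v = -gain * pinv(L) @ e)."""
    e = (np.asarray(current_pts) - np.asarray(desired_pts)).ravel()
    L = interaction_matrix(current_pts, depths)
    return -gain * np.linalg.pinv(L) @ e
```

Fed to a platform controller each frame, this command moves the camera so the current feature points converge to the desired ones; when the image error is zero, the returned velocity is the zero twist and the platform is on its desired trajectory.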

