IEEE Transactions on Industrial Electronics

A 3-D Vision-Based Man-Machine Interface For Hand-Controlled Telerobot



Abstract

This paper presents a robust telerobotic system that consists of a real-time vision-based operator hand tracking system (client) and a slave robot (server) interconnected through a LAN. The tracking system: 1) monitors the operator's hand motion and 2) determines its position and orientation, which are used to control the slave robot. Two digital cameras monitor a four-ball feature frame held in the operator's hand. To determine the three-dimensional (3-D) position, a tracking algorithm based on uncalibrated cameras with a weak perspective projection model is used; this allows finding the 3-D differential position and orientation of the operator's hand. The features of the proposed system are: 1) a metric for color matching to discriminate the balls from their background; 2) a uniform and spiral search approach to speed up detection; 3) tracking in the presence of partial occlusion; 4) consolidation of detection by shape and geometric matching; and 5) dynamic update of the reference colors. The operator can see the effects of the previous motion, which enables the necessary corrections to be made through repetitive operator hand-eye interactions. Evaluation shows that the static and dynamic errors of the tracking algorithm are 0.1% and 0.6% for a centered workspace of 203 in³ located 40-60 in from the cameras. Running the tracking algorithm on two PCs in parallel allowed: 1) a parallel image-grabbing delay of 60 ms; 2) a stereo matching delay of 50 ms; and 3) a global refresh rate of 9 Hz.
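Note on the projection model: the abstract does not give the paper's exact formulation, so the display below is only the textbook form of weak perspective (scaled orthographic) projection, in which a scene point at camera coordinates (X, Y, Z) maps to the image as a uniform scaling of its X and Y components:

\[
u \approx \frac{f}{\bar{Z}}\, X, \qquad v \approx \frac{f}{\bar{Z}}\, Y,
\]

where f is the focal length and \(\bar{Z}\) is the mean depth of the tracked feature frame. The approximation is reasonable in this setting because the depth spread of the four balls is small compared with the 40-60 in distance between the frame and the cameras, which is consistent with the abstract's use of uncalibrated cameras for recovering differential position and orientation.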
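As an illustration of features 1) and 5) in the list above, the following is a minimal Python sketch of one plausible color-matching metric with dynamic update of the reference colors. The distance in rg-chromaticity space, the blending rule, and the factor alpha are assumptions for illustration; the abstract does not specify the metric or update rule actually used in the paper.

import numpy as np

def color_distance(pixel_rgb, ref_rgb):
    # Distance in normalized rg-chromaticity space: one plausible metric
    # for discriminating the colored balls from the background under
    # varying brightness (assumed here; the paper's metric is not given
    # in the abstract).
    def chroma(c):
        c = np.asarray(c, dtype=float)
        s = c.sum()
        return c[:2] / s if s > 0 else c[:2]
    return float(np.linalg.norm(chroma(pixel_rgb) - chroma(ref_rgb)))

def update_reference(ref_rgb, matched_pixels, alpha=0.1):
    # Dynamic update of a reference color: blend the stored reference with
    # the mean color of the pixels matched in the current frame, so slow
    # illumination changes do not break the matching.
    mean_rgb = np.mean(np.asarray(matched_pixels, dtype=float), axis=0)
    return (1.0 - alpha) * np.asarray(ref_rgb, dtype=float) + alpha * mean_rgb

In such a scheme, a pixel would be accepted as part of a ball when color_distance falls below a threshold, and the accepted pixels would feed update_reference once per frame.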
