Conference: IFAC/IFIP/IEEE Conference on Management and Control of Production and Logistics (MCPL 2000)
Visual servoing of robotic manipulator in a virtual learned articular space

Abstract

A position control approach for a robotic manipulator based on visual feedback is presented. This feedback, originating from a fixed stereo camera pair, is converted into a Virtual Articular Space, so called because it is an approximation of the real articular space of the robot. This approximation is generated by a multilayered neural network trained to build the correspondence between the visual information of the robot hand and the articular position of the arm. By using this mapping we avoid the complexity of the analytic approach, which requires both the robot inverse kinematics and the inverse camera-space mappings, including calibration. The approach is tested experimentally in real time on a 5-degree-of-freedom laboratory manipulator, including the required cameras and image-processing boards.
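The core idea above — training a multilayered network so that stereo image coordinates of the hand map directly to (approximate) joint angles, bypassing inverse kinematics and camera calibration — can be sketched as follows. This is not the authors' code: the 2-DOF planar arm, the synthetic 4-D "stereo" feature vector (a stand-in for left/right pixel coordinates), and all network sizes and learning rates are illustrative assumptions.

```python
# Hedged sketch: learn a "virtual articular space" mapping from stereo image
# features of the hand to joint angles, so the servo loop never needs an
# explicit camera model or inverse kinematics. Toy 2-DOF planar arm.
import numpy as np

rng = np.random.default_rng(0)

def forward_features(q):
    # Toy forward model: joint angles -> hand position -> 4-D "stereo" feature
    # (a stand-in for pixel coordinates (u_l, v_l, u_r, v_r) from two cameras).
    x = np.cos(q[:, 0]) + np.cos(q[:, 0] + q[:, 1])
    y = np.sin(q[:, 0]) + np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y, x + 0.1 * y, y - 0.1 * x], axis=1)

# Training set: sampled joint configurations and their observed image features.
# q2 is kept positive to avoid the elbow-up/elbow-down ambiguity in this toy arm.
Q = np.column_stack([rng.uniform(-1.2, 1.2, 2000), rng.uniform(0.2, 1.2, 2000)])
X = forward_features(Q)

# One-hidden-layer MLP trained by full-batch gradient descent on MSE loss.
W1 = rng.normal(0, 0.5, (4, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 2)); b2 = np.zeros(2)
lr = 0.05
for _ in range(4000):
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    err = P - Q                               # error in articular space
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1.0 - H ** 2)        # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def virtual_articular(x):
    # Learned map: image-space features -> approximate articular position.
    return np.tanh(x @ W1 + b1) @ W2 + b2

# A position controller can now compare the learned joint estimate of the hand
# with that of the visual target, with no calibration step.
q_true = np.array([[0.4, 0.7]])
q_hat = virtual_articular(forward_features(q_true))
print(np.abs(q_hat - q_true).max())           # small approximation error
```

The point of the sketch is the division of labor the abstract describes: the network absorbs both the camera projection and the inverse kinematics into a single learned mapping, so the controller works entirely in the learned (virtual) articular space.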
