
Multi-view systems for robot-human feedback and object grasping.


Abstract

In human-robot interaction, computer vision technologies are widely used by robotic systems to analyze the target scene and interpret human commands. Commands can be provided through natural interfaces such as body gestures and hand motion. Although interaction through natural interfaces is intuitive, the communication may or may not succeed depending on the operating environment and the robustness of the vision algorithms used. Therefore, some feedback from the robot to the human is necessary to show how the robot understands the human input or the desired behavior. Such feedback also allows human users to respond accordingly and enables further interaction. In this work, we investigate the use of multi-view vision systems that provide such interactive visual feedback to improve human-robot interaction for a specific type of robotic application, namely object grasping.

Two multi-view systems using different visual sensors are developed. The proposed systems use a visual attention model to detect objects that attract the attention of human observers, since such objects are likely to be the desired grasping targets. The systems project visual feedback for the detected objects using a DLP projector. The projected patterns indicate the possible grasping targets of the system and also define the commands it can accept. Human operators choose the grasping target by giving a confirmatory response according to the projected patterns.

One system consists of a stereo camera and a projector; the trifocal tensor is used to match detected objects and determine the feedback patterns. The other system replaces the stereo camera with an RGB-D camera; the 3D structure of the target scene is reconstructed to determine the projected patterns. Both systems allow users to select a grasping target among the detected objects by interacting through the feedback patterns, after which the vision system guides a robotic arm to grasp the selected object.

Object grasping is performed using the Simultaneous Image/Position Visual Servoing method, and an automatic goal pose/image generation method for visual servoing with respect to a projected pattern is proposed. Experimental results demonstrate how the two systems are used for human-robot interaction in robot grasping applications.
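The grasping stage is driven by visual servoing toward a goal defined on the projected pattern. The abstract does not give the control law of the Simultaneous Image/Position Visual Servoing method, so the sketch below shows only a generic image-based visual-servoing step, v = -gain * pinv(L) * (s - s*); the feature points, depths, and gain are hypothetical values for illustration, not numbers from the dissertation.

    import numpy as np

    def interaction_matrix(x, y, Z):
        # Image Jacobian of one normalized image point (x, y) at depth Z:
        # maps the 6-DOF camera twist to the point's image velocity.
        return np.array([
            [-1.0 / Z, 0.0,      x / Z, x * y,       -(1.0 + x * x),  y],
            [0.0,      -1.0 / Z, y / Z, 1.0 + y * y, -x * y,         -x],
        ])

    def ibvs_step(features, goal_features, depths, gain=0.5):
        # Classical IBVS law: v = -gain * pinv(L) @ (s - s*), where s are the
        # current feature points and s* the goal features (e.g. the corners of
        # the projected pattern as seen from the desired grasp pose).
        L = np.vstack([interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(features, depths)])
        error = (np.asarray(features) - np.asarray(goal_features)).ravel()
        return -gain * np.linalg.pinv(L) @ error

    # Hypothetical current/goal features (normalized coordinates) and depths.
    s      = [(0.10, 0.05), (-0.12, 0.04), (0.08, -0.11), (-0.09, -0.10)]
    s_star = [(0.12, 0.06), (-0.10, 0.05), (0.10, -0.09), (-0.07, -0.08)]
    Z      = [0.8, 0.8, 0.8, 0.8]
    print(ibvs_step(s, s_star, Z))   # 6-vector (vx, vy, vz, wx, wy, wz)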

Bibliographic record

  • Author

    Shen, Jinglin.

  • Affiliation

    The University of Texas at Dallas.

  • Degree-granting institution: The University of Texas at Dallas.
  • Subject: Engineering Robotics.
  • Degree: Ph.D.
  • Year: 2013
  • Pagination: 122 p.
  • Total pages: 122
  • Format: PDF
  • Language: English (eng)
  • Chinese Library Classification: Rehabilitation medicine
  • Keywords

