Venue: IEEE/RSJ International Workshop on Intelligent Robots and Systems
Multi-modal human robot interaction for map generation

Abstract

This paper describes an interface for multi-modal human-robot interaction that enables people to introduce a newcomer robot to the attributes of objects and places in a room through speech commands and hand gestures. The robot builds an environment map of the room from knowledge learned through communication with the human and uses this map for navigation. The developed system consists of several modules: natural language processing, posture recognition, object localization, and map generation. The system combines multiple sources of information with model matching to detect and track the human hand, so that the user can point at an object of interest and either guide the robot toward it or have the robot locate that object's position in the room. Object positions are estimated with a monocular camera using a depth-from-focus method.
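The abstract's monocular depth-from-focus idea can be illustrated with a minimal sketch: capture a stack of images at known focus settings, score each with a sharpness measure (variance of the Laplacian is a common choice), and report the calibrated depth of the sharpest frame. The function names and the use of NumPy here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sharpness(image: np.ndarray) -> float:
    """Focus measure: variance of a 4-neighbour Laplacian of the image."""
    lap = (
        -4.0 * image[1:-1, 1:-1]
        + image[:-2, 1:-1] + image[2:, 1:-1]
        + image[1:-1, :-2] + image[1:-1, 2:]
    )
    return float(np.var(lap))

def depth_from_focus(stack, focus_depths):
    """Return the calibrated depth of the focus setting whose image
    is sharpest; `focus_depths[i]` is the depth in focus for `stack[i]`."""
    scores = [sharpness(img) for img in stack]
    return focus_depths[int(np.argmax(scores))]
```

In practice each `focus_depths[i]` would come from a lens calibration, and the sharpness measure would be evaluated only over the image region containing the pointed-at object.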
