IASTED International Conference on Human-Computer Interaction

MULTIMODAL INTERACTION FOR MOBILE ROBOT GUIDANCE



Abstract

Human-robot interaction is a very important issue for autonomous robots, in particular when the targeted environment is more general than strictly constrained, fully controlled factory automation. Modern robots are basically computers equipped with complex actuators and sensing devices: their native communication paradigm can thus be described in terms of tokens and numeric data, performed through a keyboard and a monitor or similar devices. We introduce a multimodal human-computer interface and robot guidance platform that allows a human user to communicate with a robot, associate visual tags and spoken tokens with objects, and ask the robot to perform actions. To illustrate our work, we implemented the presented system on a mobile robot platform, making it autonomous and capable of navigation, object recognition, and manipulation tasks. Experimental results are shown and discussed.
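The paper itself gives no code; as a minimal sketch of the association mechanism the abstract describes (binding a spoken token and a visual tag to an object, then dispatching spoken commands against that memory), with all class and method names hypothetical:

```python
class ObjectMemory:
    """Maps user-taught spoken labels to visual tag IDs."""

    def __init__(self):
        self._by_label = {}

    def associate(self, label, tag_id):
        # Teaching phase: the user utters a label while the
        # camera observes a fiducial tag on the object.
        self._by_label[label] = tag_id

    def resolve(self, label):
        return self._by_label.get(label)


class Robot:
    """Dispatches parsed spoken commands against the object memory."""

    def __init__(self, memory):
        self.memory = memory

    def handle_utterance(self, verb, label):
        # A command such as ('fetch', 'cup') is grounded by looking
        # up the visual tag previously associated with the label.
        tag = self.memory.resolve(label)
        if tag is None:
            return f"unknown object: {label}"
        return f"{verb} object with tag {tag}"


memory = ObjectMemory()
memory.associate("cup", 7)      # user taught: "cup" = visual tag 7
robot = Robot(memory)
print(robot.handle_utterance("fetch", "cup"))   # fetch object with tag 7
print(robot.handle_utterance("fetch", "box"))   # unknown object: box
```

In the actual system the label would come from a speech recognizer and the tag ID from the vision pipeline; the dictionary here only illustrates the grounding step, not the recognition components.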

