Cognitive Systems Research

Object affordance based multimodal fusion for natural Human-Robot interaction

Abstract

Spoken-language-based natural Human-Robot Interaction (HRI) requires robots to understand spoken language and to extract intention-related information from the working scenario. Object affordance recognition is a feasible way to ground the intention-related object in the working environment. To this end, we propose a dataset and a deep CNN based architecture for learning human-centered object affordances. Furthermore, we present an affordance-based multimodal fusion framework that realizes intended-object grasping according to the spoken instructions of human users. The proposed framework contains an intention semantics extraction module that extracts the intention from spoken language, a deep Convolutional Neural Network (CNN) based object affordance recognition module that recognizes human-centered object affordances, and a multimodal fusion module that bridges the extracted intentions and the recognized object affordances. We also carry out multiple intended-object grasping experiments on a PR2 platform to validate the feasibility and practicability of the presented HRI framework. (C) 2018 Elsevier B.V. All rights reserved.
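The three-module pipeline described in the abstract (intention extraction from speech, object affordance recognition, and multimodal fusion) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all function names, the keyword lookup, and the scene representation are hypothetical stand-ins for the paper's language module and deep-CNN recognizer.

```python
# Hypothetical sketch of the affordance-based multimodal fusion pipeline.
# Every name here is illustrative; the paper's actual models are not shown.

from dataclasses import dataclass

@dataclass
class Intention:
    action: str       # e.g. "grasp"
    affordance: str   # e.g. "drinkable"

def extract_intention(utterance: str) -> Intention:
    """Stand-in for the intention semantics extraction module:
    map a spoken instruction to an action/affordance pair."""
    # Toy keyword lookup; the paper uses a dedicated language-understanding module.
    keyword_to_affordance = {"drink": "drinkable", "cut": "cuttable"}
    for word, affordance in keyword_to_affordance.items():
        if word in utterance.lower():
            return Intention(action="grasp", affordance=affordance)
    return Intention(action="grasp", affordance="graspable")

def recognize_affordances(scene_objects):
    """Stand-in for the deep-CNN affordance recognition module:
    return (object_name, affordance_label) pairs for the scene."""
    return list(scene_objects)

def fuse(intention: Intention, detections):
    """Multimodal fusion: ground the intended affordance in the scene
    by matching it against the recognized object affordances."""
    for obj, affordance in detections:
        if affordance == intention.affordance:
            return obj  # the grasp target
    return None

scene = [("knife", "cuttable"), ("cup", "drinkable")]
intention = extract_intention("I want to drink water")
target = fuse(intention, recognize_affordances(scene))
# target -> "cup": the object whose affordance matches the spoken intention
```

The key design point this sketch illustrates is that fusion happens in affordance space: the language side and the vision side each emit affordance labels, so grounding reduces to matching those labels rather than matching raw words to raw pixels.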
