IEEE International Symposium on Robot and Human Interactive Communication

What's “up”? — Resolving interaction ambiguity through non-visual cues for a robotic dressing assistant


Abstract

Robots that can assist in activities of daily living (ADL), such as dressing assistance, need to be capable of intuitive and safe interaction. Vision systems are often used to provide information on the position and movement of the robot and user. However, in a dressing context, technical complexity, occlusion and concerns over user privacy push research to investigate other approaches for human-robot interaction (HRI). We analysed verbal, proprioceptive and force feedback from 18 participants during a human-human dressing experiment in which users received dressing assistance from a researcher mimicking robot behaviour. This paper investigates the occurrence of deictic speech in an assisted-dressing task and how any ambiguity could be resolved to ensure safe and reliable HRI. We focus on one of the most frequently occurring deictic words, “up”, which was captured over 300 times during the experiments and is used as an example of an ambiguous command. We attempt to resolve the ambiguity of these commands through predictive models. These models were used to predict end-effector choice and the direction in which the garment should move. The model for predicting end-effector choice achieved 70.4% accuracy based on the user's head orientation. For predicting garment direction, the model used the angle of the user's arm and achieved 87.8% accuracy. We also found that additional categories, such as the starting position of the user's arms and end-effector height, may improve the accuracy of a predictive model. We present suggestions on how these inputs may be attained through non-visual means, for example through haptic perception of end-effector position, proximity sensors and acoustic source localisation.
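To make the ambiguity-resolution idea concrete, the sketch below shows one hypothetical way the mapping described in the abstract could be structured in code: an ambiguous “up” command is disambiguated using two non-visual cues, head orientation (for end-effector choice) and arm angle (for garment direction). The thresholds, field names and function names are assumptions for illustration only; the paper reports the prediction accuracies but does not publish these decision rules.

```python
from dataclasses import dataclass


@dataclass
class UserState:
    """Non-visual cues available when the user issues an ambiguous command such as "up".

    Both conventions below are hypothetical, not taken from the paper.
    """
    head_yaw_deg: float   # head orientation relative to the robot; negative = turned left
    arm_angle_deg: float  # elevation of the user's arm above horizontal, in degrees


def resolve_up_command(state: UserState) -> dict:
    """Map an ambiguous "up" command to a concrete robot action (illustrative sketch)."""
    # End-effector choice: assume the user tends to look towards the arm being dressed,
    # as suggested by the reported link between head orientation and end-effector choice.
    end_effector = "left_gripper" if state.head_yaw_deg < 0 else "right_gripper"

    # Garment direction: a raised arm suggests pulling the sleeve up along the arm,
    # a lowered arm suggests moving the garment up towards the shoulder.
    direction = "along_arm" if state.arm_angle_deg > 45 else "towards_shoulder"

    return {"end_effector": end_effector, "garment_direction": direction}


if __name__ == "__main__":
    # Example: user looks slightly to the right with the arm held roughly level.
    print(resolve_up_command(UserState(head_yaw_deg=12.0, arm_angle_deg=30.0)))
```

In practice the paper suggests such inputs could come from non-visual sensing, for example haptic perception of end-effector position, proximity sensors or acoustic source localisation, rather than from a camera.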
