IEEE Transactions on Robotics

Data-Driven HRI: Learning Social Behaviors by Example From Human–Human Interaction

Abstract

Recent studies in human-robot interaction (HRI) have investigated ways to harness the power of the crowd for the purpose of creating robot interaction logic through games and teleoperation interfaces. Sensor networks capable of observing human-human interactions in the real world provide a potentially valuable and scalable source of interaction data that can be used for designing robot behavior. To that end, we present here a fully automated method for reproducing observed real-world social interactions with a robot. The proposed method includes techniques for characterizing the speech and locomotion observed in training interactions, using clustering to identify typical behavior elements, and using established HRI proxemics models to identify spatial formations. Behavior logic is learned from discretized actions captured from the sensor data stream, using a naïve Bayesian classifier. Finally, we propose techniques for reproducing robot speech and locomotion behaviors in a robust way, despite the natural variation of human behaviors and the large amount of sensor noise present in speech recognition. We show our technique in use, training a robot to play the role of a shop clerk in a simple camera shop scenario, and we demonstrate through a comparison experiment that our technique successfully enabled the generation of socially appropriate speech and locomotion behavior. Notably, the rate of correct behavior selection achieved by our technique was higher than the success rate of speech recognition alone, indicating its robustness to sensor noise.
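
For illustration, the behavior-learning step described in the abstract can be sketched as a categorical naïve Bayes problem: discretized observations of the customer (for example, a speech cluster ID and a spatial formation label) are mapped to the behavior element the shop clerk executed next in the recorded interactions. The Python sketch below uses scikit-learn's CategoricalNB; it is not the authors' implementation, and the feature encodings, cluster IDs, and behavior labels are hypothetical placeholders.

```python
# Minimal sketch (not the authors' implementation): selecting a robot behavior
# element from discretized sensor observations with a naive Bayes classifier.
# All encodings and labels below are hypothetical placeholders.

import numpy as np
from sklearn.naive_bayes import CategoricalNB

# Each training example pairs a discretized observation of the customer
# (speech cluster ID, spatial formation ID) with the behavior element the
# human shop clerk executed next in the recorded interaction.
# Columns: [customer_speech_cluster, spatial_formation]
X_train = np.array([
    [0, 0],   # greeting utterance, "waiting" formation
    [1, 1],   # question about a camera, "face-to-face" formation
    [2, 2],   # request to see a product, "present object" formation
    [1, 1],
    [0, 0],
])
# Behavior element (joint speech + locomotion cluster) chosen by the clerk.
y_train = np.array([
    "greet_customer",
    "explain_features",
    "guide_to_shelf",
    "explain_features",
    "greet_customer",
])

clf = CategoricalNB()
clf.fit(X_train, y_train)

# At run time, noisy speech recognition output is mapped to its nearest
# speech cluster and tracked positions to a formation label; the classifier
# then proposes the most probable behavior element for the robot.
observation = np.array([[1, 1]])
predicted = clf.predict(observation)[0]
confidence = clf.predict_proba(observation).max()
print(f"robot behavior: {predicted} (p={confidence:.2f})")
```

Because the classifier conditions on several discretized features at once, a misrecognized utterance can still lead to a sensible behavior choice when the spatial formation points to the right context, which is consistent with the abstract's observation that behavior selection outperformed raw speech recognition accuracy.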
