
How People Talk When Teaching a Robot


Abstract

We examine affective vocalizations provided by human teachers to robotic learners. In unscripted one-on-one interactions, participants provided vocal input to a robotic dinosaur as the robot selected toy buildings to knock down. We find that (1) people vary their vocal input depending on the learner's performance history, (2) people do not wait until a robotic learner completes an action before they provide input, and (3) people naively and spontaneously use intensely affective vocalizations. Our findings suggest that traditional machine learning models may need modification to better fit observed human tendencies. Our observations of human behavior contradict the popular assumption made by machine learning algorithms (in particular, reinforcement learning) that the reward function in social learning interactions is stationary and path-independent. We also propose an interaction taxonomy that describes three phases of a human teacher's vocalizations: direction, spoken before an action is taken; guidance, spoken as the learner communicates an intended action; and feedback, spoken in response to a completed action.
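The contrast between the stationarity assumption and the observed teaching behavior can be illustrated with a minimal sketch. This example is not from the paper: the functions `stationary_reward` and `teacher_signal`, and the specific softening rule after repeated failures, are hypothetical, chosen only to show how a history-dependent teacher signal violates the assumption that reward depends on the current state-action pair alone.

```python
def stationary_reward(action, correct_action):
    """Classic RL assumption: reward depends only on the current action."""
    return 1.0 if action == correct_action else -1.0

def teacher_signal(action, correct_action, history):
    """Hypothetical history-dependent signal: this illustrative teacher
    softens negative feedback after a run of failures, one way the
    learner's performance history could shape the reward."""
    base = 1.0 if action == correct_action else -1.0
    recent_failures = sum(1 for r in history[-3:] if r < 0)
    if base < 0 and recent_failures >= 2:
        return -0.5  # gentler correction after repeated mistakes
    return base

# The same wrong action ("A" when "B" is correct) draws different rewards
# over time, so the signal is neither stationary nor path-independent.
history = []
for action, correct in [("A", "B"), ("A", "B"), ("A", "B"), ("B", "B")]:
    history.append(teacher_signal(action, correct, history))

print(history)  # -> [-1.0, -1.0, -0.5, 1.0]
```

A standard reinforcement learner updating against `stationary_reward` would see a fixed target, while one receiving `teacher_signal` sees a moving target, which is the mismatch the abstract's findings point to.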
