International Conference on Robotics and Automation

Robots Learn Social Skills: End-to-End Learning of Co-Speech Gesture Generation for Humanoid Robots



Abstract

Co-speech gestures enhance interaction experiences between humans as well as between humans and robots. Most existing robots use rule-based speech-gesture association, but implementing such rules requires human labor and expert prior knowledge. We present a learning-based co-speech gesture generation model trained end to end on 52 hours of TED talks. The proposed neural network consists of an encoder for speech-text understanding and a decoder that generates a sequence of gestures. The model successfully produces a variety of gestures, including iconic, metaphoric, deictic, and beat gestures. In a subjective evaluation, participants reported that the generated gestures were human-like and matched the speech content. We also demonstrate co-speech gesture generation running in real time on a NAO robot.
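The encoder-decoder structure described in the abstract can be sketched as follows. This is a minimal illustrative toy, not the paper's actual model: all dimensions, parameter names, and the plain tanh-RNN cell are assumptions made for brevity. The real system maps speech text to humanoid joint trajectories at much larger scale.

```python
import numpy as np

# Illustrative sketch of a text-to-gesture encoder-decoder.
# Dimensions and the simple tanh-RNN cell are hypothetical choices.
rng = np.random.default_rng(0)

EMB = 16    # word-embedding size (assumed)
HID = 32    # recurrent hidden-state size (assumed)
POSE = 8    # values per output pose frame (assumed)

def rnn_step(x, h, Wx, Wh, b):
    """One recurrent step: mix input x into the previous hidden state h."""
    return np.tanh(x @ Wx + h @ Wh + b)

# Encoder parameters: compress the speech-text tokens into a context state.
Wx_e = rng.normal(0, 0.1, (EMB, HID))
Wh_e = rng.normal(0, 0.1, (HID, HID))
b_e = np.zeros(HID)

# Decoder parameters: unroll a pose sequence from that context state.
Wx_d = rng.normal(0, 0.1, (POSE, HID))
Wh_d = rng.normal(0, 0.1, (HID, HID))
b_d = np.zeros(HID)
W_out = rng.normal(0, 0.1, (HID, POSE))

def generate_gesture(token_embeddings, n_frames):
    """Encode a word-embedding sequence, then decode n_frames pose vectors."""
    h = np.zeros(HID)
    for x in token_embeddings:          # encoder: read the speech text
        h = rnn_step(x, h, Wx_e, Wh_e, b_e)
    frames, pose = [], np.zeros(POSE)
    for _ in range(n_frames):           # decoder: autoregressive pose output
        h = rnn_step(pose, h, Wx_d, Wh_d, b_d)
        pose = h @ W_out
        frames.append(pose)
    return np.stack(frames)

# A 5-word utterance decoded into 10 pose frames.
words = rng.normal(size=(5, EMB))
motion = generate_gesture(words, n_frames=10)
print(motion.shape)  # (10, 8)
```

In a trained system the decoder's output frames would drive the robot's joints; here the untrained weights simply demonstrate the data flow from a token sequence to a pose sequence.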


