Proceedings of the 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2001)

Human-robot interaction through real-time auditory and visual multiple-talker tracking



Abstract

Nakadai et al. (2001) have developed a real-time auditory and visual multiple-talker tracking technique. In this paper, the technique is applied to human-robot interaction, with the robot acting as a receptionist and as a companion at a party. The system performs face identification, speech recognition, focus-of-attention control, and the sensorimotor task of tracking multiple talkers. It is implemented on an upper-torso humanoid, and talker tracking is achieved by distributed processing on three nodes connected by a 100Base-TX network, with a tracking delay of 200 ms. Focus of attention is controlled by associating auditory and visual streams, using the sound-source direction and the talker's position as cues. Once an association is established, the humanoid keeps its face turned toward the associated talker.
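The audio-visual association described in the abstract can be sketched in simplified form: match the estimated sound-source direction against the azimuths of visually tracked talkers and turn the head toward the matched one. This is an illustrative sketch, not the authors' implementation; the angular threshold, the representation of talkers as plain azimuth angles, and the function names are all assumptions made for the example.

```python
# Illustrative sketch (not the authors' implementation): associate an
# auditory stream (a sound-source direction in degrees) with one of the
# visual talker streams (face positions as azimuth angles) by nearest
# angular distance, then steer the head toward the associated talker.

ASSOC_THRESHOLD_DEG = 10.0  # assumed tolerance for audio-visual matching


def angular_diff(a, b):
    """Smallest absolute difference between two azimuths, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)


def associate(sound_dir, talker_positions):
    """Index of the visual talker closest to the sound-source direction,
    or None if no talker lies within the association threshold."""
    best_idx, best_diff = None, ASSOC_THRESHOLD_DEG
    for i, pos in enumerate(talker_positions):
        d = angular_diff(sound_dir, pos)
        if d <= best_diff:
            best_idx, best_diff = i, d
    return best_idx


def focus_of_attention(sound_dir, talker_positions, current_heading):
    """Once an association is established, face the associated talker;
    otherwise keep the current heading."""
    idx = associate(sound_dir, talker_positions)
    return talker_positions[idx] if idx is not None else current_heading
```

For example, with talkers at azimuths 5, 28, and 90 degrees and a sound source at 30 degrees, the stream at 28 degrees is associated and the head turns toward it; a sound from 180 degrees matches no talker and the heading is unchanged.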
