International Conference on Intelligent Robotics and Systems

Multimodal Word Learning from Infant Directed Speech

Abstract

When adults talk to infants, they do so in a different way than when they communicate with other adults. This kind of Infant Directed Speech (IDS) typically highlights target words using focal stress and utterance-final position. Speech directed to infants also often refers to objects, people and events in the world surrounding the infant. Because of this, the sound sequences the infant hears are very likely to co-occur with actual objects or events in the infant's visual field. In this work we present a model that, by taking advantage of these characteristics of IDS, is able to learn word-like structures from multimodal information sources without any pre-programmed linguistic knowledge. The model is implemented on a humanoid robot platform and is able to extract word-like patterns and associate them with objects in the visual surroundings.
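The sketch below is a minimal illustration, not the paper's actual implementation, of the kind of cross-modal co-occurrence learning the abstract describes: word-like acoustic patterns (assumed here to already be segmented into symbolic IDs) are associated with concurrently visible objects by counting co-occurrences, with the utterance-final pattern weighted more heavily because IDS tends to place the target word there. The class and method names, and the 2:1 weighting, are assumptions made purely for illustration.

```python
# A minimal sketch (assumed, not the authors' implementation) of word-object
# association driven by Infant Directed Speech cues: utterances are given as
# lists of word-like pattern IDs, and the objects visible to the robot during
# each utterance are given as labels.
from collections import defaultdict


class CooccurrenceWordLearner:
    def __init__(self):
        # counts[pattern][obj] = how often the pattern and object co-occurred
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, utterance_patterns, visible_objects):
        """Update co-occurrence counts for one multimodal observation."""
        if not utterance_patterns or not visible_objects:
            return
        # IDS cue: the utterance-final pattern is the most likely target word,
        # so weight it more strongly (the 2:1 ratio is an arbitrary assumption).
        target = utterance_patterns[-1]
        for obj in visible_objects:
            self.counts[target][obj] += 2
            for pattern in utterance_patterns[:-1]:
                self.counts[pattern][obj] += 1

    def best_referent(self, pattern):
        """Return the object most strongly associated with a word-like pattern."""
        objs = self.counts.get(pattern)
        if not objs:
            return None
        return max(objs, key=objs.get)


if __name__ == "__main__":
    learner = CooccurrenceWordLearner()
    # Toy interaction: caregiver-style utterances with the named object in view.
    learner.observe(["look", "at", "the", "ball"], ["ball", "table"])
    learner.observe(["ball"], ["ball"])
    learner.observe(["where", "is", "the", "cup"], ["cup", "table"])
    learner.observe(["cup"], ["cup"])
    print(learner.best_referent("ball"))  # -> ball
    print(learner.best_referent("cup"))   # -> cup
```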
