
Multimodal word learning from Infant Directed Speech

Abstract

When adults talk to infants, they do so differently than when they communicate with other adults. This kind of infant directed speech (IDS) typically highlights target words using focal stress and utterance-final position. In addition, speech directed to infants often refers to objects, people, and events in the world surrounding the infant. As a result, the sound sequences the infant hears are very likely to co-occur with actual objects or events in the infant's visual field. In this work we present a model that is able to learn word-like structures from multimodal information sources without any pre-programmed linguistic knowledge, by taking advantage of the characteristics of IDS. The model is implemented on a humanoid robot platform and is able to extract word-like patterns and associate them with objects in the visual surroundings.
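The core idea of cross-modal co-occurrence learning described above can be illustrated with a minimal sketch. This is not the paper's actual algorithm (the episode data, pattern names, and scoring rule are invented for illustration): each learning episode pairs a candidate word-like speech pattern with the set of objects visible at that moment, and a pattern is associated with the object it co-occurs with most reliably.

```python
from collections import Counter

# Illustrative sketch only: associate recurring speech patterns with
# visual objects by co-occurrence counting. Each episode pairs a
# candidate word-like pattern with the objects visible in the scene.
episodes = [
    ("ball",  {"ball", "table"}),
    ("ball",  {"ball"}),
    ("doggy", {"dog", "table"}),
    ("ball",  {"ball", "dog"}),
    ("doggy", {"dog"}),
]

pair_counts = Counter()     # (pattern, object) co-occurrence counts
pattern_counts = Counter()  # how often each pattern was heard
object_counts = Counter()   # how often each object was visible

for pattern, objects in episodes:
    pattern_counts[pattern] += 1
    for obj in objects:
        object_counts[obj] += 1
        pair_counts[(pattern, obj)] += 1

def best_referent(pattern):
    """Return the object with the highest P(object | pattern)."""
    scores = {obj: pair_counts[(pattern, obj)] / pattern_counts[pattern]
              for obj in object_counts}
    return max(scores, key=scores.get)

print(best_referent("ball"))   # -> ball
print(best_referent("doggy"))  # -> dog
```

Because distractor objects (here, "table") co-occur with a pattern only sporadically, the conditional co-occurrence score converges on the true referent as episodes accumulate, which is the intuition behind exploiting IDS co-occurrence statistics.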

