Journal of Cognitive Neuroscience

Hearing Faces: How the Infant Brain Matches the Face It Sees with the Speech It Hears

Abstract

Speech is not a purely auditory signal. From around 2 months of age, infants are able to correctly match the vowel they hear with the appropriate articulating face. However, there is no behavioral evidence of integrated audiovisual perception until 4 months of age, at the earliest, when an illusory percept can be created by the fusion of the auditory stimulus and the facial cues (McGurk effect). To understand how infants initially match the articulatory movements they see with the sounds they hear, we recorded high-density ERPs in 10-week-old infants in response to auditory vowels that followed a congruent or incongruent silently articulating face. In a first experiment, we determined that auditory–visual integration occurs during the early stages of perception, as in adults: the mismatch response was similar in timing and in topography whether the preceding vowels were presented visually or aurally. In a second experiment, we studied audiovisual integration in the linguistic (vowel perception) and nonlinguistic (gender perception) domains. We observed a mismatch response for both types of change at similar latencies, but their topographies were significantly different, demonstrating that cross-modal integration of these features is computed in parallel by two different networks. Indeed, brain source modeling revealed that phoneme and gender computations were lateralized toward the left and right hemispheres, respectively, suggesting that each hemisphere possesses an early processing bias. We also observed repetition suppression in temporal regions and repetition enhancement in frontal regions. These results underscore how complex and structured the human cortical organization sustaining communication is from the first weeks of life onward.
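
The mismatch response at the center of both experiments is a deviant-minus-standard difference wave. As a brief illustration only, and not the authors' pipeline, the sketch below shows how such a difference wave might be computed with MNE-Python; the epochs file name, the condition labels ("congruent", "incongruent"), and the plotting latencies are hypothetical assumptions.

    # A minimal sketch of a deviant-minus-standard difference wave
    # ("mismatch response") with MNE-Python. Illustrative only: the
    # epochs file and condition labels below are hypothetical.
    import mne

    # Hypothetical preprocessed epochs: auditory vowels preceded by a
    # congruent (standard) or incongruent (deviant) articulating face.
    epochs = mne.read_epochs("infant_av_epochs-epo.fif")

    # Average each condition to obtain its ERP.
    standard = epochs["congruent"].average()
    deviant = epochs["incongruent"].average()

    # Mismatch response = deviant minus standard. Its latency and scalp
    # topography can then be compared across change types (e.g., vowel
    # vs. gender), as in the experiments summarized above.
    mismatch = mne.combine_evoked([deviant, standard], weights=[1, -1])
    mismatch.plot_joint(times=[0.3, 0.5])  # assumed latencies of interest

Comparing the topography of this difference wave across conditions (visual vs. auditory standards, phoneme vs. gender changes) is what supports the abstract's claim that the two features are integrated by distinct, parallel networks.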