The Journal of Neuroscience

Congruent Visual Speech Enhances Cortical Entrainment to Continuous Auditory Speech in Noise-Free Conditions


Abstract

Congruent audiovisual speech enhances our ability to comprehend a speaker, even in noise-free conditions. When incongruent auditory and visual information is presented concurrently, it can hinder a listener's perception and even cause him or her to perceive information that was not presented in either modality. Efforts to investigate the neural basis of these effects have often focused on the special case of discrete audiovisual syllables that are spatially and temporally congruent, with less work done on the case of natural, continuous speech. Recent electrophysiological studies have demonstrated that cortical response measures to continuous auditory speech can be easily obtained using multivariate analysis methods. Here, we apply such methods to the case of audiovisual speech and, importantly, present a novel framework for indexing multisensory integration in the context of continuous speech. Specifically, we use electroencephalography to examine how the temporal and contextual congruency of ongoing audiovisual speech affects the cortical encoding of the speech envelope in humans. We demonstrate that the cortical representation of the speech envelope is enhanced by the presentation of congruent audiovisual speech in noise-free conditions. Furthermore, we show that this enhancement is likely attributable to the contribution of neural generators that are not particularly active during unimodal stimulation, and that it is most prominent at the temporal scale corresponding to syllabic rate (2–6 Hz). Finally, our data suggest that neural entrainment to the speech envelope is inhibited when the auditory and visual streams are incongruent both temporally and contextually.

SIGNIFICANCE STATEMENT Seeing a speaker's face as he or she talks can greatly help in understanding what the speaker is saying. This is because the speaker's facial movements relay information not only about what the speaker is saying but also, importantly, about when he or she is saying it. Studying how the brain uses this timing relationship to combine information from continuous auditory and visual speech has traditionally been methodologically difficult. Here we introduce a new approach for doing this using relatively inexpensive and noninvasive scalp recordings. Specifically, we show that the brain's representation of auditory speech is enhanced when the accompanying visual speech signal shares the same timing. Furthermore, we show that this enhancement is most pronounced at a time scale that corresponds to mean syllable length.
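The abstract does not spell out which multivariate analysis method is used, but work in this area commonly indexes cortical entrainment via ridge-regression stimulus reconstruction (a backward model): the speech envelope is decoded from multichannel EEG, and the decoding accuracy serves as the entrainment measure. Note also that the 2–6 Hz band named in the abstract corresponds to periods of roughly 167–500 ms, consistent with typical syllable durations. The sketch below (Python with NumPy/SciPy) shows the general shape of such an analysis on synthetic stand-in data; all sampling rates, lag windows, regularization values, and data shapes are illustrative assumptions, not the authors' pipeline.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert


def speech_envelope(audio, fs_audio, fs_eeg, band=(2.0, 6.0)):
    """Amplitude envelope of the speech signal, band-limited to the syllabic rate."""
    env = np.abs(hilbert(audio))              # Hilbert amplitude envelope
    step = int(round(fs_audio / fs_eeg))
    env = env[::step]                         # crude downsample to the EEG rate
    b, a = butter(3, [band[0] / (fs_eeg / 2),
                      band[1] / (fs_eeg / 2)], btype="band")
    return filtfilt(b, a, env)                # isolate the 2-6 Hz syllabic scale


def lag_matrix(eeg, max_lag):
    """Stack time-lagged copies of each channel: EEG at t..t+max_lag predicts s(t)."""
    n, n_ch = eeg.shape
    X = np.zeros((n, n_ch * (max_lag + 1)))
    for lag in range(max_lag + 1):
        X[: n - lag, lag * n_ch:(lag + 1) * n_ch] = eeg[lag:]
    return X


def ridge_decoder(eeg, env, max_lag, lam=1e3):
    """Decoder weights w minimizing ||Xw - env||^2 + lam * ||w||^2."""
    X = lag_matrix(eeg, max_lag)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ env)


# Demo on synthetic data; random signals stand in for real speech and EEG.
fs_audio, fs_eeg = 8000, 64                   # assumed sampling rates (Hz)
rng = np.random.default_rng(0)
env_train = speech_envelope(rng.standard_normal(60 * fs_audio), fs_audio, fs_eeg)
env_test = speech_envelope(rng.standard_normal(30 * fs_audio), fs_audio, fs_eeg)
eeg_train = rng.standard_normal((60 * fs_eeg, 32))   # 60 s, 32 channels
eeg_test = rng.standard_normal((30 * fs_eeg, 32))

max_lag = int(0.25 * fs_eeg)                  # integrate EEG over 0-250 ms lags
w = ridge_decoder(eeg_train, env_train, max_lag)
env_rec = lag_matrix(eeg_test, max_lag) @ w

# Entrainment index: correlation between reconstructed and actual envelope.
# With real recordings (random data here, so r is near zero), comparing this
# index across congruent, incongruent, and unimodal conditions would quantify
# the multisensory enhancement the paper reports.
r = np.corrcoef(env_rec, env_test)[0, 1]
print(f"reconstruction accuracy r = {r:.3f}")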
