The Journal of Neuroscience: The Official Journal of the Society for Neuroscience

Congruent Visual Speech Enhances Cortical Entrainment to Continuous Auditory Speech in Noise-Free Conditions



Abstract

Congruent audiovisual speech enhances our ability to comprehend a speaker, even in noise-free conditions. When incongruent auditory and visual information is presented concurrently, it can hinder a listener's perception and even cause him or her to perceive information that was not presented in either modality. Efforts to investigate the neural basis of these effects have often focused on the special case of discrete audiovisual syllables that are spatially and temporally congruent, with less work done on the case of natural, continuous speech. Recent electrophysiological studies have demonstrated that cortical response measures to continuous auditory speech can be easily obtained using multivariate analysis methods. Here, we apply such methods to the case of audiovisual speech and, importantly, present a novel framework for indexing multisensory integration in the context of continuous speech. Specifically, we examine how the temporal and contextual congruency of ongoing audiovisual speech affects the cortical encoding of the speech envelope in humans using electroencephalography. We demonstrate that the cortical representation of the speech envelope is enhanced by the presentation of congruent audiovisual speech in noise-free conditions. Furthermore, we show that this is likely attributable to the contribution of neural generators that are not particularly active during unimodal stimulation and that it is most prominent at the temporal scale corresponding to syllabic rate (2-6 Hz). Finally, our data suggest that neural entrainment to the speech envelope is inhibited when the auditory and visual streams are incongruent both temporally and contextually.
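The abstract refers to multivariate analysis methods for indexing how strongly cortical activity tracks the speech envelope. The sketch below is not the authors' pipeline; it is a minimal Python illustration, on synthetic data, of one such approach: extracting a speech envelope band-passed to the syllabic range (2-6 Hz) and fitting a ridge-regression decoder that reconstructs it from time-lagged EEG, so that the held-out reconstruction correlation can serve as an envelope-tracking index. The sampling rate, lag window, regularization strength, and all variable names are assumptions chosen for the example.

```python
"""
Illustrative sketch (not the authors' code): index cortical tracking of the
speech envelope with a ridge-regression decoder that maps time-lagged EEG
back onto the 2-6 Hz speech envelope. All parameters here are assumptions.
"""
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 64                                  # assumed common sampling rate (Hz)
n_chan = 32                              # assumed number of EEG channels
lags = np.arange(0, int(0.25 * fs))      # EEG lags spanning roughly 0-230 ms


def speech_envelope(audio, audio_fs, out_fs=fs, band=(2.0, 6.0)):
    """Amplitude envelope, band-passed to the syllabic rate (2-6 Hz)."""
    env = np.abs(hilbert(audio))                     # analytic amplitude
    env = env[:: int(audio_fs // out_fs)]            # crude downsample for the sketch
    b, a = butter(2, [band[0] / (out_fs / 2), band[1] / (out_fs / 2)], btype="band")
    return filtfilt(b, a, env)


def lag_matrix(eeg, lags):
    """Stack time-lagged copies of every channel: shape (time, channels * lags)."""
    n_t, n_ch = eeg.shape
    X = np.zeros((n_t, n_ch * len(lags)))
    for i, lag in enumerate(lags):
        X[lag:, i * n_ch:(i + 1) * n_ch] = eeg[: n_t - lag]
    return X


def fit_decoder(eeg, envelope, lam=1e2):
    """Ridge regression from lagged EEG to the speech envelope."""
    X = lag_matrix(eeg, lags)
    XtX = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ envelope)


def reconstruction_accuracy(eeg, envelope, weights):
    """Pearson correlation between reconstructed and actual envelope."""
    recon = lag_matrix(eeg, lags) @ weights
    return np.corrcoef(recon, envelope)[0, 1]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_t = 60 * fs                                    # one minute of synthetic data
    env = speech_envelope(rng.standard_normal(60 * 8000), 8000)[:n_t]
    eeg = rng.standard_normal((n_t, n_chan)) * 0.5
    eeg[:, 0] += np.roll(env, int(0.1 * fs))         # embed a delayed envelope response
    w = fit_decoder(eeg[: n_t // 2], env[: n_t // 2])        # train on first half
    print("held-out reconstruction r =",
          reconstruction_accuracy(eeg[n_t // 2:], env[n_t // 2:], w))
```

Comparing such reconstruction (or forward-model prediction) accuracies across congruent, incongruent, and unimodal conditions is the kind of contrast the framework described in the abstract is built around.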
