
Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study


Abstract

Observable lip movements of the speaker influence perception of auditory speech. A classic example of this influence is reported by listeners who perceive an illusory (cross-modal) speech sound (the McGurk effect) when presented with incongruent audio-visual (AV) speech stimuli. Recent neuroimaging studies of AV speech perception accentuate the role of frontal, parietal, and integrative brain sites in the vicinity of the superior temporal sulcus (STS) in multisensory speech perception. However, whether and how the network across the whole brain participates in multisensory perceptual processing remains an open question. We posit that large-scale functional connectivity among neural populations situated in distributed brain sites may provide valuable insights into the processing and fusion of AV speech. Varying the psychophysical parameters in tandem with electroencephalogram (EEG) recordings, we exploited the trial-by-trial perceptual variability of incongruent AV speech stimuli to identify the characteristics of the large-scale cortical network that facilitates multisensory perception during synchronous and asynchronous AV speech. We evaluated the spectral landscape of EEG signals during multisensory speech perception at varying AV lags. Functional connectivity dynamics for all sensor pairs were computed using time-frequency global coherence, the vector sum of pairwise coherence changes over time. During synchronous AV speech, we observed enhanced global gamma-band coherence and decreased alpha- and beta-band coherence underlying cross-modal (illusory) perception, compared to unisensory perception, within a temporal window of 300–600 ms following stimulus onset. During asynchronous speech stimuli, global broadband coherence was observed during cross-modal perception at earlier times, along with pre-stimulus decreases in lower-frequency power, e.g., alpha rhythms for positive AV lags and theta rhythms for negative AV lags. Thus, our study indicates that the temporal integration underlying multisensory speech perception needs to be understood within the framework of large-scale functional brain network mechanisms, in addition to the established cortical loci of multisensory speech perception.
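The connectivity measure described above (pairwise time-frequency coherence pooled across all EEG sensor pairs into a global coherence map) can be sketched in a few lines. The snippet below is only an illustration under stated assumptions, not the authors' code: trials are assumed to be arranged as (n_trials, n_channels, n_samples), coherence is estimated from trial-averaged short-time cross-spectra, a plain average over pairs stands in for the paper's exact "vector sum" aggregation, and the function name timefreq_global_coherence is hypothetical.

```python
import numpy as np
from scipy.signal import stft

def timefreq_global_coherence(eeg, fs, nperseg=256, noverlap=192):
    """Time-frequency global coherence across all EEG sensor pairs (sketch).

    eeg : array, shape (n_trials, n_channels, n_samples)
    fs  : sampling rate in Hz
    Returns freqs, times, and an (n_freqs, n_times) global coherence map.
    """
    n_trials, n_channels, _ = eeg.shape

    # Short-time Fourier transform of every trial and channel.
    freqs, times, Z = stft(eeg, fs=fs, nperseg=nperseg,
                           noverlap=noverlap, axis=-1)
    # Z has shape (n_trials, n_channels, n_freqs, n_times).

    # Trial-averaged cross-spectral matrix S[i, j, f, t] = <Z_i Z_j*>.
    S = np.einsum('acft,adft->cdft', Z, np.conj(Z)) / n_trials
    auto = np.real(np.einsum('ccft->cft', S))  # auto-spectra per channel

    # Magnitude-squared coherence for each channel pair, pooled over pairs.
    global_coh = np.zeros(S.shape[2:])
    n_pairs = 0
    for i in range(n_channels):
        for j in range(i + 1, n_channels):
            global_coh += np.abs(S[i, j]) ** 2 / (auto[i] * auto[j] + 1e-20)
            n_pairs += 1

    # Simple average over pairs; the paper's exact aggregation ("vector sum
    # of pairwise coherence") is assumed, not reproduced here.
    return freqs, times, global_coh / n_pairs
```

Band-specific effects such as those reported in the abstract (e.g., gamma-band increases 300–600 ms post-stimulus) would then correspond to averaging this map over the relevant frequency rows and time columns.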
