Frontiers in Human Neuroscience

Converging Evidence From Electrocorticography and BOLD fMRI for a Sharp Functional Boundary in Superior Temporal Gyrus Related to Multisensory Speech Processing



Abstract

Although humans can understand speech using the auditory modality alone, in noisy environments visual speech information from the talker’s mouth can rescue otherwise unintelligible auditory speech. To investigate the neural substrates of multisensory speech perception, we compared neural activity from the human superior temporal gyrus (STG) in two datasets. One dataset consisted of direct neural recordings (electrocorticography, ECoG) from surface electrodes implanted in epilepsy patients (this dataset has been previously published). The second dataset consisted of indirect measures of neural activity using blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI). Both ECoG and fMRI participants viewed the same clear and noisy audiovisual speech stimuli and performed the same speech recognition task. Both techniques demonstrated a sharp functional boundary in the STG, spatially coincident with an anatomical boundary defined by the posterior edge of Heschl’s gyrus. Cortex on the anterior side of the boundary responded more strongly to clear audiovisual speech than to noisy audiovisual speech while cortex on the posterior side of the boundary did not. For both ECoG and fMRI measurements, the transition between the functionally distinct regions happened within 10 mm of anterior-to-posterior distance along the STG. We relate this boundary to the multisensory neural code underlying speech perception and propose that it represents an important functional division within the human speech perception network.
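The "sharp functional boundary" claim rests on how quickly the clear-versus-noisy response difference changes along the anterior-posterior axis of the STG. A minimal, purely illustrative sketch of that kind of analysis is to fit a logistic (sigmoid) function to the response difference as a function of position and read off the boundary location and transition width. The data, variable names, and parameter choices below are hypothetical, not taken from the paper's analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, x0, k, a, b):
    # a + b / (1 + exp(-k*(x - x0))): baseline a, step height b,
    # boundary location x0 (mm), steepness k
    return a + b / (1.0 + np.exp(-k * (x - x0)))

# Synthetic electrode data: anterior-to-posterior position (mm) along the STG,
# zeroed at the posterior edge of Heschl's gyrus, and the clear-minus-noisy
# response difference at each recording site.
rng = np.random.default_rng(0)
pos = np.linspace(-20.0, 20.0, 40)
true_diff = sigmoid(pos, 0.0, -1.5, 0.1, 1.0)   # anterior sites respond more
resp = true_diff + rng.normal(0.0, 0.05, pos.size)

params, _ = curve_fit(sigmoid, pos, resp, p0=[0.0, -1.0, 0.0, 1.0])
x0, k = params[0], params[1]
# Transition width: distance over which the sigmoid rises from 25% to 75%
width = abs(2.0 * np.log(3.0) / k)
print(f"boundary at {x0:.1f} mm, 25-75% transition over {width:.1f} mm")
```

A transition width well under 10 mm from such a fit would be consistent with the sharp boundary the abstract describes; a gradual gradient would instead yield a small |k| and a wide transition.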
