
Functional Connectivity between Face-Movement and Speech-Intelligibility Areas during Auditory-Only Speech Perception


Abstract

It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 controls, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed, in the MRI scanner, by an auditory-only speech recognition task and a control task (voice recognition), both using the learned speakers’ voices. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between controls and prosopagnosics. This was expected, because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech, and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.
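
The connectivity measure at the heart of the study quantifies how strongly the activity of two brain regions covaries over time. As a minimal conceptual sketch (not the authors' actual pipeline, which analyzed real fMRI data; task-dependent connectivity is often modeled with psychophysiological-interaction regressions rather than a plain correlation), the Python snippet below computes seed-based functional connectivity as the Pearson correlation between two synthetic ROI time series. The ROI names, volume count, and data are all hypothetical.

    # Minimal sketch: functional connectivity as the Pearson correlation
    # between the mean BOLD time series of two regions of interest (ROIs).
    # All names and data below are synthetic, for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    n_volumes = 200  # hypothetical number of fMRI volumes in one run

    # Hypothetical ROI time series: face-movement sensitive posterior STS
    # and the speech-intelligibility sensitive anterior STS.
    posterior_sts = rng.standard_normal(n_volumes)
    anterior_sts = 0.5 * posterior_sts + rng.standard_normal(n_volumes)

    # Functional connectivity estimate for this condition.
    r = np.corrcoef(posterior_sts, anterior_sts)[0, 1]
    print(f"pSTS-aSTS functional connectivity (Pearson r): {r:.2f}")

In these terms, the familiarity effect the paper reports would appear as a higher connectivity estimate during speech recognition for speakers learned with voice-face training than for those learned with voice-occupation training.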
