Journal of Cognitive Neuroscience

Inside Speech: Multisensory and Modality-specific Processing of Tongue and Lip Speech Actions


Abstract

Action recognition has been found to rely not only on sensory brain areas but also partly on the observer's motor system. However, whether distinct auditory and visual experiences of an action modulate sensorimotor activity remains largely unknown. In the present sparse-sampling fMRI study, we determined to what extent sensory and motor representations interact during the perception of tongue and lip speech actions. Tongue and lip speech actions were selected because an interlocutor's tongue movements are accessible via their impact on speech acoustics but are not visible, given the tongue's position inside the vocal tract, whereas lip movements are both audible and visible. Participants were presented with auditory, visual, and audiovisual speech actions, with the visual inputs showing either a sagittal view of a speaker's tongue movements or a facial view of the speaker's lip movements, previously recorded by an ultrasound imaging system and a video camera, respectively. Although the neural networks involved in visuolingual and visuofacial perception largely overlapped, stronger motor and somatosensory activations were observed during visuolingual perception. In contrast, stronger activity was found in auditory and visual cortices during visuofacial perception. Complementing these findings, activity in the left premotor cortex and in visual brain areas was found to correlate with visual recognition scores for visuolingual and visuofacial speech stimuli, respectively, whereas visual activity correlated with reaction times (RTs) for both types of stimuli. These results suggest that unimodal and multimodal processing of lip and tongue speech actions relies on common sensorimotor brain areas. They also suggest that visual processing of audible but not visible movements induces motor and visual mental simulation of the perceived actions, to facilitate recognition and/or to learn the association between auditory and visual signals.
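The brain-behavior link reported in the abstract is an across-participant correlation between regional activity estimates and behavioral performance. A minimal sketch of that kind of analysis is given below, assuming hypothetical per-participant ROI activity values (e.g., beta weights from the left premotor cortex) and visual recognition scores; all variable names and values here are illustrative, not the authors' actual pipeline.

# Minimal sketch: across-participant correlation between ROI activity
# and behavioral recognition scores, as in the abstract's analysis.
# All data below are simulated placeholders for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: one value per participant (n = 20 here).
premotor_activity = rng.normal(loc=1.0, scale=0.3, size=20)        # ROI beta estimates
recognition_score = 0.5 * premotor_activity + rng.normal(scale=0.2, size=20)

# Pearson correlation between ROI activity and behavioral performance.
r, p = stats.pearsonr(premotor_activity, recognition_score)
print(f"r = {r:.2f}, p = {p:.3f}")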
