Annual Conference of the International Speech Communication Association

Audiovisual discrimination of CV syllables: a simultaneous fMRI-EEG study


Abstract

We carried out a simultaneous fMRI-EEG experiment based on discriminating syllabic minimal pairs involving three phonological contrasts characterized by different degrees of visual distinctiveness (vocalic labialization, consonantal place of articulation, or voicing). Audiovisual CV syllable pairs were presented either with a static facial configuration or with a dynamic display of articulatory movements. In the sound-disturbed MRI environment, the significant improvement in syllabic discrimination achieved in the dynamic audiovisual modality, compared to the static audiovisual modality, was associated with bilateral activation of the occipito-temporal cortex (MT+/V5) and with activation of the left premotor cortex. MT+/V5 was activated in response to facial movements independently of their relation to speech, whereas the left premotor cortex was specifically activated by phonological discrimination. Significant ERPs related to syllabic discrimination were recorded around 150 and 250 ms. Our results argue for the involvement of the speech motor cortex in phonological discrimination and suggest a multimodal representation of speech units.
