NeuroImage

Phonetic processing areas revealed by sinewave speech and acoustically similar non-speech.

Abstract

The neural substrates underlying speech perception are still not well understood. Previously, we found dissociation of speech and nonspeech processing at the earliest cortical level (AI, primary auditory cortex), using speech and nonspeech complexity dimensions. Acoustic differences between speech and nonspeech stimuli in imaging studies, however, confound the search for linguistic-phonetic regions. In the present study, we used sinewave speech (SWsp) and nonspeech (SWnon), which replace speech formants with sinewave tones, in order to match acoustic spectral and temporal complexity while contrasting phonetics. Chord progressions (CP) were used to remove the effects of auditory coherence and object processing. Twelve normal right-handed (RH) volunteers were scanned with fMRI while listening to SWsp, SWnon, CP, and a baseline condition arranged in blocks. Only two brain regions, in the bilateral superior temporal sulcus and extending more posteriorly on the left, were found to prefer the SWsp condition after accounting for acoustic modulation and coherence effects. Two regions responded preferentially to the more frequency-modulated stimuli: one overlapped the right temporal phonetic area, and another lay in the left angular gyrus, far from the phonetic area. These findings are proposed to form the basis for the two subtypes of auditory word deafness. Several brain regions, including auditory and non-auditory areas, preferred the coherent auditory stimuli and are likely involved in auditory object recognition. The design of the current study allowed for separation of acoustic spectrotemporal, object recognition, and phonetic effects, resulting in distinct and overlapping components.
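The SWsp and SWnon stimuli rest on the standard sinewave-speech construction: the formant tracks of an utterance are replaced by a small number of time-varying sine tones, preserving the spectrotemporal trajectories while removing the broadband, harmonic cues of natural speech. The sketch below is an illustration of that construction only, not the authors' stimulus-generation code; it assumes formant frequency and amplitude tracks (e.g., F1-F3, one value per analysis frame from an external formant tracker such as Praat) are already available, and the function name and parameters are hypothetical.

```python
import numpy as np

def sinewave_speech(formant_freqs, formant_amps, frame_rate, sample_rate=16000):
    """Resynthesize an utterance as a sum of formant-tracking sine tones.

    formant_freqs, formant_amps: arrays of shape (n_frames, n_formants)
    holding per-frame formant center frequencies (Hz) and amplitudes,
    e.g. F1-F3 estimated by an external formant tracker (hypothetical input).
    frame_rate: analysis frames per second of the formant tracks.
    """
    n_frames, n_formants = formant_freqs.shape
    n_samples = int(n_frames / frame_rate * sample_rate)
    t = np.arange(n_samples) / sample_rate
    frame_times = np.arange(n_frames) / frame_rate

    out = np.zeros(n_samples)
    for k in range(n_formants):
        # Upsample the frame-level tracks to per-sample trajectories.
        freq = np.interp(t, frame_times, formant_freqs[:, k])
        amp = np.interp(t, frame_times, formant_amps[:, k])
        # Integrate instantaneous frequency to get a continuous phase,
        # so each tone glides smoothly along its formant track.
        phase = 2.0 * np.pi * np.cumsum(freq) / sample_rate
        out += amp * np.sin(phase)

    return out / np.max(np.abs(out))  # normalize to avoid clipping
```

Because the result contains only a few frequency-modulated tones, it keeps the gross spectrotemporal pattern of the original sentence while discarding the harmonic and broadband cues of natural speech; a non-speech counterpart built from comparable sine-tone modulations (SWnon) then shares the spectral and temporal complexity without carrying phonetic content, which is what lets the SWsp versus SWnon contrast target phonetics rather than acoustics.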