The Journal of the Acoustical Society of America

Integration efficiency for speech perception within and across sensory modalities by normal-hearing and hearing-impaired individuals

Abstract

In face-to-face speech communication, the listener extracts and integrates information from the acoustic and optic speech signals. Integration occurs within the auditory modality (i.e., across the acoustic frequency spectrum) and across sensory modalities (i.e., across the acoustic and optic signals). The difficulties experienced by some hearing-impaired listeners in understanding speech could be attributed to losses in the extraction of speech information, the integration of speech cues, or both. The present study evaluated the ability of normal-hearing and hearing-impaired listeners to integrate speech information within and across sensory modalities in order to determine the degree to which integration efficiency may be a factor in the performance of hearing-impaired listeners. Auditory-visual nonsense syllables consisting of eighteen medial consonants surrounded by the vowel [a] were processed into four nonoverlapping acoustic filter bands between 300 and 6000 Hz. A variety of one-, two-, three-, and four-filter-band combinations were presented for identification in auditory-only and auditory-visual conditions; a visual-only condition was also included. Integration efficiency was evaluated using a model of optimal integration. Results showed that normal-hearing and hearing-impaired listeners integrated information across the auditory and visual sensory modalities with a high degree of efficiency, independent of differences in auditory capabilities. However, across-frequency integration for auditory-only input was less efficient for hearing-impaired listeners. These individuals exhibited particular difficulty extracting information from the highest frequency band (4762-6000 Hz) when speech information was presented concurrently in the next lower-frequency band (1890-2381 Hz). Results suggest that integration of speech information within the auditory modality, but not across auditory and visual modalities, affects speech understanding in hearing-impaired listeners.
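The abstract does not specify which optimal-integration model was used. A standard benchmark in this literature, drawn from signal detection theory, predicts the combined sensitivity of independent cues as the root sum of squares of the single-cue d' values, and defines integration efficiency as the ratio of observed to predicted combined sensitivity. A minimal sketch under that assumption, with hypothetical d' values (not taken from the study):

    import math

    def optimal_dprime(single_cue_dprimes):
        # Independent-cue optimal combination: d'_pred = sqrt(d'_1^2 + d'_2^2 + ...)
        return math.sqrt(sum(d * d for d in single_cue_dprimes))

    def integration_efficiency(observed_dprime, single_cue_dprimes):
        # Efficiency = observed combined sensitivity / optimal prediction;
        # a value near 1.0 indicates near-optimal integration of the cues.
        return observed_dprime / optimal_dprime(single_cue_dprimes)

    # Hypothetical illustration (values chosen for the example only):
    d_auditory = 1.2     # auditory-only sensitivity
    d_visual = 0.9       # visual-only (speechreading) sensitivity
    d_av_observed = 1.4  # measured auditory-visual sensitivity

    print(optimal_dprime([d_auditory, d_visual]))                         # 1.5
    print(integration_efficiency(d_av_observed, [d_auditory, d_visual]))  # ~0.93

Under this reading, the study's finding of highly efficient auditory-visual integration corresponds to efficiency near 1.0 across modalities, while the hearing-impaired listeners' across-frequency deficit corresponds to efficiency well below 1.0 when combining auditory filter bands.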