Frontiers in Psychology

How can audiovisual pathways enhance the temporal resolution of time-compressed speech in blind subjects?



Abstract

In blind people, the visual channel cannot assist face-to-face communication via lipreading or visual prosody. Nevertheless, the visual system may enhance the evaluation of auditory information due to its cross-links to (1) the auditory system, (2) supramodal representations, and (3) frontal action-related areas. Apart from feedback or top-down support of, for example, the processing of spatial or phonological representations, experimental data have shown that the visual system can impact auditory perception at more basic computational stages such as temporal signal resolution. For example, blind subjects are more resistant than sighted subjects to backward masking, and this ability appears to be associated with activity in visual cortex. Regarding the comprehension of continuous speech, blind subjects can learn to use accelerated text-to-speech systems for “reading” texts at ultra-fast speaking rates (>16 syllables/s), far exceeding the normal range of 6 syllables/s. A functional magnetic resonance imaging study has shown that this ability significantly covaries with BOLD responses in, among other brain regions, bilateral pulvinar, right visual cortex, and left supplementary motor area. Furthermore, magnetoencephalographic measurements revealed a particular component in right occipital cortex that is phase-locked to the syllable onsets of accelerated speech. In sighted people, the “bottleneck” for understanding time-compressed speech seems related to higher demands for buffering phonological material and is presumably linked to frontal brain structures. The neurophysiological correlates of the functions overcoming this bottleneck, on the other hand, seem to depend upon early visual cortex activity. The present Hypothesis and Theory paper outlines a model that aims to bind these data together, based on early cross-modal pathways already known from various audiovisual experiments on cross-modal adjustments during space, time, and object recognition.


