
When eye meets ear : an investigation of audiovisual speech and non-speech perception in younger and older adults


Abstract

This dissertation addressed important questions regarding audiovisual (AV) perception. Study 1 revealed that AV speech perception modulated auditory processes, whereas AV non-speech perception affected visual processes. Interestingly, stimulus identification improved even though fewer neural resources, as reflected in smaller event-related potentials, were recruited, indicating that AV perception led to multisensory efficiency. AV interaction effects were also observed at both early and late processing stages, demonstrating that multisensory integration involves a network of neural processes. Study 1 thus showed that multisensory efficiency is a common principle in AV speech and non-speech stimulus recognition, yet it is reflected in different modalities, possibly due to the sensory dominance of a given task.

Study 2 extended our understanding of multisensory interaction by investigating the electrophysiological processes of AV speech perception in noise and whether those processes differ between younger and older adults. Both groups showed multisensory efficiency: behavioural performance improved while the auditory N1 amplitude was reduced during AV relative to unisensory speech perception. This amplitude reduction could be due to visual speech cues providing complementary information and thereby reducing processing demands on the auditory system. AV speech stimuli also led to an N1 latency shift, suggesting that auditory processing was faster during AV than during unisensory trials. This shift was more pronounced in older than in younger adults, indicating that older adults made more effective use of visual speech. Finally, auditory functioning predicted the degree of the N1 latency shift, consistent with the inverse effectiveness hypothesis, which holds that the less effective unisensory perception is, the larger the benefit derived from AV speech cues.

These results suggest that older adults were better "lip/speech" integrators than younger adults, possibly to compensate for age-related sensory deficits. Multisensory efficiency was evident in both younger and older adults, but it may be particularly relevant for older adults: if visual speech cues can alleviate sensory perceptual load, the remaining neural resources can be allocated to higher-level cognitive functions. This dissertation adds further support to the notion that multisensory interaction modulates sensory-specific processes, and it introduces the concept of multisensory efficiency as a potential principle underlying AV speech and non-speech perception.
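The inverse effectiveness hypothesis invoked above is commonly formalized in the multisensory literature as a relative enhancement index (in the tradition of Stein and Meredith's multisensory enhancement measure); the expression below is a standard illustration of the principle, not a formula taken from the dissertation itself:

\[ \mathrm{ME} = \frac{AV - \max(A, V)}{\max(A, V)} \times 100\% \]

Here \(AV\) denotes performance (e.g., identification accuracy) in the audiovisual condition, and \(A\) and \(V\) denote the corresponding unisensory performances. Inverse effectiveness predicts that \(\mathrm{ME}\) increases as \(\max(A, V)\) decreases: the weaker the best unisensory response, the larger the proportional multisensory gain.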

Bibliographic details

  • Author

    Winneke, Axel H.

  • Affiliation
  • Year 2009
  • Pages
  • Format PDF
  • Language en
  • CLC classification
  • Date indexed 2022-08-31 14:42:50

