
Multi-level audio-visual interactions in speech and language perception.



Abstract

That we perceive our environment as a unified scene, rather than as individual streams of auditory, visual, and other sensory information, has recently provided motivation to move past the long-held tradition of studying these systems separately. Although the senses are each unique in their transduction organs, neural pathways, and primary cortical areas, they are ultimately merged in a meaningful way that allows us to navigate the multisensory world. Investigating how the senses are merged has become an increasingly broad field of research in recent decades, with the introduction and growing availability of neuroimaging techniques. Areas of study range from multisensory object perception to cross-modal attention, multisensory interactions, and integration. This thesis focuses on audio-visual speech perception, with special attention to the facilitatory effects of visual information on auditory processing. When visual information is concordant with auditory information, it provides an advantage that is measurable in behavioral response times and evoked auditory fields (Chapter 3), and in increased entrainment to multisensory periodic stimuli as reflected by steady-state responses (Chapter 4). When the audio-visual information is incongruent, the auditory and visual inputs can often, but not always, combine to form a third, non-physically-present percept (known as the McGurk effect). This effect is investigated (Chapter 5) using real-word stimuli. McGurk percepts were not robustly elicited for a majority of stimulus types, but patterns of responses suggest that the physical and lexical properties of the auditory and visual stimuli may affect the likelihood of obtaining the illusion. Together, these experiments add to the growing body of knowledge suggesting that audio-visual interactions occur at multiple stages of processing.

Record details

  • Author

    Rhone, Ariane E.

  • Affiliation

    University of Maryland, College Park.

  • Degree-granting institution: University of Maryland, College Park.
  • Subjects: Language, Linguistics; Biology, Neuroscience
  • Degree: Ph.D.
  • Year: 2011
  • Pagination: 191 p.
  • Total pages: 191
  • Original format: PDF
  • Language: eng
