The Journal of the Acoustical Society of America

Visual-tactile integration in speech perception: Evidence for modality neutral speech primitives


Abstract

Audio-visual [McGurk and MacDonald (1976). Nature 264, 746-748] and audio-tactile [Gick and Derrick (2009). Nature 462(7272), 502-504] speech stimuli enhance speech perception over audio stimuli alone. In addition, multimodal speech stimuli form an asymmetric window of integration that is consistent with the relative speeds of the various signals [Munhall, Gribble, Sacco, and Ward (1996). Percept. Psychophys. 58(3), 351-362; Gick, Ikegami, and Derrick (2010). J. Acoust. Soc. Am. 128(5), EL342-EL346]. In this experiment, participants were presented with video of faces producing /pa/ and /ba/ syllables, both alone and with air puffs occurring synchronously and at different timings up to 300 ms before and after the stop release. Perceivers were asked to identify the syllable they perceived, and were more likely to respond that they perceived /pa/ when air puffs were present, with an asymmetrical preference for puffs following the video signal, consistent with the relative speeds of the visual and air-puff signals. The results demonstrate that visual-tactile integration in speech perception occurs much as it does with audio-visual and audio-tactile stimuli. This finding contributes to the understanding of multimodal speech perception, lending support to the idea that speech is not perceived as an audio signal supplemented by information from other modes, but rather that the primitives of speech perception are, in principle, modality neutral. (C) 2016 Acoustical Society of America.
