Journal of Vision

The Influence of Emotion on Audiovisual Integration in the McGurk Effect

Abstract

In the McGurk Effect, cross-modally discrepant auditory and visual speech information is resolved into a unified percept. For example, the sound of a person articulating "ba" paired with a video display of a person articulating "ga" typically creates the heard percept "da." Furthermore, the McGurk Effect is robust to certain variables, including cross-modally incongruent gender and whether the stimuli are spoken or sung, but is affected by other variables, such as whether the audiovisually inconsistent phoneme creates a word or non-word. We tested the influence of emotion on the McGurk Effect. In Experiment 1, we recorded the audiovisual utterances of a model articulating /ba/, /da/, and /ga/ using happy, mad, sad, and neutral tones of voice and facial gestures. In experimental trials, auditory /ba/ was dubbed onto visual /ga/ to create McGurk stimuli typically heard as /da/. Emotional expression was included in the auditory channel, the visual channel, neither channel, or both channels. The comparison of interest was the strength of the McGurk Effect between stimuli with and without emotion. Experiment 2 tested the strength of the McGurk Effect using the same stimuli as before, but with reduced available emotion information: we masked the visual stimuli so that only the articulatory gestures of the mouth were visible. We found that the strength of the McGurk Effect is reduced by emotional expressions (p < 0.001). Furthermore, when we reduced the amount of visible emotion information in our stimuli in Experiment 2, the strength of the McGurk Effect was equivalent across all stimuli. These results may suggest that emotion information drains perceptual resources used in the audiovisual integration of speech. Findings will be discussed in light of the idea that the objects of perception in both cases may be the intended gestures of the communicator.