
The self-advantage in visual speech processing enhances audiovisual speech recognition in noise



Abstract

Individuals lip read themselves more accurately than they lip read others when only the visual speech signal is available (Tye-Murray et al., Psychonomic Bulletin & Review, 20, 115-119, 2013). This self-advantage for vision-only speech recognition is consistent with the common-coding hypothesis (Prinz, European Journal of Cognitive Psychology, 9, 129-154, 1997), which posits (1) that observing an action activates the same motor plan representation as actually performing that action and (2) that observing one's own actions activates motor plan representations more than observing others' actions does, because of greater congruity between percepts and corresponding motor plans. The present study extends this line of research to audiovisual speech recognition by examining whether there is a self-advantage when the visual signal is added to the auditory signal under poor listening conditions. Participants were assigned to subgroups for round-robin testing in which each participant was paired with every member of their subgroup, including themselves, serving as both talker and listener/observer. On average, the benefit participants obtained from the visual signal when they themselves were the talker was greater than when the talker was someone else, and also greater than the benefit others obtained from observing as well as listening to them. Moreover, the self-advantage in audiovisual speech recognition remained significant after statistically controlling for individual differences in both participants' ability to benefit from a visual speech signal and the extent to which their own visual speech signal benefited others. These findings are consistent with our previous finding of a self-advantage in lip reading and with the hypothesis of a common code for action perception and motor plan representation.
