Attention, Perception & Psychophysics

Visibility of speech articulation enhances auditory phonetic convergence


Abstract

Talkers automatically imitate aspects of perceived speech, a phenomenon known as phonetic convergence. Talkers have previously been found to converge to auditory and visual speech information. Furthermore, talkers converge more to the speech of a conversational partner who is seen and heard, relative to one who is just heard (Dias & Rosenblum Perception, 40, 1457-1466, 2011). A question raised by this finding is what visual information facilitates the enhancement effect. In the following experiments, we investigated the possible contributions of visible speech articulation to visual enhancement of phonetic convergence within the noninteractive context of a shadowing task. In Experiment 1, we examined the influence of the visibility of a talker on phonetic convergence when shadowing auditory speech either in the clear or in low-level auditory noise. The results suggest that visual speech can compensate for convergence that is reduced by auditory noise masking. Experiment 2 further established the visibility of articulatory mouth movements as being important to the visual enhancement of phonetic convergence. Furthermore, the word frequency and phonological neighborhood density characteristics of the words shadowed were found to significantly predict phonetic convergence in both experiments. Consistent with previous findings (e.g., Goldinger Psychological Review, 105, 251-279, 1998), phonetic convergence was greater when shadowing low-frequency words. Convergence was also found to be greater for low-density words, contrasting with previous predictions of the effect of phonological neighborhood density on auditory phonetic convergence (e.g., Pardo, Jordan, Mallari, Scanlon, & Lewandowski Journal of Memory and Language, 69, 183-195, 2013). Implications of the results for a gestural account of phonetic convergence are discussed.
