...
Home > Foreign-Language Journals > Frontiers in Psychology > Multi-Talker Speech Promotes Greater Knowledge-Based Spoken Mandarin Word Recognition in First and Second Language Listeners

Multi-Talker Speech Promotes Greater Knowledge-Based Spoken Mandarin Word Recognition in First and Second Language Listeners


Abstract

Spoken word recognition involves a perceptual tradeoff between reliance on the incoming acoustic signal and knowledge about likely sound categories and their co-occurrences as words. This study examined how adult second language (L2) learners navigate between acoustic-based and knowledge-based spoken word recognition when listening to highly variable, multi-talker truncated speech, and whether this perceptual tradeoff changes as L2 listeners gradually become more proficient in their L2 after multiple months of structured classroom learning. First language (L1) Mandarin Chinese listeners and L1 English-L2 Mandarin adult listeners took part in a gating experiment. The L2 listeners were tested twice: once at the start of their intermediate/advanced L2 language class and again two months later. L1 listeners were tested only once. Participants were asked to identify syllable-tone words that varied in syllable token frequency (high/low according to a spoken word corpus) and syllable-conditioned tonal probability (most probable/least probable in speech given the syllable). The stimuli were recorded by 16 different talkers and presented at eight gates, ranging from the onset only (gate 1), through the onset plus successive 40 ms increments (gates 2 through 7), to the full word (gate 8). Mixed-effects regression modeling was used to compare performance with our previous study, which used single-talker stimuli (Wiener, Lee, & Tao, 2019). The results indicated that multi-talker speech caused both L1 and L2 listeners to rely more heavily on knowledge-based processing of tone. L1 listeners were able to draw on distributional knowledge of syllable-tone probabilities at early gates and switch to predominantly acoustic-based processing when more of the signal was available. In contrast, L2 listeners, with their limited experience with talker range normalization, were less able to transition effectively from probability-based to acoustic-based processing.
Moreover, for the L2 listeners, the reliance on such distributional information for spoken word recognition appeared to be conditioned by the nature of the acoustic signal. Single-talker speech did not result in the same pattern of probability-based tone processing, suggesting that knowledge-based processing of L2 speech may only occur under certain acoustic conditions, such as multi-talker speech.
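The eight-gate truncation scheme described above can be sketched in code. This is a minimal illustration, not the authors' stimulus-preparation procedure: the onset and word durations used here (`onset_ms`, `word_ms`) are hypothetical placeholders, since the abstract specifies only the gating structure (onset only, then 40 ms increments, then the full word).

```python
def gate_durations(onset_ms: float, word_ms: float,
                   step_ms: float = 40.0, n_gates: int = 8):
    """Return the presented duration (ms) of each gate.

    Gate 1 presents only the syllable onset; gates 2 through 7 each add
    a further 40 ms after the onset; gate 8 presents the full word.
    """
    durations = [onset_ms]                      # gate 1: onset only
    for k in range(1, n_gates - 1):             # gates 2..7: onset + k * 40 ms
        durations.append(min(onset_ms + k * step_ms, word_ms))
    durations.append(word_ms)                   # gate 8: full word
    return durations

# Illustrative values only (not from the study):
print(gate_durations(onset_ms=60.0, word_ms=400.0))
# → [60.0, 100.0, 140.0, 180.0, 220.0, 260.0, 300.0, 400.0]
```

The `min(..., word_ms)` clamp simply guards against an intermediate gate exceeding the full-word duration for very short tokens.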
