Frontiers in Psychology

Echoes of L1 Syllable Structure in L2 Phoneme Recognition

Abstract

Learning to move from auditory signals to phonemic categories is a crucial component of first, second, and multilingual language acquisition. In L1 and simultaneous multilingual acquisition, learners build up phonological knowledge to structure their perception within a language. For sequential multilinguals, this knowledge may support or interfere with acquiring language-specific representations for a new phonemic categorization system. Syllable structure is a part of this phonological knowledge, and language-specific syllabification preferences influence language acquisition, including early word segmentation. As a result, we expect to see language-specific syllable structure influencing speech perception as well. Initial evidence of an effect appears in Ali et al. (2011), who argued that cross-linguistic differences in McGurk fusion within a syllable reflected listeners’ language-specific syllabification preferences. Building on a framework from Cho and McQueen (2006), we argue that this could reflect the Phonological-Superiority Hypothesis (differences in L1 syllabification preferences make some syllabic positions harder to classify than others) or the Phonetic-Superiority Hypothesis (the acoustic qualities of speech sounds in some positions make it difficult to perceive unfamiliar sounds). However, their design does not distinguish between these two hypotheses. The current study extends the work of Ali et al. (2011) by testing Japanese listeners and adding audio-only and congruent audio-visual stimuli to test the effects of syllabification preferences beyond just McGurk fusion. Eighteen native English speakers and 18 native Japanese speakers were asked to transcribe nonsense words in an artificial language. English allows stop consonants in syllable codas while Japanese heavily restricts them, but both groups showed similar patterns of McGurk fusion in stop codas. This is inconsistent with the Phonological-Superiority Hypothesis. However, when visual information was added, the phonetic influences on transcription accuracy largely disappeared. This is inconsistent with the Phonetic-Superiority Hypothesis. We argue from these results that neither acoustic informativity nor interference of a listener’s phonological knowledge is superior, and sketch a cognitively inspired rational cue integration framework as a third hypothesis to explain how L1 phonological knowledge affects L2 perception.
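The rational cue integration framework named at the end of the abstract is often read as Bayesian cue combination, in which auditory evidence, visual evidence, and an L1 phonotactic prior jointly determine the perceived category rather than any single cue being "superior." The sketch below illustrates that reading under stated assumptions only: the category set, the likelihood and prior values, and the integrate helper are invented for the example and are not taken from the paper.

```python
# Hypothetical illustration (not the authors' model): minimal Bayesian cue
# integration for a classic McGurk stimulus (auditory /ba/ + visual /ga/).
# Each cue contributes a likelihood; the L1 phonotactic prior reflects how
# plausible each category is for the listener in this syllable position.

audio_likelihood  = {"ba": 0.70, "da": 0.20, "ga": 0.10}  # assumed values
visual_likelihood = {"ba": 0.05, "da": 0.35, "ga": 0.60}  # assumed values
l1_prior          = {"ba": 0.40, "da": 0.40, "ga": 0.20}  # assumed L1 phonotactics

def integrate(audio, visual, prior):
    """Posterior over categories, proportional to the product of the cues."""
    unnormalized = {c: audio[c] * visual[c] * prior[c] for c in prior}
    total = sum(unnormalized.values())
    return {c: p / total for c, p in unnormalized.items()}

posterior = integrate(audio_likelihood, visual_likelihood, l1_prior)
print(max(posterior, key=posterior.get), posterior)
# With these assumed values the winning category is the fused percept "da":
# neither the auditory nor the visual cue dominates on its own.
```

Under this kind of account, L1 phonological knowledge enters as a prior that reweights the integrated evidence, rather than blocking perception of some positions outright (Phonological-Superiority) or leaving perception to the acoustics alone (Phonetic-Superiority).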
