Annual meeting of the Association for Computational Linguistics

Generalization in Artificial Language Learning: Modelling the Propensity to Generalize


Abstract

Experiments in Artificial Language Learning (ALL) have revealed much about the cognitive mechanisms underlying sequence and language learning in human adults, infants, and non-human animals. This paper focuses on their ability to generalize to novel grammatical instances (i.e., instances consistent with a familiarization pattern). Notably, the propensity to generalize appears to be negatively correlated with the amount of exposure to the artificial language, a fact that has been claimed to be contrary to the predictions of statistical models (Peña et al., 2002; Endress and Bonatti, 2007). In this paper, we propose to model generalization as a three-step process, and we demonstrate that using statistical models for the first two steps, contrary to widespread intuition in the ALL field, can explain the observed decrease in the propensity to generalize with exposure time.
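The abstract does not spell out the three steps, but the claim that a statistical learner can predict less generalization with longer exposure can be illustrated with a simple Bayesian hypothesis comparison. The sketch below is only an illustration under assumed numbers, not the authors' model: it pits a narrow hypothesis (only the familiarization items are grammatical) against a broad one (any pattern-consistent item is grammatical). Because the narrow hypothesis assigns higher probability to each repeated familiarization item, its posterior grows with exposure, so the predicted endorsement of novel consistent items falls. The function name, item counts, and the uniform-sampling assumption are all hypothetical.

# Minimal illustrative sketch (assumed numbers, not the authors' model):
# a Bayesian learner compares a narrow hypothesis ("only the familiarization
# items are grammatical") with a broad one ("every pattern-consistent item is
# grammatical"). Each familiarization token is assumed to be sampled uniformly
# from the chosen hypothesis' extension, so the narrow hypothesis gains
# posterior mass with every repetition, and the predicted acceptance of a
# novel consistent item (the propensity to generalize) decreases with exposure.

def propensity_to_generalize(n_exposures: int,
                             n_familiar: int = 9,
                             n_consistent: int = 12,
                             prior_broad: float = 0.5) -> float:
    """Posterior probability of the broad (generalizing) hypothesis,
    i.e. the probability of accepting a novel pattern-consistent item."""
    # Likelihood of the familiarization stream under each hypothesis.
    lik_narrow = (1.0 / n_familiar) ** n_exposures
    lik_broad = (1.0 / n_consistent) ** n_exposures

    # Bayes' rule over the two hypotheses.
    post_broad = (prior_broad * lik_broad) / (
        prior_broad * lik_broad + (1.0 - prior_broad) * lik_narrow
    )
    # A novel consistent item is grammatical only under the broad hypothesis.
    return post_broad


if __name__ == "__main__":
    for n in (2, 5, 10, 20, 40):
        print(f"{n:>2} exposures -> propensity {propensity_to_generalize(n):.3f}")

Running the sketch shows the posterior on the broad hypothesis, and with it the propensity to generalize, falling monotonically as the number of exposures grows, which is the qualitative pattern the paper sets out to explain.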
