
Contextual bootstrapping for grammar learning.


Abstract

The problem of grammar learning is a challenging one for both children and machines due to impoverished input: hidden grammatical structures, lack of explicit correction, and, in pro-drop languages, argument omission. This dissertation describes a computational model of child grammar learning using a probabilistic version of Embodied Construction Grammar (ECG) that demonstrates how the problem of impoverished input is alleviated through bootstrapping from the situational context. This model represents the convergence of: (1) a unified representation that integrates semantic knowledge, linguistic knowledge, and contextual knowledge, (2) a context-aware language understanding process, and (3) a structured grammar learning and generalization process.

Using situated child-directed utterances as learning input, the model performs two concurrent learning tasks: structural learning of the grammatical units and statistical learning of the associated parameters. The structural learning task is a guided search over the space of possible constructions. The search is informed by embodied semantic knowledge that the model has gathered through experience with the world even before learning grammar, and by situational knowledge that it obtains from context. The statistical learning task requires continuous updating of the parameters associated with the probabilistic grammar based on usage, and these parameters reflect shifting preferences over learned grammatical structures.

The computational model of grammar learning has been validated in two ways. It has been applied to a subset of the CHILDES Beijing corpus, a corpus of naturalistic parent-child interaction in Mandarin Chinese. Its learning behavior has also been examined more closely using an artificial miniature language. This learning model provides a precise, computational framework for fleshing out theories of construction formation and generalization.
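The abstract describes the statistical learning task only at a high level: parameters of the probabilistic grammar are updated continuously from usage, and they encode shifting preferences among learned constructions. The Python sketch below is a minimal illustration of that general idea under assumed details; the names (`ConstructionGrammar`, `observe`, `weight`) and the count-plus-smoothing scheme are hypothetical and are not taken from the dissertation.

```python
from collections import Counter

class ConstructionGrammar:
    """Toy probabilistic construction grammar: each construction carries a
    usage count, and its weight is a smoothed relative frequency."""

    def __init__(self, smoothing=1.0):
        self.counts = Counter()   # usage counts per construction name
        self.smoothing = smoothing

    def add_construction(self, name):
        # Structural learning would propose new constructions; here we
        # simply register a name with zero observed usage.
        self.counts.setdefault(name, 0)

    def observe(self, name):
        # Statistical learning: each successful use of a construction
        # increments its count, shifting preference toward it.
        self.counts[name] += 1

    def weight(self, name):
        # Smoothed relative frequency over all known constructions.
        total = sum(self.counts.values())
        k = len(self.counts)
        return (self.counts[name] + self.smoothing) / (total + k * self.smoothing)

# Usage: register two competing (hypothetical) constructions and update from usage.
grammar = ConstructionGrammar()
grammar.add_construction("throw-ball")   # a verb-specific construction
grammar.add_construction("transitive")   # a more general construction
for _ in range(3):
    grammar.observe("transitive")
grammar.observe("throw-ball")
print(grammar.weight("transitive"), grammar.weight("throw-ball"))
```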

Bibliographic details

  • Author

    Mok, Eva H.

  • Author affiliation

    University of California, Berkeley.

  • Degree grantor University of California, Berkeley.
  • Subject Psychology, Developmental; Computer Science; Psychology, Cognitive.
  • Degree Ph.D.
  • Year 2008
  • Pages 237 p.
  • Total pages 237
  • Format PDF
  • Language eng
