ACL-05: Association for Computational Linguistics Annual Meeting, June 25-30, 2005, Ann Arbor, MI (US)

Learning Stochastic OT Grammars: A Bayesian approach using Data Augmentation and Gibbs Sampling

Abstract

Stochastic Optimality Theory (Boersma, 1997) is a widely-used model in linguistics that did not have a theoretically sound learning method previously. In this paper, a Markov chain Monte-Carlo method is proposed for learning Stochastic OT Grammars. Following a Bayesian framework, the goal is finding the posterior distribution of the grammar given the relative frequencies of input-output pairs. The Data Augmentation algorithm allows one to simulate a joint posterior distribution by iterating two conditional sampling steps. This Gibbs sampler constructs a Markov chain that converges to the joint distribution, and the target posterior can be derived as its marginal distribution.
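
The abstract describes the two-step Data Augmentation scheme only at a high level. Below is a minimal sketch of that idea on a toy Stochastic OT grammar with two constraints and a single tableau of two candidates, where candidate A wins a trial whenever constraint C1's perturbed ranking value exceeds C2's. The priors, the noise standard deviation, the rejection step used to impute latent evaluations, and the observation counts are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: Data Augmentation / Gibbs sampling for a toy two-constraint
# Stochastic OT grammar. Assumed setup, not the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(0)
SIGMA = 2.0                          # evaluation noise s.d. (Boersma's conventional value)
PRIOR_MEAN, PRIOR_SD = 100.0, 10.0   # assumed Normal prior on ranking values

# Observed relative frequencies for one input (hypothetical counts):
# 1 = candidate A observed, 0 = candidate B observed.
n_A, n_B = 70, 30
outcomes = np.array([1] * n_A + [0] * n_B)

def sample_latent(mu, outcome):
    """Impute perturbed ranking values (x1, x2) consistent with the observed
    winner, by rejection from the unconstrained evaluation-noise distribution."""
    while True:
        x = mu + rng.normal(0.0, SIGMA, size=2)
        if (x[0] > x[1]) == bool(outcome):   # A wins iff C1 outranks C2 on this trial
            return x

def sample_mu(latents):
    """Conjugate Normal update: resample each constraint's ranking value
    given the imputed evaluation-time values of that constraint."""
    n = latents.shape[0]
    prec = 1.0 / PRIOR_SD**2 + n / SIGMA**2
    mean = (PRIOR_MEAN / PRIOR_SD**2 + latents.sum(axis=0) / SIGMA**2) / prec
    return rng.normal(mean, np.sqrt(1.0 / prec))

# Gibbs sampler: alternate the two conditional sampling steps.
mu = np.array([PRIOR_MEAN, PRIOR_MEAN])
samples = []
for it in range(2000):
    latents = np.array([sample_latent(mu, y) for y in outcomes])  # step 1: augment the data
    mu = sample_mu(latents)                                       # step 2: resample the grammar
    if it >= 500:                                                 # discard burn-in
        samples.append(mu.copy())

samples = np.array(samples)
print("posterior mean ranking difference (C1 - C2):",
      (samples[:, 0] - samples[:, 1]).mean())
```

The kept draws of the ranking values approximate the marginal posterior of the grammar given the observed output frequencies; with the assumed 70/30 counts, the posterior ranking difference settles near the value at which the Stochastic OT grammar itself would produce candidate A about 70% of the time.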
