Annual Meeting of the Association for Computational Linguistics (ACL 2011)

Enhancing Language Models in Statistical Machine Translation with Backward N-grams and Mutual Information Triggers



Abstract

In this paper, with a belief that a language model that embraces a larger context provides better prediction ability, we present two extensions to standard n-gram language models in statistical machine translation: a backward language model that augments the conventional forward language model, and a mutual information trigger model which captures long-distance dependencies that go beyond the scope of standard n-gram language models. We integrate the two proposed models into phrase-based statistical machine translation and conduct experiments on large-scale training data to investigate their effectiveness. Our experimental results show that both models are able to significantly improve translation quality and collectively achieve up to 1 BLEU point over a competitive baseline.
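The two extensions described above can be illustrated with a minimal sketch. This is not the paper's implementation: the toy corpus, add-one smoothing, and bigram order are illustrative assumptions. A backward language model scores a reversed sentence with an ordinary forward model, so each word is predicted from the words that follow it; the mutual information trigger model scores long-distance word pairs by their pointwise mutual information, which is not bounded by any n-gram window.

```python
import math
from collections import Counter

# Toy corpus (illustrative only; the paper trains on large-scale data).
CORPUS = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat and a dog sat".split(),
]

VOCAB = {w for s in CORPUS for w in s} | {"<s>", "</s>"}

def bigram_model(sentences):
    """Build an add-one-smoothed bigram log-probability function."""
    bigrams, unigrams = Counter(), Counter()
    for sent in sentences:
        padded = ["<s>"] + sent + ["</s>"]
        unigrams.update(padded[:-1])          # history counts
        bigrams.update(zip(padded, padded[1:]))
    V = len(VOCAB)

    def logprob(sent):
        padded = ["<s>"] + sent + ["</s>"]
        return sum(
            math.log((bigrams[(h, w)] + 1) / (unigrams[h] + V))
            for h, w in zip(padded, padded[1:])
        )
    return logprob

# Forward LM: each word is predicted from its predecessor.
forward_lm = bigram_model(CORPUS)

# Backward LM: train on reversed sentences and score reversed input,
# so each word is effectively predicted from the word following it.
_backward = bigram_model([list(reversed(s)) for s in CORPUS])
def backward_lm(sent):
    return _backward(list(reversed(sent)))

def pmi(trigger, target):
    """Pointwise mutual information between two words co-occurring in
    a sentence: the long-distance trigger score, independent of the
    distance between the two words."""
    n = len(CORPUS)
    p_x = sum(trigger in s for s in CORPUS) / n
    p_y = sum(target in s for s in CORPUS) / n
    p_xy = sum(trigger in s and target in s for s in CORPUS) / n
    return math.log(p_xy / (p_x * p_y)) if p_xy > 0 else float("-inf")

sent = "the cat sat".split()
print(forward_lm(sent), backward_lm(sent), pmi("dog", "rug"))
```

In a phrase-based decoder, the forward LM score, the backward LM score, and the summed trigger PMI would each enter the log-linear model as separate features with tuned weights; this sketch only shows how each score is computed.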


