2011 IEEE Workshop on Automatic Speech Recognition & Understanding

Efficient discriminative training of long-span language models


Abstract

Long-span language models, such as those involving syntactic dependencies, produce more coherent text than their n-gram counterparts. However, evaluating the large number of sentence hypotheses in a packed representation such as an ASR lattice is intractable under such long-span models, both during decoding and during discriminative training. The accepted compromise is to rescore only the N-best hypotheses in the lattice using the long-span LM. We present discriminative hill climbing, an efficient and effective discriminative training procedure for long-span LMs based on a hill climbing rescoring algorithm [1]. We empirically demonstrate significant computational savings as well as error-rate reductions over N-best training methods in a state-of-the-art ASR system for Broadcast News transcription.
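
The hill climbing rescoring algorithm [1] that the training procedure builds on can be pictured as a local search over sentence hypotheses: starting from an initial hypothesis (e.g., the lattice 1-best), one-word edits are tried position by position, and the search moves to the best-scoring neighbor until no edit improves the combined score. The Python sketch below is a minimal illustration only; the scoring function and the vocabulary-wide edit neighborhood are hypothetical simplifications (the actual algorithm restricts edits to alternatives licensed by the lattice), not the authors' implementation.

    # Minimal sketch of hill-climbing rescoring. Assumes a hypothetical
    # `score` function (e.g., acoustic score plus long-span LM score) and a
    # vocabulary-based edit neighborhood; both are simplifications.
    from typing import Callable, List

    def neighbors(hyp: List[str], pos: int, vocab: List[str]) -> List[List[str]]:
        """One-word edits at `pos`: substitutions, insertions, and deletion."""
        cands = []
        for w in vocab:
            cands.append(hyp[:pos] + [w] + hyp[pos + 1:])  # substitute word at pos
            cands.append(hyp[:pos] + [w] + hyp[pos:])      # insert before pos
        cands.append(hyp[:pos] + hyp[pos + 1:])            # delete word at pos
        return cands

    def hill_climb(initial: List[str], vocab: List[str],
                   score: Callable[[List[str]], float]) -> List[str]:
        """Steepest-ascent search: take the best neighbor until a local optimum."""
        current, current_score = initial, score(initial)
        while True:
            best, best_score = None, current_score
            for pos in range(len(current)):
                for cand in neighbors(current, pos, vocab):
                    s = score(cand)
                    if s > best_score:
                        best, best_score = cand, s
            if best is None:
                return current  # no edit improves the score: local optimum
            current, current_score = best, best_score

    # Toy check: with vocab ["speech", "recognition"] and a score that rewards
    # the bigram ("speech", "recognition"), hill_climb(["speech", "wreck"], ...)
    # substitutes "wreck" and returns ["speech", "recognition"].

Because each step scores only a local neighborhood of the current hypothesis rather than an entire N-best list, such a search can reach a good hypothesis while evaluating far fewer sentences under the expensive long-span model, which is consistent with the computational savings the abstract reports.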