Computer Speech and Language

Improved language modelling through better language model evaluation measures

Abstract

This paper explores the interaction between a language model's perplexity and its effect on the word error rate of a speech recognition system. Much recent research has indicated that these two measures are not as well correlated as was once thought, and many examples exist of models which have a much lower perplexity than the equivalent N-gram model, yet lead to no improvement in recognition accuracy.
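Since the abstract centers on perplexity as the standard language-model evaluation measure, a minimal sketch of how corpus perplexity is usually computed from a model's per-word probabilities may be helpful. The function and the toy numbers below are hypothetical illustrations under that standard definition, not code or data from the paper itself.

```python
import math

def perplexity(word_log_probs):
    """Perplexity = exp(-average log-probability per word).

    word_log_probs: natural-log probabilities the model assigns to each
    word of a held-out test corpus (including an end-of-sentence token,
    if the model uses one).
    """
    n = len(word_log_probs)
    avg_log_prob = sum(word_log_probs) / n
    return math.exp(-avg_log_prob)

# Toy example (hypothetical numbers): a model that assigns probability
# 0.1 to every word of a 4-word test string has perplexity 10.
log_probs = [math.log(0.1)] * 4
print(perplexity(log_probs))  # -> 10.0
```

A lower value means the model spreads less probability mass over wrong continuations on held-out text; the paper's point is that this quantity can drop substantially without the recognizer's word error rate improving.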
