IEEE International Conference on Acoustics, Speech and Signal Processing

Minimum word error training of long short-term memory recurrent neural network language models for speech recognition



Abstract

This paper describes minimum word error (MWE) training of recurrent neural network language models (RNNLMs) for speech recognition. RNNLMs are usually trained to minimize the cross entropy of the estimated word probabilities against the correct word sequence, which corresponds to the maximum likelihood criterion. However, this training does not necessarily maximize the performance measure of the target task, i.e., it does not explicitly minimize the word error rate (WER) in speech recognition. To address this problem, several discriminative training methods have been proposed for n-gram language models, but such methods for RNNLMs have not been sufficiently investigated. In this paper, we propose an MWE training method for RNNLMs and report significant WER reductions when the MWE method is applied to a standard Elman-type RNNLM and to a more advanced model, a Long Short-Term Memory (LSTM) RNNLM. We also present efficient MWE training with N-best lists on Graphics Processing Units (GPUs).
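As summarized above, MWE training replaces the cross-entropy objective with the expected number of word errors over an N-best list, where hypothesis posteriors are obtained from the combined acoustic and language model scores and the gradient flows back through the RNNLM score. Below is a minimal sketch of such an objective in PyTorch, not the authors' implementation; the rnnlm_logprob interface, the acoustic_scores argument, and the lm_scale weight are illustrative assumptions.

import torch

def word_errors(hyp, ref):
    # Word-level Levenshtein distance between two word sequences.
    d = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, 1):
        prev, d[0] = d[0], i
        for j, r in enumerate(ref, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (h != r))
    return d[len(ref)]

def mwe_loss(hyps, ref, rnnlm_logprob, acoustic_scores, lm_scale=10.0):
    # Expected number of word errors over an N-best list.
    #   hyps            : N-best word sequences (list of lists of words)
    #   ref             : correct word sequence (list of words)
    #   rnnlm_logprob   : callable returning a differentiable log-probability
    #                     (torch scalar) for a word sequence -- assumed interface
    #   acoustic_scores : per-hypothesis acoustic log-likelihoods (list of floats)
    errors = torch.tensor([float(word_errors(h, ref)) for h in hyps])
    lm = torch.stack([rnnlm_logprob(h) for h in hyps])        # differentiable LM scores
    total = torch.tensor(acoustic_scores) + lm_scale * lm     # combined hypothesis scores
    posteriors = torch.softmax(total, dim=0)                  # posteriors over the N-best list
    return torch.sum(posteriors * errors)                     # expected word errors to minimize

Backpropagating this loss updates the RNNLM so that hypotheses with fewer word errors receive higher posterior probability, which is the sense in which the criterion targets WER rather than cross entropy.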

