International Conference on Speech and Computer

LSTM-Based Language Models for Very Large Vocabulary Continuous Russian Speech Recognition System

Abstract

This paper presents language models based on Long Short-Term Memory (LSTM) neural networks for very large vocabulary continuous Russian speech recognition. We created neural networks with various numbers of units in the hidden and projection layers, using different optimization methods. The resulting LSTM-based language models were used for N-best list rescoring. We also tested a linear interpolation of the LSTM language model with the baseline 3-gram language model and achieved a 22% relative reduction in word error rate with respect to the baseline 3-gram model.
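
The abstract does not give implementation details, but the rescoring step it describes can be sketched as follows. This is a minimal illustration in Python, assuming each N-best hypothesis carries an acoustic score and per-word log-probabilities from both the LSTM and the 3-gram model; the function names, dictionary keys, interpolation weight lam, and LM scale lm_scale are illustrative placeholders, not values from the paper.

    import math
    from typing import Dict, List, Sequence

    def log_interp(lstm_lp: float, ngram_lp: float, lam: float) -> float:
        # log( lam * exp(lstm_lp) + (1 - lam) * exp(ngram_lp) ), computed stably.
        a = math.log(lam) + lstm_lp
        b = math.log1p(-lam) + ngram_lp
        m = max(a, b)
        return m + math.log(math.exp(a - m) + math.exp(b - m))

    def interpolated_sentence_logprob(lstm_lps: Sequence[float],
                                      ngram_lps: Sequence[float],
                                      lam: float) -> float:
        # Sentence log-probability under word-level linear interpolation of the two LMs.
        return sum(log_interp(a, b, lam) for a, b in zip(lstm_lps, ngram_lps))

    def rescore_nbest(nbest: List[Dict], lam: float = 0.5, lm_scale: float = 10.0) -> List[Dict]:
        # Re-rank an N-best list by acoustic score plus scaled interpolated LM score.
        # Each hypothesis dict is assumed (for this sketch) to hold:
        #   'text'           - the hypothesis word sequence,
        #   'acoustic_score' - acoustic log-likelihood from the first decoding pass,
        #   'lstm_logprobs'  - per-word log-probabilities from the LSTM LM,
        #   'ngram_logprobs' - per-word log-probabilities from the baseline 3-gram LM.
        for hyp in nbest:
            lm_lp = interpolated_sentence_logprob(hyp['lstm_logprobs'],
                                                  hyp['ngram_logprobs'], lam)
            hyp['total_score'] = hyp['acoustic_score'] + lm_scale * lm_lp
        return sorted(nbest, key=lambda h: h['total_score'], reverse=True)

Interpolation is done per word in the probability domain and summed in the log domain, which matches the usual definition of linear LM interpolation; combining whole-sentence scores instead would amount to a different (log-linear) combination.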
