IEEE International Conference on Acoustics, Speech and Signal Processing

Constructing long short-term memory based deep recurrent neural networks for large vocabulary speech recognition



Abstract

Long short-term memory (LSTM) based acoustic modeling methods have recently been shown to give state-of-the-art performance on some speech recognition tasks. To achieve further performance improvements, this work investigates deep extensions of LSTM, given that deep hierarchical models have proven more effective than shallow ones. Motivated by previous research on constructing deep recurrent neural networks (RNNs), alternative deep LSTM architectures are proposed and empirically evaluated on a large-vocabulary conversational telephone speech recognition task. In addition, a training procedure for LSTM networks on multi-GPU devices is introduced and discussed. Experimental results demonstrate that deep LSTM networks benefit from the added depth and yield state-of-the-art performance on this task.
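To make the "deep LSTM" idea concrete, the sketch below stacks standard LSTM layers so that each layer consumes the hidden-state sequence of the layer beneath it. This is a minimal NumPy illustration of the general stacking principle, not the paper's specific architecture; the gate ordering, weight shapes, and layer sizes here are assumptions for the example.

```python
import numpy as np

def lstm_layer(x_seq, W, U, b, hidden):
    """Run one LSTM layer over a sequence.

    x_seq: (T, input_dim) input sequence
    W: (4*hidden, input_dim) input weights, U: (4*hidden, hidden) recurrent weights
    b: (4*hidden,) biases. Gate order (assumed): input, forget, cell, output.
    Returns the (T, hidden) sequence of hidden states.
    """
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    T = x_seq.shape[0]
    h = np.zeros(hidden)           # hidden state
    c = np.zeros(hidden)           # cell state
    outputs = np.zeros((T, hidden))
    for t in range(T):
        z = W @ x_seq[t] + U @ h + b          # all four gates in one matmul
        i = sigmoid(z[:hidden])               # input gate
        f = sigmoid(z[hidden:2 * hidden])     # forget gate
        g = np.tanh(z[2 * hidden:3 * hidden]) # candidate cell update
        o = sigmoid(z[3 * hidden:])           # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
        outputs[t] = h
    return outputs

def deep_lstm(x_seq, layers):
    """Stack LSTM layers: each layer's input is the hidden sequence of the previous one."""
    seq = x_seq
    for (W, U, b, hidden) in layers:
        seq = lstm_layer(seq, W, U, b, hidden)
    return seq

# Example: a 2-layer deep LSTM on a 10-frame sequence of 40-dim acoustic features
# (dimensions are illustrative, not from the paper).
rng = np.random.default_rng(0)
def make_params(in_dim, hid):
    return (rng.standard_normal((4 * hid, in_dim)) * 0.1,
            rng.standard_normal((4 * hid, hid)) * 0.1,
            np.zeros(4 * hid),
            hid)

layers = [make_params(40, 32), make_params(32, 32)]
out = deep_lstm(rng.standard_normal((10, 40)), layers)
print(out.shape)  # (10, 32)
```

In a real acoustic model, the top layer's hidden sequence would feed a softmax over context-dependent phone states, and training would use backpropagation through time rather than this forward-only sketch.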


