Workshop on Automatic Speech Recognition and Understanding

ACCELERATING RECURRENT NEURAL NETWORK TRAINING VIA TWO STAGE CLASSES AND PARALLELIZATION

Abstract

Recurrent neural network (RNN) language models have proven successful at lowering perplexity and word error rate (WER) in automatic speech recognition (ASR). However, one challenge in adopting RNN language models is their heavy computational cost in training. In this paper, we propose two techniques to accelerate RNN training: 1) two stage class RNNs and 2) parallel RNN training. In experiments on a Microsoft internal short message dictation (SMD) data set, two stage class RNNs and parallel RNNs not only yield equal or lower WERs than the original RNNs but also accelerate training by 2 and 10 times, respectively. It is worth noting that the two stage class RNN speedup also applies at test time, which is essential for reducing latency in real-time ASR applications.
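The abstract does not spell out the factorization, but class-based RNN language models typically speed up the output layer by predicting a word class before the word itself, and a two stage variant adds a further superclass level. The NumPy sketch below illustrates such a two-level (superclass → class → word) output factorization under that assumption; the vocabulary size, hidden size, class assignment, and all parameter names are illustrative and not taken from the paper.

```python
# Minimal sketch of a two-stage class factorization for an RNN LM output layer.
# Sizes, the frequency-style word binning, and parameter names are assumptions
# for illustration only.
import numpy as np

rng = np.random.default_rng(0)

V = 10_000   # vocabulary size (assumed)
C1 = 22      # number of superclasses, roughly V ** (1/3)
C2 = 22      # classes per superclass, roughly V ** (1/3)
H = 200      # hidden layer size (assumed)

# Hypothetical assignment: word id -> (superclass, class within superclass).
# A real system would derive this from word frequencies or clustering.
words_per_class = int(np.ceil(V / (C1 * C2)))
superclass_of = np.arange(V) // (C2 * words_per_class)
class_of = (np.arange(V) // words_per_class) % C2

# Output parameters: several small softmaxes instead of one V-way softmax.
W_super = rng.standard_normal((C1, H)) * 0.01      # superclass scores
W_class = rng.standard_normal((C1, C2, H)) * 0.01  # class scores per superclass
W_word = rng.standard_normal((V, H)) * 0.01        # word scores, used per class

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def word_prob(h, w):
    """P(w | h) = P(super | h) * P(class | super, h) * P(w | super, class, h)."""
    s, c = superclass_of[w], class_of[w]
    p_super = softmax(W_super @ h)[s]          # C1-way softmax
    p_class = softmax(W_class[s] @ h)[c]       # C2-way softmax
    members = np.where((superclass_of == s) & (class_of == c))[0]
    p_word = softmax(W_word[members] @ h)[np.searchsorted(members, w)]
    return p_super * p_class * p_word

h = rng.standard_normal(H)   # stand-in for the RNN hidden state at one time step
print(word_prob(h, w=1234))
```

With balanced assignments, each prediction computes roughly C1 + C2 + V/(C1*C2) output activations instead of V, which is why the same factorization also reduces test-time latency as the abstract notes; the parallel-training speedup is a separate, orthogonal technique.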