
Improving Robustness of Deep Neural Network Acoustic Models via Speech Separation and Joint Adaptive Training



Abstract

Although deep neural network (DNN) acoustic models are known to be inherently noise robust, especially with matched training and testing data, the use of speech separation as a frontend and for deriving alternative feature representations has been shown to improve performance in challenging environments. We first present a supervised speech separation system that significantly improves automatic speech recognition (ASR) performance in realistic noise conditions. The system performs separation via ratio time-frequency masking; the ideal ratio mask (IRM) is estimated using DNNs. We then propose a framework that unifies separation and acoustic modeling via joint adaptive training. Since the modules for acoustic modeling and speech separation are implemented using DNNs, unification is done by introducing additional hidden layers with fixed weights and appropriate network architecture. On the CHiME-2 medium-large vocabulary ASR task, and with log mel spectral features as input to the acoustic model, an independently trained ratio masking frontend improves word error rates by 10.9% (relative) compared to the noisy baseline. In comparison, the jointly trained system improves performance by 14.4%. We also experiment with alternative feature representations to augment the standard log mel features, like the noise and speech estimates obtained from the separation module, and the standard feature set used for IRM estimation. Our best system obtains a word error rate of 15.4% (absolute), an improvement of 4.6 percentage points over the next best result on this corpus.
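The separation frontend described above works by estimating an ideal ratio mask (IRM) and applying it to the noisy time-frequency representation. A minimal sketch of the masking operation is below, assuming clean-speech and noise power spectra are available (true during training; at test time the paper estimates the mask with a DNN). The function names and the toy random spectra are illustrative, not from the paper.

```python
import numpy as np

def ideal_ratio_mask(speech_power, noise_power, eps=1e-10):
    """IRM in each time-frequency unit: speech energy over total energy.

    Values lie in [0, 1]; a unit dominated by speech gets a mask
    near 1, a noise-dominated unit gets a mask near 0.
    """
    return speech_power / (speech_power + noise_power + eps)

def apply_mask(noisy_spectrogram, mask):
    """Element-wise ratio masking of the noisy magnitude spectrogram."""
    return mask * noisy_spectrogram

# Toy example: random magnitude spectra, shape (frames, frequency bins).
rng = np.random.default_rng(0)
speech = rng.random((5, 64))
noise = rng.random((5, 64))

irm = ideal_ratio_mask(speech**2, noise**2)
separated = apply_mask(speech + noise, irm)
```

In the jointly trained system, this masking step is folded into the acoustic-model network as extra hidden layers (some with fixed weights), so the separation module and the recognizer are optimized together.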

Bibliographic information

  • Journal: other
  • Authors

    Arun Narayanan; DeLiang Wang;

  • Author affiliation
  • Year (Volume), Issue: -1(23),1
  • Year: -1
  • Pages: 92–101
  • Total pages: 28
  • Original format: PDF
  • Language
  • Classification
  • Keywords
