IEEE International Conference on Computer and Communications

Robust speech recognition combining cepstral and articulatory features



Abstract

In this paper, a nonlinear relationship between pronunciation and auditory perception is introduced into speech recognition, and the results show improved robustness. An Extreme Learning Machine that maps this relationship was trained on the MOCHA-TIMIT database. The Articulatory Features (AFs) produced by the network were fused with MFCCs to train the acoustic models, a DNN-HMM and a GMM-HMM. In the experiments, the MFCCs-AFs-GMM-HMM system shows a 117.0% relative increment of WER, compared with 125.6% for the MFCCs-GMM-HMM system, and the DNN-HMM model outperforms the GMM-HMM model in both relative and absolute terms.
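As a concrete illustration of the front end described in the abstract, the Python sketch below (not taken from the paper) trains an Extreme Learning Machine to map MFCC frames to articulatory features and then concatenates the predicted AFs with the MFCCs before acoustic-model training. The feature dimensions, hidden-layer size, and ridge regularizer are assumptions, and random arrays stand in for MOCHA-TIMIT frames.

import numpy as np

# Minimal illustrative sketch (not from the paper): an Extreme Learning
# Machine (ELM) with random, untrained hidden weights and a least-squares
# output layer maps MFCC frames to articulatory features (AFs).
class ELM:
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_hidden))  # random input weights
        self.b = rng.standard_normal(n_hidden)          # random biases
        self.beta = np.zeros((n_hidden, n_out))         # learned output weights

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, Y, reg=1e-3):
        # Ridge-regularized least squares for the output weights only.
        H = self._hidden(X)
        A = H.T @ H + reg * np.eye(H.shape[1])
        self.beta = np.linalg.solve(A, H.T @ Y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Stand-ins for MOCHA-TIMIT training frames: 13-dim MFCCs paired with
# 14-dim EMA articulatory trajectories (dimensions are assumptions).
mfcc_train = np.random.randn(5000, 13)
ema_train = np.random.randn(5000, 14)
elm = ELM(n_in=13, n_hidden=256, n_out=14).fit(mfcc_train, ema_train)

# Fused features for acoustic-model training (GMM-HMM or DNN-HMM):
# MFCCs concatenated with the ELM-predicted AFs.
mfcc_test = np.random.randn(1000, 13)
fused = np.hstack([mfcc_test, elm.predict(mfcc_test)])
print(fused.shape)  # (1000, 27)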
