Published in: International Conference on Human Language Technologies

Using Dependency Grammar Features in Whole Sentence Maximum Entropy Language Model for Speech Recognition



Abstract

In automatic speech recognition, the standard choice for a language model is the well-known n-gram model. An n-gram model predicts the probability of a word given its n-1 preceding words. However, it cannot explicitly learn the grammatical relations within a sentence. In the present work, in order to augment the n-gram model with grammatical features, we apply the Whole Sentence Maximum Entropy framework. The grammatical features are head-modifier relations between pairs of words, together with the labels of those relations, obtained with a dependency grammar. We evaluate the model on a large-vocabulary speech recognition task with the Wall Street Journal speech corpus. The results show a substantial improvement in both test set perplexity and word error rate.
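The scoring idea behind the Whole Sentence Maximum Entropy framework can be sketched as follows: the model rescores an entire sentence as P(s) ∝ p0(s)·exp(Σᵢ λᵢ·fᵢ(s)), where p0 is a baseline n-gram probability and the fᵢ are sentence-level features such as labeled head-modifier dependency relations. The feature names and weights below are purely illustrative, not taken from the paper.

```python
import math

def wsme_log_score(baseline_logprob, features, weights):
    """Unnormalized WSME log-score of a whole sentence.

    baseline_logprob: log p0(s) from an n-gram model (assumed given).
    features: dict mapping feature name -> count in this sentence,
              e.g. hypothetical labeled head-modifier indicators.
    weights: dict mapping feature name -> learned lambda (assumed trained).
    """
    log_score = baseline_logprob
    for name, count in features.items():
        # Each active dependency feature shifts the baseline score
        # by its weight times its count in the sentence.
        log_score += weights.get(name, 0.0) * count
    return log_score

# Toy sentence with two made-up labeled dependency features.
feats = {"head(sold)->mod(shares):obj": 1, "head(bank)->mod(the):det": 1}
lams = {"head(sold)->mod(shares):obj": 0.8, "head(bank)->mod(the):det": 0.1}
print(wsme_log_score(-12.5, feats, lams))  # -12.5 + 0.8 + 0.1 = -11.6
```

In rescoring, only relative scores between recognition hypotheses matter, so the intractable global normalization constant can be ignored.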


