International conference on intelligent text processing and computational linguistics

ASR Hypothesis Reranking Using Prior-Informed Restricted Boltzmann Machine



Abstract

Discriminative language models (DLMs) have been widely used for reranking competing hypotheses produced by an Automatic Speech Recognition (ASR) system. Because existing DLMs suffer from limited generalization power, we propose a novel DLM based on a discriminatively trained Restricted Boltzmann Machine (RBM). The hidden layer of the RBM improves generalization and allows for incorporating additional prior knowledge, including pre-trained parameters and entity-related priors. Our approach outperforms the single-layer-perceptron (SLP) reranking model, and fusing our approach with SLP achieves up to 1.3% absolute Word Error Rate (WER) reduction, a 180% relative improvement in WER reduction over the SLP reranker. In particular, the proposed prior-informed RBM reranker achieves its largest ASR error reduction (3.1% absolute WER) on content words.
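The abstract's core idea, scoring an ASR hypothesis with an RBM and combining that score with the recognizer's own, can be sketched as follows. This is a minimal illustration, not the paper's exact model: it assumes each hypothesis is encoded as a binary feature vector (e.g., n-gram indicators) and scores it with the RBM's negative free energy; all weights and the fusion parameter `alpha` are hypothetical.

```python
import numpy as np

def rbm_score(v, W, b_vis, b_hid):
    """Negative free energy -F(v) of an RBM with binary hidden units:
    -F(v) = b_vis . v + sum_j log(1 + exp(b_hid_j + W[:, j] . v)).
    Higher means the hypothesis is more probable under the RBM."""
    hidden_term = np.log1p(np.exp(b_hid + v @ W))
    return float(b_vis @ v + hidden_term.sum())

def rerank(hypotheses, features, asr_scores, W, b_vis, b_hid, alpha=1.0):
    """Re-sort hypotheses by ASR score plus alpha-weighted RBM score
    (both treated as log-domain scores; higher is better)."""
    combined = [asr + alpha * rbm_score(v, W, b_vis, b_hid)
                for asr, v in zip(asr_scores, features)]
    order = np.argsort(combined)[::-1]
    return [hypotheses[i] for i in order]

# Toy example: 3 binary features, 2 hidden units, illustrative weights.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
W = np.array([[1.0, 0.0],
              [-1.0, 0.0],
              [0.0, 1.0]])
b_vis, b_hid = np.zeros(3), np.zeros(2)
ranked = rerank(["hyp_a", "hyp_b"], [v1, v2], [0.0, 0.5], W, b_vis, b_hid)
```

Here the RBM score overturns the recognizer's preference: `hyp_b` has the higher ASR score (0.5 vs 0.0), but `hyp_a` activates a better-rewarded feature pattern and is ranked first. The entity-related priors mentioned in the abstract would enter through how the feature vectors and weights are constructed, which this sketch does not model.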
