Annual Conference of the International Speech Communication Association

Model compression applied to small-footprint keyword spotting



Abstract

Several consumer speech devices feature voice interfaces that perform on-device keyword spotting to initiate user interactions. Accurate on-device keyword spotting within a tight CPU budget is crucial for such devices. Motivated by this, we investigated two ways to improve deep neural network (DNN) acoustic models for keyword spotting without increasing CPU usage. First, we used low-rank weight matrices throughout the DNN. This allowed us to increase representational power by increasing the number of hidden nodes per layer without changing the total number of multiplications. Second, we used knowledge distilled from an ensemble of much larger DNNs used only during training. We systematically evaluated these two approaches on a massive corpus of far-field utterances. Alone, each technique improves performance, and together they give significant reductions in false alarms and misses without increasing CPU or memory usage.
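
Below is a minimal sketch of the two ideas described in the abstract. PyTorch is assumed (the paper does not specify an implementation), and the layer sizes, rank, number of output targets, temperature, and loss weighting are purely illustrative, not values from the paper. A low-rank layer factors each m×n weight matrix into an m×r and an r×n matrix, so the hidden-node count can grow at the same multiply budget, and a distillation loss trains the small student against temperature-softened posteriors from a larger teacher ensemble that is only used at training time.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LowRankLinear(nn.Module):
    """Replace a dense m->n layer with m->r and r->n layers (r << min(m, n)),
    reducing multiplications from m*n to r*(m+n)."""
    def __init__(self, in_features, out_features, rank):
        super().__init__()
        self.proj = nn.Linear(in_features, rank, bias=False)  # m x r factor
        self.expand = nn.Linear(rank, out_features)           # r x n factor

    def forward(self, x):
        return self.expand(self.proj(x))


def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Weighted sum of hard-label cross-entropy and KL divergence to the
    teacher's temperature-softened posteriors (a standard distillation recipe;
    T and alpha here are arbitrary illustrative values)."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1.0 - alpha) * soft


# Hypothetical small-footprint keyword-spotting DNN built from low-rank layers.
student = nn.Sequential(
    LowRankLinear(in_features=400, out_features=512, rank=64), nn.ReLU(),
    LowRankLinear(512, 512, 64), nn.ReLU(),
    nn.Linear(512, 3),  # e.g. {keyword, background speech, silence} targets
)

# Toy forward/backward pass with random tensors (shapes only, no real audio features).
feats = torch.randn(8, 400)
labels = torch.randint(0, 3, (8,))
teacher_logits = torch.randn(8, 3)  # stand-in for the teacher ensemble's output
loss = distillation_loss(student(feats), teacher_logits, labels)
loss.backward()
```

At inference time only the compact student runs on the device, so the teacher ensemble adds no CPU or memory cost in deployment.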
