IEEE International Conference on Acoustics, Speech and Signal Processing

A transfer learning and progressive stacking approach to reducing deep model sizes with an application to speech enhancement

Abstract

Leveraging transfer learning, we distill the knowledge in a conventional wide and deep neural network (DNN) into a narrower yet deeper model with fewer parameters and comparable system performance for speech enhancement. We present three transfer-learning solutions to accomplish this goal. First, in sequential transfer learning, the knowledge embedded in the output values of a high-performance DNN is used to guide the training of a smaller DNN model. In the second, multi-task transfer learning solution, the smaller DNN is trained on the output values of the larger DNN and on the speech enhancement task in parallel. Finally, progressive stacking transfer learning is accomplished through multi-task learning and DNN stacking. Our experimental evidence demonstrates a 5-fold parameter reduction with the proposed framework while maintaining similar enhancement performance.
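
To make the second solution concrete, below is a minimal PyTorch sketch of the multi-task objective: a narrower yet deeper student DNN is trained jointly against the wide teacher's outputs and the clean-speech enhancement targets. All layer sizes, the feature dimension, the interpolation weight `alpha`, and the random tensors are illustrative assumptions, not values from the paper; setting `alpha = 1` recovers the first, sequential solution, in which only the teacher's outputs guide training.

```python
import torch
import torch.nn as nn

def make_dnn(in_dim, hidden, layers, out_dim):
    """Fully connected regression DNN with `layers` hidden layers of width `hidden`."""
    mods, d = [], in_dim
    for _ in range(layers):
        mods += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    mods.append(nn.Linear(d, out_dim))
    return nn.Sequential(*mods)

feat_dim = 257                                   # e.g. log-power-spectrum bins (assumed)
teacher = make_dnn(feat_dim, 2048, 3, feat_dim)  # wide teacher (sizes assumed)
student = make_dnn(feat_dim, 512, 6, feat_dim)   # narrower yet deeper student

opt = torch.optim.Adam(student.parameters(), lr=1e-4)
mse = nn.MSELoss()
alpha = 0.5                                      # distillation vs. enhancement weight (assumed)

noisy = torch.randn(32, feat_dim)                # placeholder noisy-speech features
clean = torch.randn(32, feat_dim)                # placeholder clean-speech targets

with torch.no_grad():
    soft = teacher(noisy)                        # knowledge embedded in the teacher's outputs

pred = student(noisy)
loss = alpha * mse(pred, soft) + (1 - alpha) * mse(pred, clean)
opt.zero_grad()
loss.backward()
opt.step()
```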
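The third, progressive stacking solution can be sketched in the same spirit; the growth schedule and the exact stacking recipe below are assumptions for illustration only. A shallow student is trained first, then one extra hidden block is inserted before the output layer and training resumes with the multi-task loss above, repeating until the target depth is reached. Because each step adds only a freshly initialized block, the earlier layers keep the knowledge already transferred from the teacher.

```python
import torch.nn as nn

def shallow_student(in_dim, hidden, out_dim):
    """Starting point: a single hidden layer."""
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

def grow(model, hidden):
    """Insert one (Linear, ReLU) block before the output layer,
    keeping all previously trained weights intact."""
    layers = list(model.children())
    return nn.Sequential(*layers[:-1],
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         layers[-1])

feat_dim, hidden = 257, 512           # sizes assumed, as above
student = shallow_student(feat_dim, hidden, feat_dim)
for stage in range(5):                # deepen one block at a time
    # ... train `student` here with the multi-task loss sketched above ...
    student = grow(student, hidden)
print(student)                        # final narrow-but-deep DNN (6 hidden layers)
```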
