Chinese Journal of Acoustics (声学学报, English edition)

Improvement of joint optimization of masks and deep recurrent neural networks for monaural speech separation using optimized activation functions


Abstract

Single-channel speech separation has been a challenging task for the speech separation community for the last three decades. With the advent of deep learning, it is now possible to separate speech using deep neural networks (DNN) and deep recurrent neural networks (DRNN), and researchers are working to improve DNN and DRNN models for monaural speech separation. In this paper, we improve an existing DRNN- and DNN-based model for speech separation by using optimized activation functions. Instead of the rectified linear unit (ReLU), we implement the leaky ReLU, the exponential linear unit, an exponential function, the inverse square root linear unit, and the inverse cubic root linear unit (ICRLU) as activation functions. The ICRLU and the exponential function are new activation functions proposed in this work. These activation functions overcome the dying-ReLU problem, achieve better separation results than the ReLU function, and also reduce the computational cost of DNN- and DRNN-based monaural speech separation.
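The activation functions compared in the abstract can be sketched as follows. The formulas for ReLU, leaky ReLU, ELU, and ISRLU are standard; the ICRLU definition below is only a hypothetical analogue of ISRLU (a cube root in place of the square root), since the paper's exact formula is not given in the abstract, and the `alpha` parameters are illustrative defaults.

```python
import numpy as np

def relu(x):
    # Standard rectified linear unit: gradient is exactly zero for x < 0,
    # which is the cause of the "dying ReLU" problem.
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Small negative slope keeps gradients alive for x < 0.
    return np.where(x >= 0, x, alpha * x)

def elu(x, alpha=1.0):
    # Exponential linear unit: smooth saturation toward -alpha for x < 0.
    return np.where(x >= 0, x, alpha * (np.exp(x) - 1.0))

def isrlu(x, alpha=1.0):
    # Inverse square root linear unit: x / sqrt(1 + alpha * x^2) for x < 0.
    return np.where(x >= 0, x, x / np.sqrt(1.0 + alpha * x * x))

def icrlu(x, alpha=1.0):
    # HYPOTHETICAL inverse cubic root linear unit, sketched by analogy with
    # ISRLU: x / cbrt(1 + alpha * |x|^3) for x < 0. The paper's actual
    # definition may differ.
    return np.where(x >= 0, x, x / np.cbrt(1.0 + alpha * np.abs(x) ** 3))
```

All five functions are identity for non-negative inputs; they differ only in how they treat negative pre-activations, which is what determines whether units can recover once their input goes negative.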
