Chinese Journal of Acoustics (English Edition)
Improvement of joint optimization of masks and deep recurrent neural networks for monaural speech separation using optimized activation functions
Single-channel speech separation has been a challenging task for the speech separation community for the last three decades. With deep learning, it is now possible to separate speech using deep neural networks (DNN) and deep recurrent neural networks (DRNN), and researchers continue to improve DNN and DRNN models for monaural speech separation. In this paper, we improve an existing DRNN- and DNN-based model for speech separation by using optimized activation functions. Instead of the rectified linear unit (ReLU), we implement the leaky ReLU, the exponential linear unit, an exponential function, the inverse square root linear unit, and the inverse cubic root linear unit (ICRLU) as activation functions. The ICRLU and the exponential function are new activation functions proposed in this work. These activation functions overcome the dying ReLU problem, achieve better separation results than the ReLU function, and also reduce the computational cost of DNN- and DRNN-based monaural speech separation.
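The standard forms of the alternative activations named above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the abstract does not give the paper's definitions of its proposed ICRLU or exponential function, so only the previously published activations (leaky ReLU, ELU, ISRLU) are shown, using their commonly cited formulas.

```python
import math

def relu(x):
    # Standard ReLU; its zero gradient for x < 0 is the cause
    # of the "dying ReLU" problem mentioned in the abstract.
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: a small negative slope keeps gradients
    # nonzero for x < 0, avoiding dead units.
    return x if x >= 0 else alpha * x

def elu(x, alpha=1.0):
    # Exponential linear unit: smooth saturation toward
    # -alpha on the negative side.
    return x if x >= 0 else alpha * (math.exp(x) - 1.0)

def isrlu(x, alpha=1.0):
    # Inverse square root linear unit: ELU-like shape but
    # avoids exp(), which lowers computational cost.
    return x if x >= 0 else x / math.sqrt(1.0 + alpha * x * x)
```

For positive inputs all four functions are the identity; they differ only in how they treat negative inputs, which is where ReLU's gradient vanishes.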