International Journal of Adaptive Control and Signal Processing

A learning framework of modified deep recurrent neural network for classification and recognition of voice mood



Abstract

Recognition of human emotions is a basic requirement in many real-time applications, and detecting emotions accurately from the voice provides relevant information for a range of purposes. Several computational methods have been employed to analyze human emotions, but most previous approaches suffer from drawbacks such as degraded signal quality, high storage requirements, increased computational complexity, and poor classification accuracy. The proposed work aims to classify embedded emotions accurately while minimizing the computational complexity of the MDDTRNN (modified deep duck and traveler recurrent neural network). The approach comprises four steps: preprocessing, feature extraction, feature selection, and classification. In feature extraction, spectral and frequency features are extracted with a boosted MFCC (Mel-frequency cepstral coefficients) method to improve training speed. In feature selection, the best features are chosen with the AAVOA (adaptive African vulture optimization algorithm). Classification is then performed by the MDDTRNN technique to produce the final emotion labels. Compared with existing approaches, the proposed work yields better classification outcomes: on the IEMOCAP dataset it attains 95.86% accuracy, 93.79% precision, 94.28% specificity, 92.89% sensitivity, and an error rate of 5.266; on the EMODB dataset it attains 96.27% accuracy, 94.83% precision, 93.16% specificity, 94% sensitivity, and an error rate of 4.982.
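As an illustration of the feature-extraction step, the standard MFCC pipeline (framing, windowing, power spectrum, mel filterbank, log compression, DCT) can be sketched from scratch in NumPy. This is a minimal sketch of conventional MFCCs with common default parameters, not the paper's "boosted MFCC" variant, whose modifications are not specified in the abstract:

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):                      # rising slope
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                      # falling slope
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160,
         n_fft=512, n_filters=26, n_ceps=13):
    """Return an (n_frames, n_ceps) matrix of MFCCs for a 1-D signal."""
    # 1. Slice the signal into overlapping frames and apply a Hamming window.
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop: i * hop + frame_len]
                       for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # 2. Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # 3. Mel filterbank energies, then log compression.
    fb = mel_filterbank(n_filters, n_fft, sr)
    log_energies = np.log(power @ fb.T + 1e-10)
    # 4. DCT-II decorrelates the log energies into cepstral coefficients.
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps),
                                  (2 * n + 1) / (2.0 * n_filters)))
    return log_energies @ dct.T
```

With a one-second signal at 16 kHz and the defaults above (25 ms frames, 10 ms hop), this yields a 98 x 13 feature matrix, which is the kind of spectral/frequency representation the classification stage consumes.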


