Biomedical Signal Processing and Control

Robust S1 and S2 heart sound recognition based on spectral restoration and multi-style training


Abstract

Recently, we proposed a deep-learning-based heart sound recognition framework that provides high recognition performance under clean testing conditions. However, recognition performance degrades notably when noise is present in the recording environment. This study investigates a spectral restoration algorithm that reduces noise components in heart sound signals to achieve robust S1 and S2 recognition in real-world scenarios. In addition to spectral restoration, a multi-style training strategy is adopted to train a robust acoustic model by incorporating acoustic observations from both original and restored heart sound signals. We term the proposed method SRMT (spectral restoration and multi-style training). The experimental procedure is as follows. First, an electronic stethoscope was used to record actual heart sounds, and noisy signals were artificially generated at different signal-to-noise ratios (SNRs). Second, an acoustic model based on deep neural networks (DNNs) was trained on original heart sounds and on heart sounds processed through spectral restoration. Third, the trained model was evaluated using the following metrics: accuracy, precision, recall, specificity, and F-measure. The results confirm the effectiveness of the proposed method for recognizing heart sounds in noisy environments. A model trained with SRMT outperforms one trained on clean data, with a 2.36% average accuracy improvement (from 85.44% to 87.80%) over clean, 20 dB, 15 dB, 10 dB, 5 dB, and 0 dB SNR conditions; the improvement is more notable at low SNRs: the average accuracy improvement is 3.87% (from 82.83% to 86.70%) in the 0 dB SNR condition. (C) 2018 Published by Elsevier Ltd.
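Two generic building blocks of the procedure above can be sketched in a few lines: mixing recorded noise into a clean signal at a prescribed SNR, and scoring binary S1-vs-S2 (or frame-level) decisions with the five reported metrics. This is a minimal illustration under our own assumptions, not the authors' implementation; the function names `mix_at_snr` and `binary_metrics` are ours:

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so the clean-to-noise power ratio equals `snr_db`, then mix."""
    noise = noise[: len(clean)]
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_clean / (p_noise * 10.0 ** (snr_db / 10.0)))
    return clean + scale * noise

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, specificity, and F-measure from binary labels."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)    # true positives
    tn = np.sum(~y_true & ~y_pred)  # true negatives
    fp = np.sum(~y_true & y_pred)   # false positives
    fn = np.sum(y_true & ~y_pred)   # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": precision,
        "recall": recall,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
        "f_measure": (2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0),
    }
```

For example, `mix_at_snr(clean, noise, 0.0)` yields a 0 dB mixture in which the added noise carries the same power as the heart sound, the hardest condition reported in the abstract.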


