Journal of Medical Imaging and Health Informatics

Phonocardiogram Classification Using Deep Convolutional Neural Networks with Majority Vote Strategy



Abstract

Most current automated phonocardiogram (PCG) classification methods rely on PCG segmentation: the common practice is to segment the PCG signal and then extract effective features from the segments for computer-aided auscultation or heart sound classification. However, accurate segmentation of the fundamental heart sounds depends greatly on the quality of the heart sound signal. In addition, methods that rely heavily on a segmentation algorithm considerably increase the computational burden. To address these two issues, we have developed a novel approach that classifies normal and abnormal heart sounds from un-segmented PCG signals. A deep Convolutional Neural Networks (DCNNs) method is proposed for recognizing normal and abnormal cardiac conditions. In the proposed method, one-dimensional heart sound signals are first converted into two-dimensional feature maps with three channels, each representing Mel-frequency spectral coefficient (MFSC) features: static, delta, and delta-delta. These artificial images are then fed to the proposed DCNNs for training and evaluation on normal and abnormal heart sound signals. A majority vote strategy is applied to the network outputs to obtain the final category of each PCG recording. Sensitivity (Se), Specificity (Sp), and Mean accuracy (MAcc) are used as the evaluation metrics. Results: Experiments demonstrated that our approach achieved a significant improvement, with a high Se, Sp, and MAcc of 92.73%, 96.90%, and 94.81%, respectively. The proposed method improves the MAcc by 5.63% compared with the best result in the CinC Challenge 2016. In addition, it shows better robustness when applied to long heart sound recordings. The proposed DCNNs-based method achieves the best accuracy in recognizing normal and abnormal heart sounds without a segmentation preprocessing step, and it significantly improves classification performance compared with the current state-of-the-art algorithms.
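As a concrete illustration of the pipeline described above, the following is a minimal sketch, not the authors' implementation: it assumes librosa for the log-mel (MFSC) and delta computations and a hypothetical trained CNN `model` that outputs an abnormal probability per window; the sampling rate, window length, hop, and mel parameters are illustrative choices, not values taken from the paper.

```python
# Sketch: un-segmented PCG -> 3-channel MFSC feature maps (static, delta,
# delta-delta) -> per-window CNN predictions -> majority vote over windows.
import numpy as np
import librosa

def mfsc_feature_map(pcg, sr=2000, n_mels=64, n_fft=256, hop_length=64):
    """Convert a 1-D PCG window into an (n_mels, frames, 3) feature map."""
    mel = librosa.feature.melspectrogram(y=pcg, sr=sr, n_fft=n_fft,
                                         hop_length=hop_length, n_mels=n_mels)
    static = librosa.power_to_db(mel)                # static MFSC (log-mel energies)
    delta = librosa.feature.delta(static)            # first-order temporal derivative
    delta2 = librosa.feature.delta(static, order=2)  # second-order (delta-delta)
    return np.stack([static, delta, delta2], axis=-1)

def classify_recording(pcg, sr, model, win_sec=3.0, hop_sec=1.0):
    """Score fixed-length windows with a trained CNN (`model`, hypothetical)
    and majority-vote the window labels. Returns 0 = normal, 1 = abnormal."""
    win, hop = int(win_sec * sr), int(hop_sec * sr)
    votes = []
    for start in range(0, max(len(pcg) - win, 0) + 1, hop):
        feat = mfsc_feature_map(pcg[start:start + win], sr=sr)
        prob_abnormal = model.predict(feat[np.newaxis, ...]).ravel()[0]
        votes.append(int(prob_abnormal > 0.5))
    return int(np.bincount(votes, minlength=2).argmax())
```

Voting over overlapping windows is also a plausible reason for the reported robustness on long recordings: a longer heart sound simply contributes more windows to the vote rather than requiring any change to the model.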


