BRAIN. Broad Research in Artificial Intelligence and Neurosciences

Classification of Segmented Phonocardiograms by Convolutional Neural Networks



Abstract

One of the leading causes of death worldwide in recent years is heart disease, or cardiovascular disease. Phonocardiograms (PCG) and electrocardiograms (ECG) are commonly used for the detection of heart diseases. Studies on cardiac signals focus especially on the classification of heart sounds, and researchers generally try to increase classification accuracy. For this purpose, many studies segment heart sounds into S1 and S2 components using methods such as Shannon energy, the discrete wavelet transform, and the Hilbert transform. In this study, the two classes of heart sounds in the PhysioNet Atraining data set, normal and abnormal, are classified with convolutional neural networks. The S1 and S2 parts of the heart sounds were segmented by the resampled energy method, and the phonocardiogram images obtained from these S1 and S2 parts were used for classification. The resized small phonocardiogram images were classified by convolutional neural networks, and the results were compared with those of previous studies. Classification with the CNN achieved an accuracy of 97.21%, a sensitivity of 94.78%, and a specificity of 99.65%. Accordingly, CNN classification with segmented S1-S2 sounds performed better than the results of previous studies. These experiments show that segmentation combined with convolutional neural networks increases classification accuracy and contributes efficiently to classification studies.
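The abstract does not specify the segmentation parameters or the CNN architecture used in the study, so the following is only a minimal sketch of the general pipeline it describes: an energy envelope is computed to locate S1/S2 onsets (approximated here with a Shannon energy envelope rather than the paper's exact resampled energy method), segment images are resized to a small fixed shape, and a small binary CNN separates normal from abnormal recordings. The frame lengths, image size, and network layout are illustrative assumptions, not values from the paper.

```python
# A minimal sketch, assuming a normalized PCG signal and 64x64 grayscale
# segment images. The envelope and the CNN layout are illustrative only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def shannon_energy_envelope(pcg, frame_len=40, hop=20):
    """Frame-wise Shannon energy of a PCG signal, standardized to zero mean."""
    x = pcg / (np.max(np.abs(pcg)) + 1e-12)               # normalize to [-1, 1]
    frames = []
    for start in range(0, len(x) - frame_len, hop):
        seg = x[start:start + frame_len]
        frames.append(-np.mean(seg**2 * np.log(seg**2 + 1e-12)))  # Shannon energy
    env = np.asarray(frames)
    return (env - env.mean()) / (env.std() + 1e-12)

def segment_onsets(envelope, threshold=0.5):
    """Indices where the envelope rises above the threshold; candidate S1/S2 onsets."""
    above = envelope > threshold
    return np.where(np.diff(above.astype(int)) == 1)[0]

def build_cnn(input_shape=(64, 64, 1)):
    """Small CNN for normal-vs-abnormal classification of segment images."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage with synthetic data, only to show the shapes involved:
# images: (n_segments, 64, 64, 1) resized S1/S2 plots; labels: 0=normal, 1=abnormal.
images = np.random.rand(8, 64, 64, 1).astype("float32")
labels = np.random.randint(0, 2, size=(8,))
model = build_cnn()
model.fit(images, labels, epochs=1, batch_size=4, verbose=0)
```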
