Journal of Medical Imaging and Health Informatics

Heart Sound Classification Using Multi Modal Data Representation and Deep Learning



Abstract

Heart anomalies are an important class of medical conditions from personal, public health, and social perspectives, and hence accurate and timely diagnoses are important. The heartbeat features two well-known amplitude peaks, termed S1 and S2. Some sound classification models rely on segmented sound intervals referenced to the locations of detected S1 and S2 peaks, which are often missing due to physiological causes and/or artifacts from the sound sampling process. The constituent and combined models we propose require no segmentation and are consequently more robust and more reliable. An intuitive phonocardiogram representation paired with a relatively simple deep learning architecture was found to be effective for classifying normal and abnormal heart sounds. A frequency-spectrum-based deep learning network also produced competitive classification results. When the classification models were merged into one via an SVM, performance improved further. The SVM classification model, comprising two time-domain submodels and a frequency-domain submodel, achieved a sensitivity of 0.9175, a specificity of 0.8886, and an accuracy of 0.9012.
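The SVM-based fusion step and the reported metrics can be illustrated with a minimal sketch. This is not the authors' implementation: the submodel scores are simulated placeholders, and the scikit-learn-based setup, variable names, and data below are illustrative assumptions. It only shows the general idea of stacking per-recording probabilities from two time-domain submodels and one frequency-domain submodel as features for an SVM, then computing sensitivity, specificity, and accuracy from the confusion matrix.

# Hypothetical sketch of SVM late fusion over three submodel scores.
# All data and names are illustrative placeholders, not the authors' code.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Placeholder labels: 0 = normal, 1 = abnormal heart sound.
n_train, n_test = 200, 50
y_train = rng.integers(0, 2, n_train)
y_test = rng.integers(0, 2, n_test)

def fake_probs(y):
    """Simulate a submodel's P(abnormal) scores loosely correlated with the labels."""
    return np.clip(y + rng.normal(0.0, 0.4, y.shape), 0.0, 1.0)

# Stack scores from two time-domain submodels and one frequency-domain
# submodel into a 3-dimensional feature vector per recording.
X_train = np.column_stack([fake_probs(y_train) for _ in range(3)])
X_test = np.column_stack([fake_probs(y_test) for _ in range(3)])

# SVM fusion classifier makes the final normal/abnormal decision.
fusion = SVC(kernel="rbf", C=1.0)
fusion.fit(X_train, y_train)
y_pred = fusion.predict(X_test)

# Sensitivity, specificity, and accuracy, the metrics quoted in the abstract.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred, labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"sensitivity={sensitivity:.4f} specificity={specificity:.4f} accuracy={accuracy:.4f}")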
