Journal of Digital Imaging: The Official Journal of the Society for Computer Applications in Radiology

Deep Learning Method for Automated Classification of Anteroposterior and Posteroanterior Chest Radiographs



Abstract

Ensuring correct radiograph view labeling is important for machine learning algorithm development and quality control of studies obtained from multiple facilities. The purpose of this study was to develop and test the performance of a deep convolutional neural network (DCNN) for the automated classification of frontal chest radiographs (CXRs) into anteroposterior (AP) or posteroanterior (PA) views. We obtained 112,120 CXRs from the NIH ChestX-ray14 database, a publicly available database of CXRs obtained in adult (106,179 (95%)) and pediatric (5941 (5%)) patients, consisting of 44,810 (40%) AP and 67,310 (60%) PA views. The CXRs were used to train, validate, and test a ResNet-18 DCNN for classification of radiographs into anteroposterior and posteroanterior views. A second DCNN was developed in the same manner using only the pediatric CXRs (2885 (49%) AP and 3056 (51%) PA). Receiver operating characteristic (ROC) curves with area under the curve (AUC) and standard diagnostic measures were used to evaluate the DCNNs' performance on the test dataset. The DCNNs trained on the entire CXR dataset and the pediatric CXR dataset had AUCs of 1.0 and 0.997, respectively, and accuracies of 99.6% and 98%, respectively, for distinguishing between AP and PA CXRs. Sensitivity and specificity were 99.6% and 99.5%, respectively, for the DCNN trained on the entire dataset, and both sensitivity and specificity were 98% for the DCNN trained on the pediatric dataset. The observed difference in performance between the two algorithms was not statistically significant (p = 0.17). Our DCNNs have high accuracy for classifying AP/PA orientation of frontal CXRs, with only a slight reduction in performance when the training dataset was reduced by 95%. Rapid classification of CXRs by the DCNN can facilitate annotation of large image datasets for machine learning and quality assurance purposes.
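The abstract describes fine-tuning a ResNet-18 DCNN as a two-class (AP vs. PA) classifier and evaluating it with ROC AUC. Below is a minimal sketch of that setup, assuming PyTorch/torchvision and scikit-learn (the abstract does not specify a framework, training hyperparameters, or preprocessing). The random tensors stand in for preprocessed ChestX-ray14 images, and helper names such as `train_one_epoch` and `evaluate_auc` are illustrative, not from the paper.

```python
# Sketch: ResNet-18 binary AP/PA classifier with ROC AUC evaluation.
# Assumptions: PyTorch/torchvision + scikit-learn; placeholder data in
# place of the ChestX-ray14 images; hyperparameters are illustrative.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18
from sklearn.metrics import roc_auc_score

device = "cuda" if torch.cuda.is_available() else "cpu"

# ResNet-18 backbone with a 2-class head (AP = 0, PA = 1).
model = resnet18(weights=None)  # ImageNet weights could be used instead
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder 224x224 "images" and view labels; in practice these would
# come from the ChestX-ray14 PNGs and their AP/PA annotations.
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, 2, (32,))
loader = DataLoader(TensorDataset(images, labels), batch_size=8, shuffle=True)

def train_one_epoch(model, loader):
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

@torch.no_grad()
def evaluate_auc(model, loader):
    # Collect the predicted probability of the PA class and compute ROC AUC.
    model.eval()
    scores, targets = [], []
    for x, y in loader:
        probs = torch.softmax(model(x.to(device)), dim=1)[:, 1]
        scores.extend(probs.cpu().tolist())
        targets.extend(y.tolist())
    return roc_auc_score(targets, scores)

train_one_epoch(model, loader)
print(f"ROC AUC on placeholder data: {evaluate_auc(model, loader):.3f}")
```

In this sketch the same loader is reused for training and evaluation only for brevity; the study used separate training, validation, and test splits, and reported sensitivity and specificity alongside AUC.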
