Biomedical Optics Express

Detection of exudates in fundus photographs with imbalanced learning using conditional generative adversarial network



Abstract

Diabetic retinopathy (DR) is a leading cause of blindness worldwide. However, 90% of DR-caused blindness can be prevented if the disease is diagnosed and treated early. Retinal exudates can be observed at an early stage of DR and serve as signs for early diagnosis. Deep convolutional neural networks (DCNNs) have been applied to exudate detection with promising results. However, two main challenges arise when applying DCNN-based methods to exudate detection: the very limited amount of data labeled by medical experts, and the severely imbalanced class distribution. First, there are many more images of normal eyes than of eyes with exudates, particularly in screening datasets. Second, within images that do contain exudates, normal (non-exudate) pixels far outnumber abnormal (exudate) pixels. To tackle the small-sample problem, an ensemble convolutional neural network (MU-net) based on the U-net architecture is presented in this paper. To alleviate the data-imbalance problem, a conditional generative adversarial network (cGAN) is adopted to generate label-preserving minority-class data for data augmentation. The network was trained on one dataset (e_ophtha_EX) and tested on three other public datasets (DiaReTDB1, HEI-MED, and MESSIDOR). As a data augmentation method, the cGAN significantly improves the network's robustness and generalization, achieving lesion-level F1-scores of 92.79%, 92.46%, 91.27%, and 94.34% on the four datasets, respectively, compared with 92.66%, 91.41%, 90.72%, and 90.58% without the cGAN. At the image level, accuracy with the cGAN was 95.45%, 92.13%, 88.76%, and 89.58%, compared with 86.36%, 87.64%, 76.33%, and 86.42% without it.
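The augmentation strategy described in the abstract, using a label-conditioned generator to synthesize minority-class samples until the class distribution is balanced, can be sketched roughly as follows. This is not the paper's implementation: the function names `balance_with_synthetic` and `toy_generator` are hypothetical, and the toy generator simply stands in for a trained cGAN that would produce label-preserving exudate images.

```python
import numpy as np

def balance_with_synthetic(images, labels, generator, minority_label=1):
    """Oversample the minority class with generator-produced samples.

    `generator` stands in for a trained conditional GAN: given a class
    label and a count, it returns that many synthetic images whose label
    is preserved by construction (the cGAN is conditioned on the label).
    """
    images = np.asarray(images)
    labels = np.asarray(labels)
    n_majority = int(np.sum(labels != minority_label))
    n_minority = int(np.sum(labels == minority_label))
    deficit = n_majority - n_minority
    if deficit <= 0:
        return images, labels  # already balanced

    # Synthesize exactly enough minority-class samples to reach parity.
    synthetic = generator(minority_label, deficit)
    images = np.concatenate([images, synthetic], axis=0)
    labels = np.concatenate([labels, np.full(deficit, minority_label)])
    return images, labels

def toy_generator(label, n, shape=(8, 8)):
    """Placeholder for a trained cGAN generator: returns n noise images."""
    return np.random.default_rng(label).normal(size=(n, *shape))
```

In the paper's setting the generator would be the trained cGAN and the "label" would be the lesion annotation being preserved; the balancing logic itself is the same regardless of how the synthetic samples are produced.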
