IEEE Conference on Computer Vision and Pattern Recognition Workshops

Generative Adversarial Learning for Reducing Manual Annotation in Semantic Segmentation on Large Scale Miscroscopy Images: Automated Vessel Segmentation in Retinal Fundus Image as Test Case



Abstract

Convolutional Neural Network (CNN) based semantic segmentation requires extensive pixel-level manual annotation, which is daunting for large microscopic images. This paper aims to mitigate this labeling effort by leveraging the recent concept of the generative adversarial network (GAN), in which a generator maps a latent noise space to realistic images while a discriminator differentiates between samples drawn from the database and from the generator. We extend this concept to multi-task learning, in which a discriminator-classifier network both differentiates between fake and real examples and assigns correct class labels. Though our concept is generic, we applied it to the challenging task of vessel segmentation in fundus images. We show that the proposed method is more data efficient than a CNN. Specifically, with 150K, 30K, and 15K training examples, the proposed method achieves mean AUCs of 0.962, 0.945, and 0.931 respectively, whereas the simple CNN achieves AUCs of 0.960, 0.921, and 0.916 respectively.
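To make the discriminator-classifier idea from the abstract concrete, below is a minimal PyTorch sketch of one possible setup: a generator maps latent noise to fake image patches, while a single network with a shared convolutional trunk outputs both a real/fake score and per-pixel class labels (vessel vs. background). The patch size, layer widths, and equal loss weighting are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of a GAN with a multi-task discriminator-classifier,
# assuming 32x32 grayscale fundus patches; all sizes are illustrative.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent noise vector to a fake 32x32 grayscale patch."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

class DiscriminatorClassifier(nn.Module):
    """Shared trunk with two heads: real/fake score and per-pixel vessel labels."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, padding=1), nn.LeakyReLU(0.2),
        )
        self.adv_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))
        self.seg_head = nn.Conv2d(64, 2, 1)  # 2 classes: background / vessel
    def forward(self, x):
        h = self.trunk(x)
        return self.adv_head(h), self.seg_head(h)

G, D = Generator(), DiscriminatorClassifier()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

real_patch = torch.randn(8, 1, 32, 32)        # placeholder for labeled fundus patches
real_mask = torch.randint(0, 2, (8, 32, 32))  # placeholder pixel-level vessel annotations

# Discriminator-classifier step: real-vs-fake loss plus supervised segmentation loss.
fake_patch = G(torch.randn(8, 64)).detach()
adv_real, seg_real = D(real_patch)
adv_fake, _ = D(fake_patch)
loss_d = (bce(adv_real, torch.ones_like(adv_real))
          + bce(adv_fake, torch.zeros_like(adv_fake))
          + ce(seg_real, real_mask))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to fool the adversarial head.
adv_fake, _ = D(G(torch.randn(8, 64)))
loss_g = bce(adv_fake, torch.ones_like(adv_fake))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The intuition this sketch tries to capture is that the unlabeled (generated) examples only feed the adversarial head, while the scarce annotated patches also supervise the segmentation head, so the shared trunk learns from both sources and fewer pixel-level labels are needed.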


