IEEE International Conference on Image Processing

You Only Need The Image: Unsupervised Few-Shot Semantic Segmentation With Co-Guidance Network



Abstract

Few-shot semantic segmentation has recently attracted attention for its ability to segment unseen-class images with only a few annotated support samples. Yet existing methods not only must be trained with large-scale pixel-level annotations on certain seen classes, but also require a few annotated support image-mask pairs to guide segmentation on each unseen class. In this paper, we propose the Co-guidance Network (CGNet) for unsupervised few-shot segmentation, which eliminates the annotation requirement on both seen and unseen classes. Specifically, CGNet segments unseen-class images with only unlabeled support images via the newly designed co-guidance mechanism. Moreover, CGNet is trained on seen classes with a novel co-existence recognition loss, which further removes the need for pixel-level annotations. Extensive experiments on the PASCAL-$5^{i}$ dataset show that the unsupervised CGNet performs comparably with state-of-the-art fully-supervised few-shot methods while largely alleviating the annotation requirement.
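To make the setting concrete, the sketch below illustrates the conventional *supervised* few-shot segmentation episode that the abstract says existing methods rely on (and that CGNet relaxes by dropping the support mask): a class prototype is pooled from an annotated support image-mask pair, and query pixels are labeled by similarity to that prototype. This is a generic prototype-matching baseline on toy NumPy features, not CGNet's co-guidance mechanism; all names and the similarity threshold are illustrative assumptions.

```python
import numpy as np

def masked_average_prototype(support_feats, support_mask):
    """Masked average pooling: mean support feature over foreground pixels.

    support_feats: (H, W, C) per-pixel features; support_mask: (H, W) binary.
    """
    fg = support_mask.astype(bool)
    return support_feats[fg].mean(axis=0)  # (C,)

def segment_query(query_feats, prototype, threshold=0.5):
    """Label each query pixel by cosine similarity to the class prototype."""
    q = query_feats / (np.linalg.norm(query_feats, axis=-1, keepdims=True) + 1e-8)
    p = prototype / (np.linalg.norm(prototype) + 1e-8)
    sim = q @ p                      # (H, W), values in [-1, 1]
    return (sim > threshold).astype(np.uint8)

# Toy 1-shot episode: foreground and background pixels have distinct
# feature directions plus a little noise (stand-ins for CNN features).
rng = np.random.default_rng(0)
fg_dir = np.array([1.0, 0.0, 0.0, 0.0])
bg_dir = np.array([0.0, 1.0, 0.0, 0.0])
H = W = 8

support_mask = np.zeros((H, W)); support_mask[2:6, 2:6] = 1
support_feats = np.where(support_mask[..., None] > 0, fg_dir, bg_dir)
support_feats = support_feats + 0.05 * rng.standard_normal((H, W, 4))

query_gt = np.zeros((H, W)); query_gt[1:5, 3:7] = 1   # unseen placement
query_feats = np.where(query_gt[..., None] > 0, fg_dir, bg_dir)
query_feats = query_feats + 0.05 * rng.standard_normal((H, W, 4))

proto = masked_average_prototype(support_feats, support_mask)
pred = segment_query(query_feats, proto)
iou = (pred * query_gt).sum() / ((pred + query_gt) > 0).sum()
```

Note that `masked_average_prototype` is exactly the step that needs a pixel-level support annotation; CGNet's contribution, per the abstract, is to guide the query segmentation from an *unlabeled* support image instead.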
