International Joint Conference on Artificial Intelligence

Co-attention CNNs for Unsupervised Object Co-segmentation



Abstract

Object co-segmentation aims to segment the objects common to a set of images. This paper presents a CNN-based method that is unsupervised and end-to-end trainable to better solve this task. Our method is unsupervised in the sense that it does not require any training data in the form of object masks, but merely a set of images jointly covering objects of a specific class. Our method comprises two collaborative CNN modules: a feature extractor and a co-attention map generator. The former extracts the features of the estimated objects and backgrounds, and is optimized with the proposed co-attention loss, which minimizes inter-image object discrepancy while maximizing intra-image figure-ground separation. The latter learns to generate co-attention maps under which the estimated figure-ground segmentation better fits the former module. Besides the co-attention loss, a mask loss is developed to retain whole objects and suppress noise. Experiments show that our method achieves superior results, even outperforming state-of-the-art supervised methods.
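The co-attention loss described above couples two objectives: pulling the estimated object features of different images toward each other (small inter-image object discrepancy) while pushing each image's object features away from its own background features (large intra-image figure-ground separation). The sketch below is a minimal, hypothetical PyTorch illustration of such a loss; the pooling scheme, squared-Euclidean distance, margin, and all function and variable names are assumptions for illustration, not the authors' exact formulation.

```python
# Hypothetical sketch of a co-attention loss in the spirit of the abstract.
# The exact formulation in the paper may differ.
import torch
import torch.nn.functional as F

def coattention_loss(features, attention_maps, margin=1.0):
    """features: list of (C, H, W) feature maps, one per image in the group.
    attention_maps: list of (1, H, W) co-attention maps with values in [0, 1].
    Returns a scalar loss combining an inter-image object term and an
    intra-image figure-ground separation term."""
    obj_vecs, bg_vecs = [], []
    for f, a in zip(features, attention_maps):
        # Attention-weighted average pooling for figure (object) and ground (background).
        obj = (f * a).sum(dim=(1, 2)) / (a.sum() + 1e-6)
        bg = (f * (1 - a)).sum(dim=(1, 2)) / ((1 - a).sum() + 1e-6)
        obj_vecs.append(F.normalize(obj, dim=0))
        bg_vecs.append(F.normalize(bg, dim=0))

    inter_image, figure_ground, pairs = 0.0, 0.0, 0
    n = len(obj_vecs)
    for i in range(n):
        # Intra-image term: object and background of the same image
        # should be at least `margin` apart (maximize figure-ground separation).
        figure_ground += F.relu(margin - (obj_vecs[i] - bg_vecs[i]).pow(2).sum())
        for j in range(i + 1, n):
            # Inter-image term: estimated objects across images of the same
            # class should have similar features (minimize discrepancy).
            inter_image += (obj_vecs[i] - obj_vecs[j]).pow(2).sum()
            pairs += 1
    return inter_image / max(pairs, 1) + figure_ground / n
```

In this reading, the co-attention map generator is trained so that the maps it produces make this loss small, while the feature extractor is trained on the figure and ground regions those maps induce; the mask loss mentioned in the abstract would act as an additional regularizer on the maps themselves.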
