European Conference on Computer Vision

Unsupervised Deep Metric Learning with Transformed Attention Consistency and Contrastive Clustering Loss

Abstract

Existing approaches for unsupervised metric learning focus on exploring self-supervision information within the input image itself. We observe that, when analyzing images, human eyes often compare images against each other instead of examining each image individually. In addition, they often pay attention to certain keypoints, image regions, or objects that are discriminative between image classes but highly consistent within classes. Even when the image is transformed, this attention pattern remains consistent. Motivated by this observation, we develop a new approach to unsupervised deep metric learning in which the network is trained on self-supervision information across images instead of within a single image. To characterize the consistent pattern of human attention during image comparison, we introduce the idea of transformed attention consistency: visually similar images, even after undergoing different image transforms, should share the same consistent visual attention map. This consistency leads to a pairwise self-supervision loss, allowing us to train a Siamese deep neural network to encode and compare images against their transformed or matched pairs. To further enhance the inter-class discriminative power of the features generated by this network, we adapt the concept of triplet loss from supervised metric learning to our unsupervised setting and introduce a contrastive clustering loss. Our extensive experimental results on benchmark datasets demonstrate that the proposed method outperforms current state-of-the-art methods for unsupervised metric learning by a large margin.
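To make the two losses concrete, below is a minimal PyTorch sketch. It assumes a horizontal flip as the known image transform, spatial attention maps of shape (B, H, W), and k-means-style pseudo-labels with cluster centroids for the contrastive clustering loss; the function names and the centroid-based hardest-negative formulation are illustrative assumptions, not the authors' reference implementation.

```python
# Hedged sketch of the paper's two loss ideas; shapes and the flip transform
# are assumptions for illustration, not the authors' released code.
import torch
import torch.nn.functional as F

def attention_consistency_loss(attn_orig, attn_transformed):
    """Pairwise self-supervision loss: the attention map of the original
    image, pushed through the same known transform (here a horizontal
    flip), should match the attention map of the transformed image.

    attn_orig, attn_transformed: (B, H, W) spatial attention maps.
    """
    attn_warped = torch.flip(attn_orig, dims=[-1])  # apply the known transform
    return F.mse_loss(attn_warped, attn_transformed)

def contrastive_clustering_loss(features, centroids, assignments, margin=0.5):
    """Triplet-style loss against cluster centroids obtained from
    pseudo-labels (e.g. k-means): pull each feature toward its assigned
    centroid and push it at least `margin` away from the hardest other
    centroid.

    features:    (B, D) L2-normalized embeddings.
    centroids:   (K, D) L2-normalized cluster centers.
    assignments: (B,) long tensor of each sample's cluster index.
    """
    dists = torch.cdist(features, centroids)                      # (B, K)
    pos = dists.gather(1, assignments.unsqueeze(1)).squeeze(1)    # own centroid
    # mask out the positive centroid, then take the hardest negative
    neg_dists = dists.scatter(1, assignments.unsqueeze(1), float('inf'))
    neg = neg_dists.min(dim=1).values
    return F.relu(pos - neg + margin).mean()
```

In training, the two terms would be combined into one objective, e.g. `loss = attention_consistency_loss(a1, a2) + lam * contrastive_clustering_loss(f, c, y)`, with the weight `lam` treated as a hyperparameter; this weighting is an assumption here, not a value taken from the paper.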