IEEE/CVF Conference on Computer Vision and Pattern Recognition

Self-Supervised Adversarial Hashing Networks for Cross-Modal Retrieval



Abstract

Thanks to the success of deep learning, cross-modal retrieval has made significant progress recently. However, there still remains a crucial bottleneck: how to bridge the modality gap to further enhance the retrieval accuracy. In this paper, we propose a self-supervised adversarial hashing (SSAH) approach, which lies among the early attempts to incorporate adversarial learning into cross-modal hashing in a self-supervised fashion. The primary contribution of this work is that two adversarial networks are leveraged to maximize the semantic correlation and consistency of the representations between different modalities. In addition, we harness a self-supervised semantic network to discover high-level semantic information in the form of multi-label annotations. Such information guides the feature learning process and preserves the modality relationships in both the common semantic space and the Hamming space. Extensive experiments carried out on three benchmark datasets validate that the proposed SSAH surpasses the state-of-the-art methods.
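The abstract describes preserving modality relationships in the Hamming space, i.e. retrieving items of one modality (text) using binary hash codes compared against codes of another modality (images). As a toy illustration only, not the SSAH implementation, the sketch below uses random 8-bit codes (all names and sizes are hypothetical) and ranks database images by Hamming distance to a text query code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 8-bit binary hash codes for two modalities.
# In SSAH these codes come from learned hashing networks; here they are
# random, except that item 3's text code is aligned with its image code.
image_codes = rng.integers(0, 2, size=(5, 8))  # 5 database images
text_query = image_codes[3].copy()             # text code matching image 3

# Cross-modal retrieval in the Hamming space: rank database items by the
# number of differing bits between the query code and each image code.
hamming = np.count_nonzero(image_codes != text_query, axis=1)
ranking = np.argsort(hamming)  # indices of images, nearest first
```

Here the aligned item sits at distance 0 and is retrieved first; in the actual method, the adversarial and self-supervised semantic networks are what drive semantically related cross-modal pairs toward such small Hamming distances.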

