Journal: Computational Intelligence and Neuroscience

Deep Unsupervised Hashing for Large-Scale Cross-Modal Retrieval Using Knowledge Distillation Model


Abstract

Cross-modal hashing encodes heterogeneous multimedia data into compact binary codes, enabling fast and flexible retrieval across different modalities. Owing to its low storage cost and high retrieval efficiency, it has received widespread attention. Supervised deep hashing significantly improves search performance and usually yields more accurate results, but it requires extensive manual annotation of the data. In contrast, unsupervised deep hashing struggles to achieve satisfactory performance due to the lack of reliable supervisory information. To address this problem, and inspired by knowledge distillation, we propose a novel unsupervised knowledge-distillation cross-modal hashing method based on semantic alignment (SAKDH), which reconstructs a similarity matrix from the latent correlation information of a pretrained unsupervised teacher model; the reconstructed similarity matrix then guides a supervised student model. Specifically, the teacher model first applies an unsupervised semantic alignment hashing method to construct a modal-fusion similarity matrix; the student model, supervised by the teacher's distilled information, then generates more discriminative hash codes. Experimental results on two widely used benchmark datasets (MIRFLICKR-25K and NUS-WIDE) show that, compared with several representative unsupervised cross-modal hashing methods, the proposed method achieves a significant improvement in mean average precision (MAP), demonstrating its effectiveness for large-scale cross-modal data retrieval.
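The abstract outlines a two-stage pipeline: a teacher builds a modal-fusion similarity matrix without labels, and a student is trained so that its hash codes reproduce that matrix. The sketch below illustrates this idea under stated assumptions; the fusion weight `alpha`, the `tanh` relaxation of binary codes, and the MSE distillation objective are illustrative choices, not the paper's released implementation.

```python
# A minimal sketch of the teacher/student objectives described in the abstract.
# Feature dimensions, the fusion weight alpha, the tanh relaxation, and the
# MSE distillation objective are assumptions for illustration only.
import torch
import torch.nn.functional as F

def fusion_similarity(img_feat, txt_feat, alpha=0.5):
    """Teacher stage: build a modal-fusion similarity matrix by combining
    cosine-similarity matrices of the image and text modalities."""
    img = F.normalize(img_feat, dim=1)
    txt = F.normalize(txt_feat, dim=1)
    s_img = img @ img.t()                  # image-image similarities
    s_txt = txt @ txt.t()                  # text-text similarities
    return alpha * s_img + (1.0 - alpha) * s_txt

def distillation_loss(h_img, h_txt, s_teacher):
    """Student stage: push inner products of (relaxed) hash codes toward
    the teacher's reconstructed similarity matrix (kept fixed)."""
    b_img = torch.tanh(h_img)              # continuous relaxation of +/-1 codes
    b_txt = torch.tanh(h_txt)
    k = b_img.size(1)                      # hash code length
    s_student = b_img @ b_txt.t() / k      # scaled to roughly [-1, 1]
    return F.mse_loss(s_student, s_teacher.detach())

# Toy usage: 8 items, 512-d image features, 300-d text features, 32-bit codes.
img_feat, txt_feat = torch.randn(8, 512), torch.randn(8, 300)
s_teacher = fusion_similarity(img_feat, txt_feat)
h_img, h_txt = torch.randn(8, 32), torch.randn(8, 32)   # student network outputs
print(distillation_loss(h_img, h_txt, s_teacher).item())
```

At retrieval time the relaxed codes would be binarized with a sign function, so cross-modal search reduces to Hamming-distance ranking between image and text codes.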
