IEEE Transactions on Neural Networks and Learning Systems

Deep Semantic-Preserving Ordinal Hashing for Cross-Modal Similarity Search



Abstract

Cross-modal hashing has attracted increasing research attention due to its efficiency for large-scale multimedia retrieval. By learning feature representations and hash functions simultaneously, deep cross-modal hashing (DCMH) methods have shown superior performance. However, most existing DCMH methods adopt binary quantization functions (e.g., sign(·)) to generate hash codes, which limits retrieval performance because binary quantization functions are sensitive to small variations in numeric values. To this end, we propose a novel end-to-end ranking-based hashing framework, termed deep semantic-preserving ordinal hashing (DSPOH), which learns hash functions with deep neural networks by exploring the ranking structure of feature dimensions. In DSPOH, the ordinal representation, which encodes the relative rank ordering of feature dimensions, is used to generate hash codes. Such ordinal embedding benefits from the numeric stability of rank correlation measures. To make the hash codes discriminative, the ordinal representation is expected to predict the class labels well, so that ranking-based hash function learning is optimally compatible with label prediction. Meanwhile, intermodality similarity is preserved to guarantee that the hash codes of different modalities are consistent. Importantly, DSPOH can be effectively integrated with different types of network architectures, which demonstrates the flexibility and scalability of the proposed hashing framework. Extensive experiments on three widely used multimodal data sets show that DSPOH outperforms state-of-the-art methods on cross-modal retrieval tasks.
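The abstract's key contrast is between sign-based quantization and rank-order codes. As a minimal sketch of that intuition (a simplified winner-take-all style ordinal code, not the paper's exact DSPOH formulation), compare how the two behave under small numeric perturbations:

```python
import numpy as np

def sign_hash(x):
    # Binary quantization: one bit per feature dimension, given by its sign.
    # Dimensions whose values sit near zero can flip under tiny perturbations.
    return (x > 0).astype(int)

def ordinal_hash(x, k=4):
    # Ordinal representation: split the feature vector into non-overlapping
    # groups of size k and record the index of the largest dimension in each
    # group. The code depends only on the relative rank ordering, so it is
    # invariant to any monotone rescaling of the features.
    groups = np.asarray(x).reshape(-1, k)
    return np.argmax(groups, axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=8)

# Rank-order codes are unchanged by positive scaling, while raw numeric
# values (and hence near-zero sign bits) are not.
print(sign_hash(x))
print(ordinal_hash(x))
print(ordinal_hash(3.0 * x))  # same ordinal code as ordinal_hash(x)
```

The ordinal code changes only when a perturbation is large enough to swap the order of two dimensions within a group, which is the numeric-stability property of rank correlation measures the abstract appeals to.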
