Information Sciences: An International Journal

Semantic Boosting Cross-Modal Hashing for efficient multimedia retrieval



Abstract

Cross-modal hashing aims to embed data from different modalities into a common low-dimensional Hamming space and is an important component of cross-modal retrieval. Although many linear projection methods have been proposed to map cross-modal data into a common abstract space, the semantic similarity between cross-modal data is often ignored. To address this issue, we put forward a novel cross-modal hashing method named Semantic Boosting Cross-Modal Hashing (SBCMH). To preserve semantic similarity, we first apply multi-class logistic regression to project the heterogeneous data of each modality into a semantic space. To further narrow the semantic gap between modalities, we then use a joint boosting framework to learn hash functions and finally transform the mapped data representations into a measurable binary subspace. Comparative experiments on two public datasets demonstrate the effectiveness of the proposed SBCMH. (C) 2015 Elsevier Inc. All rights reserved.
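The abstract outlines a two-stage pipeline: a per-modality semantic projection via multi-class logistic regression, followed by learning hash functions that turn those projections into binary codes for Hamming-space retrieval. The sketch below is only an illustration of that data flow under simplifying assumptions, not the authors' SBCMH formulation: the joint boosting stage is replaced by simple mean-thresholding of the class-probability outputs, and the synthetic features, labels, and the semantic_projection/binarize helpers are all hypothetical.

import numpy as np
from scipy.spatial.distance import cdist
from sklearn.linear_model import LogisticRegression

def semantic_projection(X_train, y_train, X):
    """First stage: project one modality into the shared semantic space
    using multi-class logistic regression (class-probability vectors)."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, y_train)
    return clf.predict_proba(X)

def binarize(S):
    """Stand-in for the joint boosting stage (NOT the paper's method):
    threshold each semantic dimension at its mean to obtain binary codes."""
    return (S > S.mean(axis=0)).astype(np.uint8)

# Toy data: two modalities that share the same class labels.
rng = np.random.default_rng(0)
n, n_cls = 200, 8
y = rng.integers(0, n_cls, size=n)
X_img = rng.normal(size=(n, 64)) + 0.3 * y[:, None]   # synthetic "image" features
X_txt = rng.normal(size=(n, 32)) + 0.3 * y[:, None]   # synthetic "text" features

B_img = binarize(semantic_projection(X_img, y, X_img))  # database (image) codes
B_txt = binarize(semantic_projection(X_txt, y, X_txt))  # query (text) codes

# Cross-modal retrieval: rank images by Hamming distance to text query 0.
dist = cdist(B_txt[:1], B_img, metric="hamming")[0]
print("top-5 retrieved image indices:", np.argsort(dist)[:5])

In the paper itself the binary mapping is learned jointly across modalities with a boosting framework; the thresholding above only illustrates the flow from raw features to the semantic space and then to Hamming-space retrieval.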

