Multimedia Tools and Applications
Label guided correlation hashing for large-scale cross-modal retrieval

Abstract

With the explosive growth of multimedia data such as text and images, large-scale cross-modal retrieval has attracted increasing attention from the vision community. However, it still faces the problems of the so-called "media gap" and search efficiency. Looking into the literature, we find that one leading family of existing cross-modal retrieval methods alleviates these problems by capturing correlations across modalities while learning hashing codes. However, supervised label information is usually considered independently in the process of either generating hashing codes or learning the hashing function. To this end, we propose a label guided correlation cross-modal hashing method (LGCH), which explores an alternative way to exploit label information for effective cross-modal retrieval from two aspects: 1) LGCH learns a discriminative common latent representation across modalities through joint generalized canonical correlation analysis (GCCA) and a linear classifier; 2) to simultaneously generate binary codes and the hashing function, LGCH introduces an adaptive parameter to effectively fuse the common latent representation and the label guided representation. Moreover, each subproblem of LGCH has an elegant analytical solution. Experiments on cross-modal retrieval over three multimedia datasets show that LGCH performs favorably against many well-established baselines.
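As a rough illustration of the pipeline the abstract describes (a common latent space across modalities, fusion with a label-guided representation via an adaptive weight, sign binarization into hash codes, and Hamming-distance ranking), the sketch below uses scikit-learn's two-view CCA as a stand-in for GCCA and a least-squares label projection. The weight alpha, the toy data, and these substitutions are assumptions for illustration only, not the paper's actual LGCH formulation.

```python
# Minimal sketch of a label-guided cross-modal hashing pipeline (assumptions
# noted inline); not the authors' LGCH algorithm.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n, d_img, d_txt, n_cls, n_bits = 200, 64, 32, 5, 16

# Toy paired image/text features sharing a latent factor, plus one-hot labels.
labels = rng.integers(0, n_cls, size=n)
Y = np.eye(n_cls)[labels]
shared = rng.normal(size=(n, 8))
X_img = shared @ rng.normal(size=(8, d_img)) + 0.1 * rng.normal(size=(n, d_img))
X_txt = shared @ rng.normal(size=(8, d_txt)) + 0.1 * rng.normal(size=(n, d_txt))

# 1) Common latent representation across the two modalities
#    (two-view CCA used here as a stand-in for GCCA).
cca = CCA(n_components=n_bits)
Z_img, Z_txt = cca.fit_transform(X_img, X_txt)

# 2) Label-guided representation: map one-hot labels into the same
#    n_bits-dimensional space via a least-squares projection (an assumption).
W_lab, *_ = np.linalg.lstsq(Y, Z_img, rcond=None)
Z_lab = Y @ W_lab

# 3) Fuse the common and label-guided representations with a weight alpha,
#    then binarize with sign() to obtain hash codes for each modality.
alpha = 0.5
B_img = np.sign(alpha * Z_img + (1 - alpha) * Z_lab)
B_txt = np.sign(alpha * Z_txt + (1 - alpha) * Z_lab)

# 4) Cross-modal retrieval: rank database texts by Hamming distance
#    to an image query's hash code.
query = B_img[0]
hamming = (n_bits - B_txt @ query) / 2
top10 = np.argsort(hamming)[:10]
print("labels of top-10 retrieved texts:", labels[top10], "query label:", labels[0])
```

In LGCH the fusion weight is learned adaptively and each subproblem is solved analytically; in this sketch alpha is simply fixed by hand to keep the example short.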
