Journal of Visual Communication & Image Representation

Joint feature selection and graph regularization for modality-dependent cross-modal retrieval


Abstract

Most existing cross-modal retrieval methods ignore the discriminative semantics embedded in multi-modal data and the unique characteristics of different sub-retrieval tasks. To address this problem, we propose a novel approach named Joint Feature selection and Graph regularization for Modality-dependent cross-modal retrieval (JFGM). The key idea of JFGM is to learn a modality-dependent subspace for each sub-retrieval task while simultaneously preserving the semantic consistency of the multi-modal data. Specifically, in addition to shared subspace learning across modalities, a linear regression term is introduced to further correlate the discovered modality-dependent subspace with the explicit semantic space. Furthermore, a multi-modal graph regularization term is formulated to preserve inter-modality and intra-modality semantic consistency. To avoid over-fitting and to select discriminative features, the ℓ2,1-norm is imposed on the projection matrices. Experimental results on several publicly available datasets demonstrate the superiority of the proposed method over several state-of-the-art approaches.
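
The paper's actual objective function is not reproduced on this page. As a rough orientation only, the terms named in the abstract could combine into an objective of the following shape for a single sub-retrieval task; every symbol and trade-off weight (α, β, λ) below is an assumption for illustration, not taken from the paper:

% Speculative sketch (not from the paper): a JFGM-style objective for the
% sub-retrieval task whose query modality is q ∈ {v, t} (image or text).
\min_{U_v,\,U_t}\;
    \underbrace{\lVert X_v U_v - X_t U_t \rVert_F^2}_{\text{shared subspace learning}}
  + \alpha\,\underbrace{\lVert X_q U_q - Y \rVert_F^2}_{\text{regression to semantic space}}
  + \beta\,\underbrace{\operatorname{tr}\!\bigl(P^{\top} L P\bigr)}_{\text{multi-modal graph term}}
  + \lambda\,\bigl(\lVert U_v \rVert_{2,1} + \lVert U_t \rVert_{2,1}\bigr)

Here X_v and X_t would hold the image and text feature matrices, U_v and U_t the projection matrices, Y the semantic label matrix, P the stacked projected features of both modalities, and L the Laplacian of a multi-modal graph encoding inter- and intra-modality semantic consistency. Under this reading, the ℓ2,1-norms induce row sparsity in the projection matrices, which performs the joint feature selection, and tying the regression term to the query modality q is what makes the learned subspace modality-dependent, giving a separate objective instance per sub-retrieval task.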