IEEE International Conference on Multimedia and Expo

Enhanced isomorphic semantic representation for cross-media retrieval



Abstract

Cross-media retrieval is a useful technology that helps people find the information they need from huge amounts of multimodal data more efficiently. A common cross-media retrieval framework first maps the features of different modalities into an isomorphic semantic space so that the similarity between heterogeneous data can be measured. In most semantic-space-based methods, the mapping from each modality's original space to the semantic space is optimized independently, so the more discriminative characteristics of a particular modality are not exploited. In this paper, we propose a deep framework that introduces a latent embedding layer to learn joint parameters and thereby obtain semantically meaningful representations of images and texts. Specifically, the discriminative characteristics embedded in the textual modality can be transferred to images through the latent embedding layer and the joint parameters, enhancing the consistency between the semantic representations. Extensive experiments on three popular publicly available datasets demonstrate the superiority of the proposed method, which achieves new state-of-the-art results.
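The abstract's pipeline (modality-specific mappings into a shared semantic space, followed by a shared layer with joint parameters, then similarity matching) can be sketched as follows. All names, dimensions, weight shapes, and the tanh nonlinearity here are illustrative assumptions for exposition, not the paper's actual architecture or learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: raw image features, raw text features,
# and the shared (isomorphic) semantic space.
IMG_DIM, TXT_DIM, SEM_DIM = 4096, 300, 10

# Modality-specific projections (learned in practice; random for the sketch).
W_img = rng.normal(scale=0.01, size=(IMG_DIM, SEM_DIM))
W_txt = rng.normal(scale=0.01, size=(TXT_DIM, SEM_DIM))

# Joint parameters shared by both modalities -- a stand-in for the
# latent embedding layer applied after each modality's own mapping.
W_joint = rng.normal(scale=0.01, size=(SEM_DIM, SEM_DIM))

def embed(x, W_mod):
    """Map a raw feature vector into the shared semantic space."""
    h = np.tanh(x @ W_mod)   # modality-specific mapping
    return h @ W_joint       # shared mapping with joint parameters

def cosine(a, b):
    """Similarity between two points in the shared space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Once both modalities live in one space, cross-media similarity
# reduces to an ordinary vector comparison.
img = rng.normal(size=IMG_DIM)
txt = rng.normal(size=TXT_DIM)
score = cosine(embed(img, W_img), embed(txt, W_txt))
```

Because `W_joint` is shared, gradients from the (more discriminative) text branch would also update the layer the image branch passes through, which is one way joint parameters can transfer discriminative structure across modalities.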
