
A novel cross-modal hashing algorithm based on multimodal deep learning


Abstract

With the growing popularity of multimodal data on the Web, cross-modal retrieval on large-scale multimedia databases has become an important research topic. Hashing-based cross-modal retrieval methods assume that there is a latent space shared by the features of the different modalities. To model the relationship among heterogeneous data, most existing methods embed the data into a joint abstraction space by linear projections. However, these approaches are sensitive to noise in the data and cannot exploit unlabeled data or multimodal data with missing values in real-world applications. To address these challenges, we propose a novel multimodal deep-learning-based hashing (MDLH) algorithm. In particular, MDLH uses a deep neural network to encode heterogeneous features into a compact common representation and learns the hash functions on top of this common representation. The parameters of the whole model are fine-tuned in a supervised training stage. Experiments on two standard datasets show that the method achieves more effective cross-modal retrieval results than competing methods.
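The abstract outlines the general architecture: modality-specific deep encoders map heterogeneous features into a shared representation, and a hash layer on top of that representation produces binary codes, with the whole network fine-tuned under supervision. The sketch below illustrates this general deep cross-modal hashing pattern in PyTorch; the encoder depths, layer sizes, and the tanh relaxation are illustrative assumptions, not the paper's actual architecture or training procedure.

```python
# Illustrative sketch only (not the MDLH implementation): two modality-specific
# encoders map image and text features into a shared representation, from which
# a hash layer produces approximately binary codes. All dimensions are assumed.
import torch
import torch.nn as nn

class CrossModalHashNet(nn.Module):
    def __init__(self, img_dim=4096, txt_dim=1000, shared_dim=512, hash_bits=64):
        super().__init__()
        # Modality-specific deep encoders projecting into a common space
        self.img_encoder = nn.Sequential(
            nn.Linear(img_dim, 1024), nn.ReLU(),
            nn.Linear(1024, shared_dim), nn.ReLU(),
        )
        self.txt_encoder = nn.Sequential(
            nn.Linear(txt_dim, 1024), nn.ReLU(),
            nn.Linear(1024, shared_dim), nn.ReLU(),
        )
        # Hash layer: tanh is a differentiable relaxation of binary codes
        self.hash_layer = nn.Linear(shared_dim, hash_bits)

    def encode(self, x, modality):
        # Relaxed (real-valued) codes in (-1, 1), used during training
        h = self.img_encoder(x) if modality == "image" else self.txt_encoder(x)
        return torch.tanh(self.hash_layer(h))

    def hash_codes(self, x, modality):
        # Binarize at retrieval time by taking the sign of the relaxed codes
        return torch.sign(self.encode(x, modality))
```

In a supervised fine-tuning stage, a similarity-preserving loss (for example, a pairwise loss that pulls together the codes of image-text pairs sharing labels) would be applied to the relaxed codes, and retrieval would then compare the binarized codes of queries and database items by Hamming distance.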
