IEEE International Conference on Signal and Image Processing Applications

Cross-Modal Image Matching Based on Coupled Convolutional Sparse Coding and Feature Space Learning



Abstract

Coupled sparse representation (CSR) has achieved great success in cross-modal image matching in recent years. The model not only learns a common feature space for associating cross-domain image data for recognition, but also improves the accuracy of cross-modal image matching. However, because it performs coupled sparse coding in a local, patch-based manner, it struggles to extract deeper image features, which are important for further improving matching accuracy. To address this problem, this paper proposes an iterative cross-modal image matching algorithm based on coupled convolutional sparse coding and feature space learning (CCSCL). In contrast to existing CSR-based cross-modal image matching methods, this model provides a global and flexible way to overcome some limitations of CSR. By using convolutional sparse coding and operating on the whole image, CCSCL captures correlations between pixels well and obtains more accurate modal feature maps. In addition, a cross-iterative training algorithm based on the common feature space and correlation analysis of the coupled convolutional sparse coefficients is derived to solve the optimization problem efficiently. Experimental results show that the proposed model applies effectively to cross-domain image matching and attains 98% matching accuracy.
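The abstract contrasts patch-based sparse coding with convolutional sparse coding, which operates on the whole image at once. The sketch below is not the paper's CCSCL algorithm; it is a minimal, generic convolutional sparse coding step solved by ISTA in the Fourier domain (assuming circular boundary conditions), to illustrate what "coding the whole image with a bank of filters" means. All names (`conv_sparse_code`, `soft_threshold`) and parameter choices are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: shrink values toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def conv_sparse_code(x, filters, lam=0.1, n_iter=200):
    """Generic convolutional sparse coding by ISTA (illustrative sketch).

    Solves  min_z  0.5 * || x - sum_k d_k (*) z_k ||_2^2 + lam * sum_k ||z_k||_1
    where (*) is circular 2-D convolution over the whole image, so each
    feature map z_k has the same size as x (no patch extraction).
    """
    H, W = x.shape
    K = len(filters)
    # Zero-pad each small filter to image size; work in the frequency domain,
    # where circular convolution becomes elementwise multiplication.
    D = np.zeros((K, H, W))
    for k, d in enumerate(filters):
        D[k, :d.shape[0], :d.shape[1]] = d
    Df = np.fft.fft2(D)                       # (K, H, W) filter spectra
    Xf = np.fft.fft2(x)
    # Lipschitz constant of the data-term gradient: per-frequency operator is
    # rank-one d d^H, with norm sum_k |Df_k|^2; take the worst frequency.
    L = np.max(np.sum(np.abs(Df) ** 2, axis=0))
    step = 1.0 / L
    Z = np.zeros((K, H, W))                   # one full-size map per filter
    for _ in range(n_iter):
        Zf = np.fft.fft2(Z)
        Rf = np.sum(Df * Zf, axis=0) - Xf     # residual, in frequency domain
        grad = np.real(np.fft.ifft2(np.conj(Df) * Rf))  # correlation = D^T r
        Z = soft_threshold(Z - step * grad, step * lam)
    return Z
```

Because the maps cover the image globally, neighboring pixels share the same filters rather than being split across independent patches, which is the property the abstract credits for capturing inter-pixel correlation. The paper's coupled variant additionally ties two such codings (one per modality) through a shared feature space, which this single-modality sketch does not attempt.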
