IEEE Transactions on Computational Imaging

Multimodal Image Super-Resolution via Joint Sparse Representations Induced by Coupled Dictionaries

Abstract

Real-world data processing problems often involve multiple image modalities associated with a given scene, such as RGB, infrared, or multispectral images. The fact that different image modalities often share certain attributes, such as edges, textures, and other structural primitives, represents an opportunity to enhance various image processing tasks. This paper proposes a new approach to construct a high-resolution (HR) version of a low-resolution (LR) image, given another HR image modality as guidance, based on joint sparse representations induced by coupled dictionaries. The proposed approach captures complex dependencies, including both similarities and disparities, between different image modalities in a learned sparse feature domain in lieu of the original image domain. It consists of two phases: a coupled dictionary learning phase and a coupled super-resolution phase. The learning phase learns a set of dictionaries from the training dataset that couple the different image modalities together in the sparse feature domain. In turn, the super-resolution phase leverages these dictionaries to construct an HR version of the LR target image, using another related image modality for guidance. In the advanced version of our approach, a multistage strategy and a neighbourhood regression concept are introduced to further improve the model capacity and performance. Extensive guided image super-resolution experiments on real multimodal images demonstrate that the proposed approach offers distinct advantages over state-of-the-art approaches, for example, overcoming the texture-copying artifacts that commonly result from inconsistency between the guidance and target images. Of particular relevance, the proposed model demonstrates much better robustness than competing deep models in a range of noisy scenarios.
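The core mechanism of the super-resolution phase — a sparse code shared across coupled dictionaries for the LR target, HR guidance, and HR target modalities — can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the dictionaries below are random stand-ins for learned ones, the test patch is synthetic, and `omp` is a textbook orthogonal matching pursuit rather than the paper's solver.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: find a k-sparse code z with D @ z ~ x."""
    residual = x.copy()
    support, coef = [], np.zeros(0)
    z = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j in support:
            break
        support.append(j)
        # Re-fit coefficients over the current support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    z[support] = coef
    return z

rng = np.random.default_rng(0)
d, n_atoms, k = 32, 64, 3  # patch dimension, dictionary size, sparsity

# Coupled dictionaries: atom i in each dictionary represents the same latent
# structure as seen in the LR target, HR guidance, and HR target domains.
# (Random here; in the paper they would come from the learning phase.)
D_lr = rng.standard_normal((d, n_atoms))
D_gd = rng.standard_normal((d, n_atoms))
D_hr = rng.standard_normal((d, n_atoms))
for D in (D_lr, D_gd, D_hr):
    D /= np.linalg.norm(D, axis=0)  # unit-norm atoms

# Synthetic test patches generated from one shared k-sparse code.
z_true = np.zeros(n_atoms)
z_true[rng.choice(n_atoms, size=k, replace=False)] = rng.standard_normal(k)
x_lr, x_gd, x_hr = D_lr @ z_true, D_gd @ z_true, D_hr @ z_true

# Joint sparse coding: encode the stacked LR-target + HR-guidance observation
# against the stacked dictionaries, then map the shared code through the HR
# target dictionary to reconstruct the missing HR patch.
z = omp(np.vstack([D_lr, D_gd]), np.concatenate([x_lr, x_gd]), k)
x_hr_hat = D_hr @ z
err = np.linalg.norm(x_hr_hat - x_hr) / np.linalg.norm(x_hr)
```

On this exactly-sparse synthetic example the shared code is recovered and the relative HR reconstruction error `err` is near zero; on real patches the learned dictionaries and the paper's similarity/disparity decomposition do the corresponding work.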
