Journal: IETE Technical Review
Contextual Information Driven Multi-modal Medical Image Fusion

Abstract

To exploit the contextual correlation between coefficients in the contourlet domain, a novel multi-modal medical image fusion method based on contextual information is proposed. First, the context information of the contourlet coefficients is calculated to capture the strong dependencies among coefficients. Second, a hidden Markov model based on this context information (C-CHMM) is constructed for the contourlet coefficients, describing the characteristics of a medical image with a small number of parameters. Then, the low-pass subband coefficients are combined by the magnitude-maximum rule, while the high-pass subband coefficients are fused by a new C-CHMM-driven multi-strategy fusion rule. Finally, the fused image is obtained by the inverse contourlet transform. Experimental results demonstrate that the proposed method effectively suppresses color distortion and yields better fusion quality than several typical fusion methods.
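Of the two combination rules the abstract names, the magnitude-maximum rule for the low-pass subbands is simple enough to sketch: at each position, keep whichever source coefficient has the larger absolute value. The sketch below is an illustration of that generic rule only; the function name and toy arrays are invented here, and the paper's C-CHMM-driven multi-strategy rule for the high-pass subbands is not reproduced.

```python
def magnitude_maximum_fuse(coeffs_a, coeffs_b):
    # Keep, at each position, the coefficient with the larger
    # absolute value. Generic magnitude-maximum rule as applied
    # to low-pass subband coefficients; the paper's C-CHMM-driven
    # high-pass rule is more elaborate and not shown here.
    return [
        [a if abs(a) >= abs(b) else b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(coeffs_a, coeffs_b)
    ]

# Toy low-pass subbands from two modalities (e.g. CT and MRI)
sub_a = [[3.0, -1.0], [0.5, 4.0]]
sub_b = [[-2.0, 5.0], [1.0, -3.0]]
fused = magnitude_maximum_fuse(sub_a, sub_b)
# fused -> [[3.0, 5.0], [1.0, 4.0]]
```

In a full pipeline this would run on the low-pass subbands produced by the contourlet decomposition of each source image, before the inverse transform reconstructs the fused image.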
