EURASIP Journal on Image and Video Processing

An efficient approach for robust multimodal retinal image registration based on UR-SIFT features and PIIFD descriptors


Abstract

Existing algorithms based on the scale invariant feature transform (SIFT) and Harris corners, such as edge-driven dual-bootstrap iterative closest point and the Harris-partial intensity invariant feature descriptor (PIIFD), respectively, have been shown to be robust in registering multimodal retinal images. However, they fail to register color retinal images with other modalities in the presence of large content or scale changes. Moreover, these approaches need preprocessing operations such as image resizing to perform well. This restricts the application of image registration to further analysis such as change detection and image fusion. Motivated by the need for efficient registration of multimodal retinal image pairs, this paper introduces a novel integrated approach that exploits features of the uniform robust scale invariant feature transform (UR-SIFT) and PIIFD. The approach is robust against the low content contrast of color images and the large content, appearance, and scale changes between color images and other retinal image modalities such as fluorescein angiography. Because the standard SIFT detector is inefficient for multimodal images, the UR-SIFT algorithm extracts highly stable and distinctive features over the full distribution of location and scale in the images, so that the feature points are sufficient in number and repeatable. Moreover, the PIIFD descriptor is symmetric to contrast, which makes it suitable for robust multimodal image registration. After UR-SIFT feature extraction and PIIFD descriptor generation, an initial cross-matching process is performed, followed by a mismatch elimination algorithm. Our dataset consists of 120 pairs of multimodal retinal images. Experimental results show that UR-SIFT-PIIFD outperforms Harris-PIIFD and similar algorithms in terms of efficiency and positional accuracy.
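
The pipeline outlined in the abstract (feature extraction, descriptor generation, initial cross-matching, and mismatch elimination) can be sketched with off-the-shelf tools. The snippet below is a minimal illustration in Python/OpenCV, not the authors' implementation: UR-SIFT and PIIFD are not available in OpenCV, so standard SIFT detection and description stand in for them, and RANSAC model fitting stands in for the paper's mismatch elimination step.

import cv2
import numpy as np

def register_pair(fixed_path, moving_path):
    # Load the two retinal images (e.g., color fundus vs. fluorescein angiography).
    fixed = cv2.imread(fixed_path, cv2.IMREAD_GRAYSCALE)
    moving = cv2.imread(moving_path, cv2.IMREAD_GRAYSCALE)

    # 1. Feature extraction and description.
    #    Stand-in: standard SIFT instead of UR-SIFT/PIIFD (neither is in OpenCV).
    sift = cv2.SIFT_create(nfeatures=2000)
    kp_f, des_f = sift.detectAndCompute(fixed, None)
    kp_m, des_m = sift.detectAndCompute(moving, None)

    # 2. Initial cross-matching: keep only mutually nearest descriptor pairs.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des_m, des_f)

    # 3. Mismatch elimination: fit a global transform with RANSAC and drop
    #    correspondences that disagree with it (the paper uses its own scheme).
    src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # 4. Warp the moving image into the fixed image's coordinate frame.
    h, w = fixed.shape
    registered = cv2.warpPerspective(moving, H, (w, h))
    return registered, H, int(inlier_mask.sum())

In the paper itself, UR-SIFT enforces a uniform spatial and scale distribution of keypoints and PIIFD provides contrast-symmetric descriptors; the sketch above only mirrors the overall control flow of the approach.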
