Conference on Medical Imaging: Image Processing

Feature-based Retinal Image Registration for Longitudinal Analysis of Patients with Age-related Macular Degeneration


Abstract

Purpose: Spatial alignment of longitudinally acquired retinal images is necessary for the development of image-based metrics that identify structural features associated with disease progression in conditions such as age-related macular degeneration (AMD). This work develops and evaluates a feature-based registration framework for accurate and robust registration of retinal images. Methods: Two feature-based registration approaches were investigated for the alignment of fundus autofluorescence images. The first method used conventional SIFT local feature descriptors to solve for the geometric transformation between two corresponding point sets. The second method used a deep-learning approach with a network architecture mirroring the feature localization and matching process of the conventional method. The methods were validated using clinical images acquired in an ongoing longitudinal study of AMD, comprising 75 patients (145 eyes) with 4-year follow-up imaging. For the deep-learning method, 113 image pairs were used for training (with ground truth provided by manually verified SIFT feature registration) and 20 image pairs were used for testing (with ground truth provided by manual landmark annotation). Results: The conventional method using SIFT features demonstrated a target registration error (mean ± std) of 0.05 ± 0.04 mm, substantially improving on the initial alignment error of 0.34 ± 0.22 mm. The deep-learning method, on the other hand, exhibited an error of 0.10 ± 0.07 mm. While both methods improved upon the initial misalignment, the SIFT method showed the best overall geometric accuracy. However, the deep-learning method exhibited robust performance (error = 0.15 ± 0.09 mm) in the 7% of cases in which the SIFT method failed (error = 3.71 ± 6.36 mm). Conclusion: While both methods demonstrated successful performance, the SIFT method exhibited the best overall geometric accuracy, whereas the deep-learning method was superior in terms of robustness. Achieving accurate and robust registration is essential in large-scale studies investigating the factors underlying retinal disease progression, such as in AMD.
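The SIFT-based pipeline described in the abstract (keypoint detection, descriptor matching, and estimation of a geometric transform between corresponding point sets, evaluated by target registration error on landmarks) can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes OpenCV's SIFT detector, a brute-force matcher with Lowe's ratio test, RANSAC-based partial-affine fitting, and hypothetical file names, landmark arrays, and pixel spacing chosen only for illustration.

```python
import cv2
import numpy as np


def register_sift(fixed, moving, ratio=0.75):
    """Estimate a 2x3 affine transform mapping `moving` onto `fixed` via SIFT.

    Sketch only: detect SIFT keypoints/descriptors, keep matches passing
    Lowe's ratio test, then fit a partial affine transform with RANSAC.
    """
    sift = cv2.SIFT_create()
    kp_f, des_f = sift.detectAndCompute(fixed, None)
    kp_m, des_m = sift.detectAndCompute(moving, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_m, des_f, k=2)
    good = [m for m, n in (p for p in knn if len(p) == 2)
            if m.distance < ratio * n.distance]

    src = np.float32([kp_m[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    A, _inliers = cv2.estimateAffinePartial2D(
        src, dst, method=cv2.RANSAC, ransacReprojThreshold=3.0)
    return A


def target_registration_error(A, landmarks_moving, landmarks_fixed, mm_per_pixel):
    """Mean Euclidean distance (mm) between transformed moving landmarks and
    the corresponding fixed-image landmarks (hypothetical manual annotations)."""
    pts = cv2.transform(landmarks_moving.reshape(-1, 1, 2).astype(np.float32), A)
    err_px = np.linalg.norm(pts.reshape(-1, 2) - landmarks_fixed, axis=1)
    return float(err_px.mean() * mm_per_pixel)


if __name__ == "__main__":
    # Hypothetical fundus autofluorescence image pair and pixel spacing.
    fixed = cv2.imread("faf_baseline.png", cv2.IMREAD_GRAYSCALE)
    moving = cv2.imread("faf_followup.png", cv2.IMREAD_GRAYSCALE)
    A = register_sift(fixed, moving)
    warped = cv2.warpAffine(moving, A, (fixed.shape[1], fixed.shape[0]))
```

A deep-learning alternative, as described above, would replace the hand-crafted detection and matching stages with a learned network while keeping the same transform-estimation and evaluation steps.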
