Annual International Conference of the IEEE Engineering in Medicine and Biology Society

Learning-based multi-modal rigid image registration by using Bhattacharyya distances



Abstract

Multi-modal image registration is an important technique in medical image processing and analysis. To improve the robustness and accuracy of multi-modal rigid image registration, a novel learning-based dissimilarity function is proposed in this paper. This dissimilarity function measures, with Bhattacharyya distances, the dissimilarity between the joint intensity distribution of the test image pair and an expected intensity distribution learned from a registered image pair. The registration process then aims to minimize this dissimilarity function. Eight hundred randomized CT-T1 registrations were performed and evaluated by the Retrospective Image Registration Evaluation (RIRE) project. The experimental results demonstrate that the proposed method achieves higher robustness and accuracy than a closely related approach and a state-of-the-art method.
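
To make the dissimilarity concrete, the following is a minimal sketch (not the authors' implementation) of how a joint intensity distribution and the Bhattacharyya distance between two such distributions might be computed with NumPy; the function names, bin count, and histogram-based estimation are illustrative assumptions.

```python
import numpy as np

def joint_distribution(img_a, img_b, bins=32):
    # Estimate the joint intensity distribution of two aligned images
    # via a normalized 2-D histogram (assumed estimator, not from the paper).
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    return hist / hist.sum()

def bhattacharyya_distance(p, q, eps=1e-12):
    # Bhattacharyya distance between two discrete distributions p and q:
    # D = -ln( sum_x sqrt(p(x) * q(x)) ), which is 0 when p == q.
    bc = np.sum(np.sqrt(p * q))
    return -np.log(max(bc, eps))

# Hypothetical usage: p_expected would be learned from a pre-registered
# (training) image pair, while the test pair's joint distribution is
# re-estimated for each candidate rigid transform; registration searches
# for the transform parameters that minimize the distance, e.g.
# d = bhattacharyya_distance(joint_distribution(fixed, warped_moving), p_expected)
```

In this reading, the learned expected distribution acts as a template: the closer the test pair's joint intensity distribution is to it, the smaller the Bhattacharyya distance, so minimizing the dissimilarity drives the images toward the learned (registered) alignment.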
