Conference on Medical Imaging 2008: Computer-Aided Diagnosis; February 19-21, 2008; San Diego, CA (US)

Improving supervised classification accuracy using non-rigid multimodal image registration: Detecting Prostate Cancer



Abstract

Computer-aided diagnosis (CAD) systems for the detection of cancer in medical images require precise labeling of training data. For magnetic resonance imaging (MRI) of the prostate, training labels define the spatial extent of prostate cancer (CaP); the most common source for these labels is expert segmentation. When ancillary data such as whole mount histology (WMH) sections, which provide the gold standard for cancer ground truth, are available, the manual labeling of CaP can be improved by referencing WMH. However, manual segmentation is error-prone, time-consuming, and not reproducible. Therefore, we present the use of multimodal image registration to automatically and accurately transcribe CaP from histology onto MRI following alignment of the two modalities, in order to improve the quality of training data and hence classifier performance. We quantitatively demonstrate the superiority of this registration-based methodology by comparing its results to the manual CaP annotations of expert radiologists. Five supervised CAD classifiers were trained using labels for CaP extent on MRI obtained from the expert and from 4 different registration techniques. Two of the registration methods were affine schemes: one based on maximization of mutual information (MI), and the other a method we previously developed, Combined Feature Ensemble Mutual Information (COFEMI), which incorporates high-order statistical features for robust multimodal registration. Two non-rigid schemes were obtained by following each affine registration method with an elastic deformation step using thin-plate splines (TPS). In the absence of definitive ground truth for CaP extent on MRI, classifier accuracy was evaluated against 7 ground-truth surrogates obtained from different combinations of the expert and registration segmentations.
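The affine registration step described above drives the alignment by maximizing mutual information between the histology and MRI intensities. As a minimal sketch of the metric itself (not the authors' COFEMI implementation — the function name, bin count, and joint-histogram estimator are illustrative assumptions), MI can be estimated from a joint histogram of the two images:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate mutual information (in nats) between two images of equal
    shape from their joint intensity histogram. Illustrative sketch only:
    a production registration metric would use smoothed/Parzen estimates."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                 # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)     # marginal p(x), shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)     # marginal p(y), shape (1, bins)
    nz = pxy > 0                            # avoid log(0) on empty bins
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

An affine optimizer would repeatedly warp the moving image, re-evaluate this metric against the fixed image, and step toward the transform parameters that maximize it; a perfectly registered pair scores higher than a misaligned one.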
For 26 multimodal MRI-WMH image pairs, all four registration methods produced a higher area under the receiver operating characteristic (ROC) curve than that obtained from expert annotation. These results suggest that, in the presence of additional multimodal image information, one can obtain more accurate object annotations than are achievable via expert delineation, despite vast differences between modalities that hinder image registration.
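The classifier comparison above rests on the area under the ROC curve computed per-voxel against each ground-truth surrogate. A minimal sketch of that evaluation (not the paper's evaluation code; the rank-sum formulation and the absence of tie handling are simplifying assumptions) follows:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) statistic.
    `scores` are per-voxel classifier outputs; `labels` are 0/1 ground truth.
    Assumes no tied scores (no midrank correction) -- illustrative only."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)   # rank 1 = lowest score
    pos = labels == 1
    n_pos, n_neg = int(pos.sum()), int((~pos).sum())
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

A perfectly ranked classifier (every cancer voxel scored above every benign voxel) yields AUC = 1.0; chance-level ranking yields about 0.5, which is how a registration-derived label set can be shown to train a better classifier than expert annotation.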
