International Conference on Information Processing in Medical Imaging

A~3DSegNet: Anatomy-Aware Artifact Disentanglement and Segmentation Network for Unpaired Segmentation, Artifact Reduction, and Modality Translation



Abstract

Spinal surgery planning necessitates automatic segmentation of vertebrae in cone-beam computed tomography (CBCT), an intraoperative imaging modality widely used in interventions. However, CBCT images are of low quality and artifact-laden due to noise, poor tissue contrast, and the presence of metallic objects, making vertebra segmentation, even manually, a demanding task. In contrast, there exists a wealth of artifact-free, high-quality CT images with vertebra annotations. This motivates us to build a CBCT vertebra segmentation model using unpaired CT images with annotations. To overcome the domain and artifact gaps between CBCT and CT, it is necessary to address the three heterogeneous tasks of vertebra segmentation, artifact reduction, and modality translation all together. To this end, we propose a novel anatomy-aware artifact disentanglement and segmentation network (A~3DSegNet) that intensively leverages knowledge sharing across these three tasks to promote learning. Specifically, it takes a random pair of CBCT and CT images as input and manipulates the synthesis and segmentation via different decoding combinations from the disentangled latent layers. Then, by proposing various forms of consistency among the synthesized images and among the segmented vertebrae, learning is achieved without paired (i.e., anatomically identical) data. Finally, we stack 2D slices together and build 3D networks on top to obtain the final 3D segmentation result. Extensive experiments on a large number of clinical CBCT (21,364) and CT (17,089) images show that the proposed A~3DSegNet performs significantly better than state-of-the-art competing methods trained independently for each task and, remarkably, achieves an average Dice coefficient of 0.926 for unpaired 3D CBCT vertebra segmentation.
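The core idea of the abstract's "different decoding combinations from the disentangled latent layers" can be illustrated with a toy sketch. The following is not the paper's implementation: the linear encoders/decoder, the dimensions, and all variable names are hypothetical stand-ins chosen only to show how one anatomy (content) code and one artifact code can be recombined to yield reconstruction, artifact reduction, modality translation, and synthetic corruption from a single unpaired CBCT/CT pair.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes (not from the paper): flattened 8x8 slices,
# a 16-dim anatomy code, and a 4-dim artifact code.
D, DC, DA = 64, 16, 4

# Linear stand-ins for the content encoder, artifact encoder, and shared decoder.
Ec = rng.normal(size=(DC, D)) * 0.1        # anatomy (content) encoder
Ea = rng.normal(size=(DA, D)) * 0.1        # artifact encoder
G = rng.normal(size=(D, DC + DA)) * 0.1    # decoder over concatenated codes

def decode(content, artifact=None):
    """Decode a content code with an optional artifact code (zeros = artifact-free)."""
    if artifact is None:
        artifact = np.zeros(DA)
    return G @ np.concatenate([content, artifact])

x_cbct = rng.normal(size=D)  # toy artifact-laden CBCT slice
x_ct = rng.normal(size=D)    # toy clean CT slice (unpaired with x_cbct)

c_cb, a_cb = Ec @ x_cbct, Ea @ x_cbct
c_ct = Ec @ x_ct

# Four decoding combinations from the disentangled codes:
cbct_recon = decode(c_cb, a_cb)    # CBCT self-reconstruction
cbct_clean = decode(c_cb)          # artifact reduction / CBCT-to-CT translation
ct_recon = decode(c_ct)            # CT self-reconstruction
ct_corrupted = decode(c_ct, a_cb)  # CT plus CBCT artifact (synthetic corruption)

# Unpaired training would penalize, e.g., self-reconstruction error and
# consistency of the anatomy code across the synthesized images.
recon_loss = np.mean((cbct_recon - x_cbct) ** 2)
anatomy_gap = np.linalg.norm(Ec @ cbct_clean - c_cb)
```

In the actual network these would be convolutional encoders/decoders plus a segmenter operating on the anatomy code, with the consistency terms defined among the synthesized images and among the segmented vertebrae as described above.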
