SPIE Medical Imaging Conference (Society of Photo-Optical Instrumentation Engineers)

Two-level Training of a 3D U-Net for Accurate Segmentation of the Intra-cochlear Anatomy in Head CTs with Limited Ground Truth Training Data

Abstract

Cochlear implants (CIs) use electrode arrays that are surgically inserted into the cochlea to treat patients with hearing loss. For CI recipients, sound bypasses the natural transduction mechanism and directly stimulates the neural regions, thus creating a sense of hearing. Post-operatively, CIs need to be programmed. Traditionally, this is done by an audiologist who is blind to the positions of the electrodes relative to the cochlea and relies only on the subjective response of the patient. Multiple programming sessions are usually needed, which can take a frustratingly long time. We have developed an image-guided cochlear implant programming (IGCIP) system to facilitate the process. In IGCIP, we segment the intra-cochlear anatomy and localize the electrode arrays in the patient's head CT image. By utilizing their spatial relationship, we can suggest programming settings that can significantly improve hearing outcomes. To segment the intra-cochlear anatomy, we use an active shape model (ASM)-based method. Though it produces satisfactory results in most cases, sub-optimal segmentations still occur. As an alternative, herein we explore using a deep learning method to perform the segmentation task. Large image sets with accurate ground truth (in our case, manual delineations) are typically needed to train a deep learning model for segmentation, but no such dataset exists for our application. To tackle this problem, we use segmentations generated by the ASM-based method to pre-train the model and fine-tune it on a small image set for which accurate manual delineation is available. Using this method, we achieve better results than the ASM-based method.
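
The abstract describes a two-level training strategy: pre-train the 3D U-Net on segmentations produced by the ASM-based method, then fine-tune it on the small set with accurate manual delineations. The PyTorch sketch below is a minimal illustration of that idea, assuming generic data loaders; the placeholder network, the `two_level_training` helper, and the epoch counts and learning rates are hypothetical and are not taken from the paper.

```python
# Hedged sketch of the two-level training scheme: stage 1 pre-trains on
# ASM-generated labels, stage 2 fine-tunes on the small manually delineated set.
# The network below is only a stand-in for a full 3D U-Net.
import torch
import torch.nn as nn


class TinySeg3D(nn.Module):
    """Placeholder for the 3D U-Net (a real model would add encoder/decoder
    levels with skip connections)."""
    def __init__(self, in_ch=1, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, n_classes, 1),
        )

    def forward(self, x):
        return self.net(x)


def run_epochs(model, loader, optimizer, loss_fn, epochs, device):
    model.train()
    for _ in range(epochs):
        for image, label in loader:   # image: (B,1,D,H,W); label: (B,D,H,W), dtype long
            image, label = image.to(device), label.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(image), label)
            loss.backward()
            optimizer.step()


def two_level_training(model, asm_loader, manual_loader, device="cpu"):
    loss_fn = nn.CrossEntropyLoss()
    # Level 1: pre-train on the large set labelled by the ASM-based method.
    pre_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    run_epochs(model, asm_loader, pre_opt, loss_fn, epochs=50, device=device)
    # Level 2: fine-tune on the small, accurately hand-delineated set with a
    # lower learning rate so the manual labels refine rather than overwrite
    # what was learned during pre-training.
    ft_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    run_epochs(model, manual_loader, ft_opt, loss_fn, epochs=20, device=device)
    return model
```

In this reading of the approach, the large ASM-labelled set supplies a reasonable initialization, and the smaller fine-tuning learning rate lets the few accurate manual delineations correct systematic ASM errors without discarding the pre-trained weights.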