Medical Image Analysis

'Squeeze & excite' guided few-shot segmentation of volumetric images



Abstract

Deep neural networks enable highly accurate image segmentation, but require large amounts of manually annotated data for supervised training. Few-shot learning aims to address this shortcoming by learning a new class from a few annotated support examples. We introduce a novel few-shot framework for the segmentation of volumetric medical images with only a few annotated slices. Compared to other related works in computer vision, the major challenges are the absence of pre-trained networks and the volumetric nature of medical scans. We address these challenges by proposing a new architecture for few-shot segmentation that incorporates 'squeeze & excite' blocks. Our two-armed architecture consists of a conditioner arm, which processes the annotated support input and generates a task-specific representation, and a segmenter arm, which uses this representation to segment the new query image. To facilitate efficient interaction between the conditioner and the segmenter arm, we propose to use 'channel squeeze & spatial excitation' blocks, a lightweight computational module that enables heavy interaction between the two arms with a negligible increase in model complexity. This contribution allows us to perform image segmentation without relying on a pre-trained model, which is generally unavailable for medical scans. Furthermore, we propose an efficient strategy for volumetric segmentation by optimally pairing a few slices of the support volume to all the slices of the query volume. We perform experiments for organ segmentation on whole-body contrast-enhanced CT scans from the Visceral Dataset. Our proposed model outperforms multiple baselines and existing approaches with respect to segmentation accuracy by a significant margin. The source code is available at https://github.com/abhi4ssj/few-shot-segmentation. (C) 2019 Elsevier B.V. All rights reserved.
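To illustrate the 'channel squeeze & spatial excitation' interaction described in the abstract, a minimal PyTorch-style sketch is given below. The class name, channel counts, and the assumption that the conditioner and segmenter feature maps share spatial dimensions are illustrative only and are not taken from the authors' released code: the conditioner's feature map is squeezed along the channel axis into a single spatial map, which then gates the segmenter's features.

```python
import torch
import torch.nn as nn


class ChannelSqueezeSpatialExcite(nn.Module):
    """Hypothetical sketch of an sSE-style interaction block: conditioner
    features are squeezed along the channel axis into one spatial map,
    which then recalibrates the segmenter features at the same scale."""

    def __init__(self, conditioner_channels: int):
        super().__init__()
        # 1x1 conv collapses the conditioner's channels into a single map.
        self.squeeze = nn.Conv2d(conditioner_channels, 1, kernel_size=1)

    def forward(self, segmenter_feat: torch.Tensor,
                conditioner_feat: torch.Tensor) -> torch.Tensor:
        # Spatial excitation map in [0, 1], shape (N, 1, H, W).
        excitation = torch.sigmoid(self.squeeze(conditioner_feat))
        # Broadcast-multiply to gate every segmenter channel spatially.
        return segmenter_feat * excitation


# Toy usage: a 64-channel segmenter feature map gated by a 32-channel
# conditioner feature map of the same spatial size.
if __name__ == "__main__":
    sse = ChannelSqueezeSpatialExcite(conditioner_channels=32)
    seg = torch.randn(1, 64, 128, 128)
    cond = torch.randn(1, 32, 128, 128)
    print(sse(seg, cond).shape)  # torch.Size([1, 64, 128, 128])
```

Because the excitation is a single spatial map rather than a full feature tensor, this form of interaction adds very few parameters, which is consistent with the abstract's claim of negligible added model complexity.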
