IEEE Transactions on Medical Imaging

UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation



Abstract

The state-of-the-art models for medical image segmentation are variants of U-Net and fully convolutional networks (FCN). Despite their success, these models have two limitations: (1) their optimal depth is a priori unknown, requiring extensive architecture search or an inefficient ensemble of models of varying depths; and (2) their skip connections impose an unnecessarily restrictive fusion scheme, forcing aggregation only at the same-scale feature maps of the encoder and decoder sub-networks. To overcome these two limitations, we propose UNet++, a new neural architecture for semantic and instance segmentation, by (1) alleviating the unknown network depth with an efficient ensemble of U-Nets of varying depths, which partially share an encoder and co-learn simultaneously using deep supervision; (2) redesigning skip connections to aggregate features of varying semantic scales at the decoder sub-networks, leading to a highly flexible feature fusion scheme; and (3) devising a pruning scheme to accelerate the inference speed of UNet++. We have evaluated UNet++ using six different medical image segmentation datasets, covering multiple imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and electron microscopy (EM), and demonstrating that (1) UNet++ consistently outperforms the baseline models for the task of semantic segmentation across different datasets and backbone architectures; (2) UNet++ enhances segmentation quality of varying-size objects, an improvement over the fixed-depth U-Net; (3) Mask RCNN++ (Mask R-CNN with UNet++ design) outperforms the original Mask R-CNN for the task of instance segmentation; and (4) pruned UNet++ models achieve significant speedup while showing only modest performance degradation. Our implementation and pre-trained models are available at https://github.com/MrGiovanni/UNetPlusPlus.
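The redesigned skip connections described in the abstract can be sketched as a minimal NumPy toy: each decoder node X^{i,j} aggregates all prior same-level nodes plus an upsampled deeper node, and deep supervision yields one full-resolution output per decoder depth (the basis of the pruning scheme). This is an illustrative sketch only; `block` is a stand-in average for the paper's convolution blocks, and all function names here are hypothetical, not the authors' API.

```python
import numpy as np

def up(x):
    # nearest-neighbour 2x upsampling of a (H, W) feature map
    return x.repeat(2, axis=0).repeat(2, axis=1)

def down(x):
    # 2x2 max pooling
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

def block(*inputs):
    # stand-in for a conv block: average the aggregated inputs
    return np.mean(np.stack(inputs), axis=0)

def unetpp(img, depth=3):
    # x[(i, j)] = node X^{i,j}; i = resolution level, j = skip-pathway index
    x = {(0, 0): block(img)}
    for i in range(1, depth + 1):                 # encoder backbone
        x[(i, 0)] = block(down(x[(i - 1, 0)]))
    for j in range(1, depth + 1):                 # nested decoder nodes
        for i in range(depth - j + 1):
            prior = [x[(i, k)] for k in range(j)]  # dense same-level skips
            prior.append(up(x[(i + 1, j - 1)]))    # upsampled deeper node
            x[(i, j)] = block(*prior)
    # deep supervision: one full-resolution output per decoder depth;
    # pruning at inference keeps only nodes needed for the chosen depth
    return [x[(0, j)] for j in range(1, depth + 1)]

outs = unetpp(np.random.rand(16, 16), depth=3)
print([o.shape for o in outs])  # three full-resolution outputs
```

The key contrast with a plain U-Net is visible in the inner loop: instead of a single same-scale skip, node X^{i,j} concatenates every X^{i,0..j-1} along the pathway, so the decoder fuses features of varying semantic scales.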
