International Conference on Digital Image Computing: Techniques and Applications

Strided U-Net Model: Retinal Vessels Segmentation using Dice Loss



Abstract

Accurate segmentation of vessels is an arduous task in medical image analysis, particularly the extraction of vessels from colored retinal fundus images. Many image-processing techniques have been applied to vessel detection, but many vessels are still missed. In this paper, we propose a deep learning method for retinal vessel segmentation based on a convolutional neural network (CNN) trained with the Dice loss function. To our knowledge, we are the first to train a CNN with the Dice loss for extracting vessels from colored retinal images. Pre-processing steps are used to eliminate uneven illumination and make the training process more efficient. We implement the CNN model based on a variational auto-encoder (VAE), a modified version of U-Net. Our main contribution to the CNN implementation is to replace all pooling layers with strided convolutions and deeper layers. The network takes a retinal image as input and generates a segmented vessel map at the same resolution as the input image. The proposed segmentation method showed better performance than existing methods on the most widely used databases, DRIVE and STARE: it achieves a sensitivity of 0.739 with an accuracy of 0.948 on DRIVE, and a sensitivity of 0.748 with an accuracy of 0.947 on STARE.
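The Dice loss mentioned in the abstract measures overlap between the predicted vessel map and the ground-truth mask. The paper does not show its implementation; below is a minimal NumPy sketch of the standard soft Dice loss, with the smoothing constant `eps` as an assumed detail:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2*|P∩T| / (|P| + |T|).

    pred and target are arrays of vessel probabilities / binary labels.
    eps avoids division by zero when both masks are empty (assumed value).
    """
    pred = pred.ravel().astype(float)
    target = target.ravel().astype(float)
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A perfect prediction gives a loss near 0 and a fully disjoint one a loss near 1; unlike per-pixel cross-entropy, the loss is insensitive to the large vessel/background class imbalance in fundus images, which is the usual motivation for using it in segmentation.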
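The architectural change the abstract describes, replacing pooling with strided convolutions, keeps the same downsampling factor but makes the reduction learnable. As an illustration only (the paper's kernel sizes and weights are not given), a stride-2 "valid" convolution in NumPy halves spatial resolution exactly as 2x2 pooling would:

```python
import numpy as np

def strided_conv2d(x, kernel, stride=2):
    """Single-channel 2-D convolution with stride and no padding.

    With stride=2 the output is roughly half the input resolution,
    mirroring the downsampling a pooling layer would perform, but the
    kernel weights are trainable in a real network (here they are fixed).
    """
    kh, kw = kernel.shape
    H, W = x.shape
    out_h = (H - kh) // stride + 1
    out_w = (W - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = (patch * kernel).sum()
    return out
```

For example, a 6x6 input with a 2x2 kernel and stride 2 yields a 3x3 output, the same spatial reduction as 2x2 max pooling, while retaining weights that gradient descent can adapt to vessel structure.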
