International Conference on Digital Image Computing: Techniques and Applications

Strided U-Net Model: Retinal Vessels Segmentation using Dice Loss

Abstract

Accurate segmentation of vessels is an arduous task in the analysis of medical images, particularly the extraction of vessels from colored retinal fundus images. Many image processing techniques have been applied to vessel detection, yet many vessels are still missed. In this paper, we propose a deep learning method for retinal vessel segmentation based on a convolutional neural network (CNN) trained with the dice loss function. To our knowledge, we are the first to train a CNN with the dice loss function for extracting vessels from colored retinal images. Pre-processing steps are applied to eliminate uneven illumination and make training more efficient. We implement the CNN model based on a variational auto-encoder (VAE), a modified version of U-Net. Our main contribution in the implementation is the replacement of all pooling layers with strided convolutions and deeper layers. The network takes a retinal image as input and produces a segmented vessel image at the same resolution as the input. The proposed method outperforms existing methods on the most widely used databases, DRIVE and STARE, achieving a sensitivity of 0.739 with an accuracy of 0.948 on DRIVE and a sensitivity of 0.748 with an accuracy of 0.947 on STARE.
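The abstract does not spell out the dice loss itself; below is a minimal sketch of the standard soft dice loss in PyTorch, offered as an assumption about the formulation rather than the authors' exact code:

```python
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft dice loss for binary vessel masks (illustrative sketch).

    pred:   (N, 1, H, W) vessel probabilities in [0, 1] (e.g. sigmoid outputs)
    target: (N, 1, H, W) ground-truth masks with values in {0, 1}
    """
    pred = pred.flatten(start_dim=1)        # (N, H*W)
    target = target.flatten(start_dim=1)    # (N, H*W)
    intersection = (pred * target).sum(dim=1)
    denom = pred.sum(dim=1) + target.sum(dim=1)
    dice = (2.0 * intersection + eps) / (denom + eps)  # per-sample dice coefficient
    return 1.0 - dice.mean()                # minimizing this maximizes overlap
```

Training would then minimize dice_loss(model(image), mask) instead of pixel-wise cross-entropy, which counters the heavy class imbalance between thin vessels and the background.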
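The pre-processing steps are likewise unspecified. A common recipe for correcting uneven illumination in fundus images, sketched here as an assumption and not as the paper's pipeline, is green-channel extraction followed by CLAHE with OpenCV:

```python
import cv2
import numpy as np

def preprocess_fundus(path: str) -> np.ndarray:
    """Hypothetical illumination correction for a retinal fundus image."""
    bgr = cv2.imread(path)                          # OpenCV loads images as BGR
    green = bgr[:, :, 1]                            # vessels contrast best in the green channel
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(green)                  # local histogram equalization
    return equalized.astype(np.float32) / 255.0     # scale to [0, 1] for the CNN
```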

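As for the "strided" architecture, a stride-2 convolution can take over the downsampling that max pooling would otherwise perform in a U-Net encoder stage. The following sketch shows the idea; channel counts and layer choices are illustrative assumptions, not the paper's exact configuration:

```python
import torch.nn as nn

def down_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """One encoder stage where a stride-2 convolution replaces MaxPool2d(2)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),  # halves H and W
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),           # extra depth
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```

Unlike pooling, the strided convolution's downsampling weights are learned, so the network can decide which fine vessel detail to keep while reducing resolution.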
