《电子与信息学报》 (Journal of Electronics & Information Technology)

A Retinal Vessel Segmentation Method Based on Efficient Fusion of Hybrid Features

         

Abstract

Applying machine learning to retinal vessel segmentation has become a trend, yet choosing which features best distinguish vessel pixels from non-vessel pixels remains an open question. Treating vessel and non-vessel pixels as a binary classification problem, this paper proposes a hybrid 5D feature vector for each pixel so that retinal vessels can be segmented from the background simply and quickly. The 5D feature vector consists of Contrast Limited Adaptive Histogram Equalization (CLAHE), the Gaussian matched filter, the Hessian matrix transform, the morphological bottom-hat transform, and the Bar-selective Combination Of Shifted FIlter REsponses (B-COSFIRE). The fused features are fed into a Support Vector Machine (SVM) classifier to train the required model. The proposed method is evaluated on two publicly available datasets, DRIVE and STARE, using the standard metrics Se, Sp, Acc, Ppv, Npv, and F1-measure; the average classification accuracies reach 0.9573 on DRIVE and 0.9575 on STARE. The results show that the fusion method outperforms B-COSFIRE alone as well as other recently proposed feature-fusion methods.
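For illustration, the following is a minimal Python sketch of the pixel-wise 5D feature fusion and SVM training described above, assuming OpenCV, scikit-image, and scikit-learn are available. The functions compute_matched_filter and compute_bcosfire are hypothetical placeholders for the Gaussian matched-filter and B-COSFIRE responses (not provided by these libraries), and the linear SVM is a stand-in since the abstract does not specify kernel or parameters.

```python
# Sketch only: pixel-wise 5D feature fusion for vessel/background classification.
# Assumes green_channel is the uint8 green channel of a fundus image and
# vessel_mask is the binary ground-truth segmentation of the same size.
import numpy as np
import cv2
from skimage.filters import frangi                      # Hessian-based vesselness response
from skimage.morphology import black_tophat, disk       # morphological bottom-hat transform
from sklearn.svm import LinearSVC

def build_feature_stack(green_channel, compute_matched_filter, compute_bcosfire):
    """Stack the five per-pixel responses into an (H, W, 5) feature map."""
    # 1) CLAHE-enhanced intensity
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    f_clahe = clahe.apply(green_channel).astype(np.float32)

    # 2) Gaussian matched-filter response (placeholder implementation supplied by caller)
    f_matched = compute_matched_filter(green_channel)

    # 3) Hessian matrix (vesselness) response
    f_hessian = frangi(green_channel.astype(np.float32))

    # 4) Morphological bottom-hat (black top-hat) response
    f_tophat = black_tophat(green_channel, disk(8)).astype(np.float32)

    # 5) B-COSFIRE response (placeholder implementation supplied by caller)
    f_bcosfire = compute_bcosfire(green_channel)

    return np.dstack([f_clahe, f_matched, f_hessian, f_tophat, f_bcosfire])

def train_vessel_svm(feature_stack, vessel_mask):
    """Train an SVM treating every pixel as one 5D sample (vessel vs. background)."""
    X = feature_stack.reshape(-1, feature_stack.shape[-1])
    y = vessel_mask.reshape(-1).astype(np.int32)
    clf = LinearSVC(C=1.0, max_iter=10000)   # linear SVM used here purely for the sketch
    clf.fit(X, y)
    return clf
```

At test time, the same five responses would be computed for a new fundus image and the trained classifier applied to each pixel's 5D vector to produce the binary vessel map.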
