Expert Systems with Applications

FRED-Net: Fully residual encoder-decoder network for accurate iris segmentation



Abstract

Iris recognition is now developed enough to recognize a person from a distance. The process of iris segmentation plays a vital role in maintaining the accuracy of iris-based recognition systems by limiting the errors introduced at this stage. However, its performance is affected by non-ideal situations created by environmental light noise and user non-cooperation. Existing local feature-based segmentation methods are unable to find the true iris boundary in these non-ideal situations, and the error created at the segmentation stage propagates to all subsequent stages, which results in reduced accuracy and reliability. In addition, it is necessary to segment the true iris boundary without the extra cost of denoising as a preprocessing step. To overcome these challenging issues in iris segmentation, a deep learning-based fully residual encoder-decoder network (FRED-Net) is proposed to determine the true iris region with the flow of high-frequency information from the preceding layers via residual skip connections. The four main contributions and significances of this study are as follows. First, FRED-Net is an end-to-end semantic segmentation network that does not use conventional image processing schemes and has no preprocessing overhead. It is a standalone network in which eyelid, eyelash, and glint detection are not required to obtain the true iris boundary. Second, the proposed FRED-Net is the final structure of a step-by-step development in which, at each step, a complete new variant network for semantic segmentation is created and described in detail. Third, FRED-Net uses residual connectivity between convolutional layers through residual shortcuts in both the encoder and the decoder, which enables high-frequency components to flow through the network and achieves higher accuracy with fewer layers. Fourth, the performance of the proposed FRED-Net is tested on five different iris datasets under visible-light and near-infrared (NIR) environments and on two general road scene segmentation datasets. To achieve fair comparisons with other studies, our trained FRED-Net models, along with the algorithms, are made publicly available through our website (Dongguk FRED-Net Model with Algorithm, accessed on 16 May 2018). The experiments include two datasets for the visible-light environment, the Noisy Iris Challenge Evaluation-Part II (NICE-II) dataset selected from the UBIRIS.v2 database and the Mobile Iris Challenge Evaluation (MICHE-1) dataset, and three datasets for the NIR environment, the Institute of Automation, Chinese Academy of Sciences (CASIA) v4.0 Interval, CASIA v4.0 Distance, and IIT Delhi v1.0 datasets. Moreover, to evaluate the performance of the proposed network in general segmentation, experiments on two well-known road scene segmentation datasets, the Cambridge-driving Labeled Video Database (CamVid) and the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) dataset, are included. The experimental results showed the optimal performance of the proposed FRED-Net on the above-mentioned seven iris and general road scene segmentation datasets. (C) 2019 Elsevier Ltd. All rights reserved.
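
To illustrate the residual encoder-decoder idea described in the abstract, the following is a minimal sketch of such a network written in PyTorch. It is not the authors' released FRED-Net: the framework choice, depth, channel widths, and all class and variable names are assumptions made purely for illustration of residual shortcuts inside the encoder and decoder plus an encoder-to-decoder skip connection.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Two 3x3 convolutions plus a shortcut, so high-frequency detail can bypass the block.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection when the channel count changes, identity otherwise.
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.shortcut(x))

class TinyResidualEncoderDecoder(nn.Module):
    # Encoder downsamples, decoder upsamples; residual blocks are used on both sides,
    # and an encoder feature map is added back into the decoder as a skip connection.
    def __init__(self, num_classes=2):  # e.g., iris vs. background
        super().__init__()
        self.enc1 = ResidualBlock(3, 32)
        self.enc2 = ResidualBlock(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec2 = ResidualBlock(64, 32)
        self.dec1 = ResidualBlock(32, 32)
        self.head = nn.Conv2d(32, num_classes, 1)  # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)                # full resolution, 32 channels
        e2 = self.enc2(self.pool(e1))    # half resolution, 64 channels
        d2 = self.up(self.dec2(e2))      # back to full resolution, 32 channels
        d1 = self.dec1(d2 + e1)          # skip connection from encoder to decoder
        return self.head(d1)             # logits of shape (N, num_classes, H, W)

if __name__ == "__main__":
    logits = TinyResidualEncoderDecoder()(torch.randn(1, 3, 128, 128))
    print(logits.shape)  # expected: torch.Size([1, 2, 128, 128])

In this sketch, the shortcut inside each block and the addition of the encoder feature map into the decoder are the two places where, as the abstract notes, high-frequency information can flow forward instead of being lost to repeated convolution and pooling.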

