Enhanced Deep Learning Architectures for Face Liveness Detection for Static and Video Sequences

Abstract

Face liveness detection is a critical preprocessing step in face recognition for avoiding face spoofing attacks, where an impostor can impersonate a valid user for authentication. While considerable research has been recently done in improving the accuracy of face liveness detection, the best current approaches use a two-step process of first applying nonlinear anisotropic diffusion to the incoming image and then using a deep network for final liveness decision. Such an approach is not viable for real-time face liveness detection. We develop two end-to-end real-time solutions where nonlinear anisotropic diffusion based on an additive operator splitting scheme is first applied to an incoming static image, which enhances the edges and surface texture, and preserves the boundary locations in the real image. The diffused image is then forwarded to a pre-trained Specialized Convolutional Neural Network (SCNN) and the Inception network version 4, which identify the complex and deep features for face liveness classification. We evaluate the performance of our integrated approach using the SCNN and Inception v4 on the Replay-Attack dataset and Replay-Mobile dataset. The entire architecture is created in such a manner that, once trained, the face liveness detection can be accomplished in real-time. We achieve promising results of 96.03% and 96.21% face liveness detection accuracy with the SCNN, and 94.77% and 95.53% accuracy with the Inception v4, on the Replay-Attack and Replay-Mobile datasets, respectively. We also develop a novel deep architecture for face liveness detection on video frames that uses the diffusion of images followed by a deep Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) to classify the video sequence as real or fake. Even though the use of CNN followed by LSTM is not new, combining it with diffusion (that has proven to be the best approach for single image liveness detection) is novel. Performance evaluation of our architecture on the Replay-Attack dataset gave 98.71% test accuracy and 2.77% Half Total Error Rate (HTER), and on the Replay-Mobile dataset gave 95.41% accuracy and 5.28% HTER.
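
The static-image pipeline first smooths each face image with nonlinear anisotropic diffusion solved semi-implicitly via an additive operator splitting (AOS) scheme, and only then passes the diffused image to the CNN. Below is a minimal NumPy/SciPy sketch of one AOS diffusion step using a Perona-Malik-type diffusivity; the step size `tau`, contrast parameter `lam`, and the diffusivity function are illustrative assumptions and not the parameters reported in the paper.

```python
import numpy as np
from scipy.linalg import solve_banded

def aos_diffusion_step(u, tau=5.0, lam=0.1):
    """One semi-implicit AOS step of nonlinear anisotropic diffusion.

    Sketch only: expects a grayscale image scaled to [0, 1]; tau, lam and the
    Perona-Malik diffusivity are placeholder choices, not the paper's settings.
    """
    u = np.asarray(u, dtype=np.float64)
    # Edge-stopping diffusivity g = 1 / (1 + |grad u|^2 / lam^2)
    gy, gx = np.gradient(u)
    g = 1.0 / (1.0 + (gx ** 2 + gy ** 2) / lam ** 2)

    def implicit_1d(img, diff):
        """Solve the tridiagonal system (I - 2*tau*A) x = img along each row."""
        out = np.empty_like(img)
        n = img.shape[1]
        for r in range(img.shape[0]):
            w = 0.5 * (diff[r, :-1] + diff[r, 1:])   # conductance between neighbours
            diag = np.ones(n)
            diag[:-1] += 2.0 * tau * w
            diag[1:] += 2.0 * tau * w
            upper = np.zeros(n)
            upper[1:] = -2.0 * tau * w               # superdiagonal (solve_banded layout)
            lower = np.zeros(n)
            lower[:-1] = -2.0 * tau * w              # subdiagonal
            out[r] = solve_banded((1, 1), np.vstack([upper, diag, lower]), img[r])
        return out

    # AOS: average the two 1-D implicit solves (along rows and along columns)
    return 0.5 * (implicit_1d(u, g) + implicit_1d(u.T, g.T).T)
```

The semi-implicit solve is what makes the AOS scheme stable for large step sizes, so only a few iterations are needed before handing the diffused image to the classifier.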
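For video sequences, the abstract describes diffused frames being fed through a CNN whose per-frame features are then aggregated by an LSTM for the real/fake decision. The Keras sketch below illustrates that CNN-LSTM layout; the layer widths, frame count, and input resolution are placeholder assumptions rather than the architecture reported in the paper.

```python
from tensorflow.keras import layers, models

def build_cnn_lstm(frames=24, size=128, channels=3):
    """Minimal CNN-LSTM sketch for classifying a video clip as real or spoofed.

    Layer sizes, frame count and input resolution are illustrative assumptions.
    """
    # Per-frame feature extractor, applied to each (diffused) frame
    frame_cnn = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(size, size, channels)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
    ])

    model = models.Sequential([
        # Share the same CNN weights across every frame of the clip
        layers.TimeDistributed(frame_cnn, input_shape=(frames, size, size, channels)),
        layers.LSTM(64),                        # temporal aggregation over frames
        layers.Dense(1, activation="sigmoid"),  # 1 = live face, 0 = spoof
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```

Wrapping the per-frame CNN in `TimeDistributed` keeps a single set of convolutional weights across the sequence, so the LSTM only has to model the temporal cues that distinguish a live face from a replayed one.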

Bibliographic Information

  • Journal: Entropy
  • Authors: Ranjana Koshy; Ausif Mahmood
  • Affiliation:
  • Year (Volume), Issue: 2020 (22), 10
  • Pages: 1186
  • Total pages: 28
  • Format: PDF
  • Language:
  • CLC Classification:
  • Keywords: face liveness detection; diffusion; SCNN; Inception v4; CNN-LSTM; Replay-Attack dataset; Replay-Mobile dataset
