
Learning Invariant Deep Representation for NIR-VIS Face Recognition



Abstract

Visual versus near infrared (VIS-NIR) face recognition is still a challenging heterogeneous task due to the large appearance difference between the VIS and NIR modalities. This paper presents a deep convolutional network approach that uses a single network to map both NIR and VIS images into a compact Euclidean space. The low-level layers of this network are trained only on large-scale VIS data, and each convolutional layer is implemented with the simplest case of the maxout operator. The high-level layer is divided into two orthogonal subspaces that contain modality-invariant identity information and modality-variant spectrum information, respectively. Our joint formulation leads to an alternating minimization approach for learning the deep representation at training time and efficient computation for heterogeneous data at test time. Experimental evaluations show that our method achieves a 94% verification rate at FAR=0.1% on the challenging CASIA NIR-VIS 2.0 face recognition dataset. Compared with state-of-the-art methods, it reduces the error rate by 58% with only a compact 64-D representation.
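The two key ingredients in the abstract — the simplest case of the maxout operator (elementwise max over a pair of feature maps) and the split of the top-layer representation into two orthogonal subspaces — can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the function names, dimensions, and the use of a random orthogonal projection are illustrative assumptions only.

```python
import numpy as np

def maxout_pair(x):
    """Simplest maxout: split the channel dimension into two halves
    and take the elementwise maximum (halves the dimensionality)."""
    a, b = np.split(x, 2, axis=-1)
    return np.maximum(a, b)

def split_subspaces(features, W):
    """Project features with an orthogonal matrix W, then split the
    result into two orthogonal subspaces: one half for (hypothetical)
    modality-invariant identity information, one half for
    modality-variant spectrum information."""
    z = features @ W
    d = z.shape[-1] // 2
    return z[..., :d], z[..., d:]

# Toy example (dimensions chosen for illustration only):
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 128))   # pre-maxout activations
h = maxout_pair(x)                  # 64-D after maxout

# A random orthogonal matrix via QR decomposition stands in for the
# learned orthogonal projection in the paper.
W, _ = np.linalg.qr(rng.standard_normal((64, 64)))
identity_part, spectrum_part = split_subspaces(h, W)
```

Because `W` is orthogonal, the two halves of the projected vector lie in mutually orthogonal subspaces, which is what lets the identity component be compared across NIR and VIS inputs while the spectrum component absorbs modality-specific variation.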


