International Journal of Computer Vision

Neither Global Nor Local: Regularized Patch-Based Representation for Single Sample Per Person Face Recognition



Abstract

This paper presents a regularized patch-based representation for single sample per person face recognition. We represent each image by a collection of patches and simultaneously seek their sparse representations over the gallery image patches and the intra-class variance dictionaries. For the reconstruction coefficients of all patches from the same image, we impose a group sparsity constraint on the coefficients corresponding to the gallery patches and a sparsity constraint on the coefficients corresponding to the intra-class variance dictionaries. This formulation harvests the advantages of both patch-based and global image representations: it suppresses the side effects of patches severely corrupted by facial variations, while encouraging the less discriminative patches to be reconstructed from the gallery patches of the correct person. Moreover, instead of using manually designed intra-class variance dictionaries, we propose to learn them, which not only greatly accelerates prediction on probe images but also improves recognition accuracy in the single sample per person scenario. Experimental results on the AR, Extended Yale B, CMU-PIE, and LFW datasets show that our method outperforms sparse-coding-based face recognition methods as well as several methods specially designed for the single sample per person setting, achieving the best performance. These encouraging results demonstrate the effectiveness of the regularized patch-based representation for single sample per person face recognition.
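The coding step described above can be viewed as a composite sparse-coding problem: each probe patch is reconstructed over the concatenation of the gallery patches and a variance dictionary, with a group (ℓ2) penalty on each subject's gallery coefficients and an ℓ1 penalty on the variance coefficients. The following is a minimal single-patch sketch using proximal gradient descent (ISTA), assuming NumPy; the function names, dictionaries, and λ values are illustrative, and this is not the authors' implementation (in particular, the paper couples the group penalty across all patches of one image, which this per-patch sketch omits):

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise proximal operator of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def group_soft_threshold(x, t):
    """Proximal operator of the group (l2-norm) penalty: shrink the whole block."""
    n = np.linalg.norm(x)
    return x * max(1.0 - t / n, 0.0) if n > 0 else x

def patch_code(y, G, D, groups, lam1=0.05, lam2=0.05, n_iter=300):
    """Code one probe patch y over [gallery patches G | variance dictionary D].

    groups: list of column-index arrays, one per gallery subject.
    Solves  min_x 0.5 * ||y - [G D] x||^2
                + lam1 * sum_g ||x_g||_2  (group sparsity, gallery part)
                + lam2 * ||x_D||_1        (sparsity, variance part)
    by proximal gradient descent (ISTA).
    """
    A = np.hstack([G, D])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    kG = G.shape[1]
    for _ in range(n_iter):
        x = x - A.T @ (A @ x - y) / L      # gradient step on the data term
        for g in groups:                   # group shrinkage on gallery coefficients
            x[g] = group_soft_threshold(x[g], lam1 / L)
        x[kG:] = soft_threshold(x[kG:], lam2 / L)  # l1 shrinkage on variance part
    return x[:kG], x[kG:]
```

Classification can then assign the probe to the subject whose gallery group gives the smallest reconstruction residual over the patch (and, summed over all patches of the image, over the whole probe), which mirrors the residual-based decision rule common in sparse-representation face recognition.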
