Journal of Modern Optics

Generating virtual training samples for sparse representation of face images and face recognition



Abstract

There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illumination, different expressions and poses, varying ornaments, or even changes in mental state. The limited available training samples cannot sufficiently convey these possible changes in the training phase, and this has become one of the obstacles to improving face recognition accuracy. In this article, we treat the multiplication of two images of a face as a virtual face image to expand the training set, and we devise a representation-based method to perform face recognition. The generated virtual samples reflect possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we strengthen the facial contour features and greatly suppress noise, so more of the essential facial information is retained. In addition, the uncertainty of the training data is reduced as the number of training samples increases, which benefits the training phase. The devised representation-based classifier uses both the original and the newly generated samples to perform the classification. In the classification phase, we first determine the K nearest training samples for the current test sample by computing the Euclidean distances between the test sample and the training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
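The abstract outlines a two-step pipeline: virtual sample generation by multiplying same-subject images, followed by a K-nearest representation-based classification. The snippet below is a minimal NumPy sketch of that pipeline, not the authors' implementation. The element-wise multiplication over all same-subject image pairs, the scaling of pixel values to [0, 1], the function names, the value k=30, and the ridge regularization term `reg` are all illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def generate_virtual_samples(images, labels):
    """Create virtual samples by element-wise multiplication of pairs of
    images belonging to the same subject (pixel values assumed in [0, 1])."""
    virtual_imgs, virtual_labels = [], []
    for subject in np.unique(labels):
        idx = np.flatnonzero(labels == subject)
        for i in range(len(idx)):
            for j in range(i + 1, len(idx)):
                virtual_imgs.append(images[idx[i]] * images[idx[j]])
                virtual_labels.append(subject)
    return np.array(virtual_imgs), np.array(virtual_labels)

def classify(test_vec, train_vecs, train_labels, k=30, reg=0.01):
    """Representation-based classification with the K nearest training samples."""
    # 1. Select the K training samples closest to the test sample (Euclidean distance).
    dists = np.linalg.norm(train_vecs - test_vec, axis=1)
    nearest = np.argsort(dists)[:k]
    X, y = train_vecs[nearest].T, train_labels[nearest]          # X has shape (d, k)
    # 2. Represent the test sample as a linear combination of the selected samples
    #    (ridge-regularized least squares; the regularization is an assumption).
    coeffs = np.linalg.solve(X.T @ X + reg * np.eye(k), X.T @ test_vec)
    # 3. Assign the class whose selected samples give the smallest reconstruction residual.
    residuals = {}
    for subject in np.unique(y):
        mask = (y == subject)
        residuals[subject] = np.linalg.norm(test_vec - X[:, mask] @ coeffs[mask])
    return min(residuals, key=residuals.get)
```

In this sketch, the vectorized original and virtual samples would be stacked together into `train_vecs` (with their labels concatenated) before calling `classify`, reflecting the abstract's statement that the classifier uses both the original and the newly generated samples.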
