
Face recognition based on virtual frontal view generation using LVTM with local patches clustering



Abstract

One of the major difficulties encountered by face recognition is the varying pose caused by in-depth rotations. The intra-person appearance differences caused by rotation are often larger than the inter-person differences, which makes traditional face recognition methods such as eigenfaces infeasible. This paper presents a framework for face recognition across pose based on virtual frontal view generation using the Local View Transition Model (LVTM) with local patch clustering. A previous study of the LVTM showed that a more accurate appearance transition model can be obtained by first dividing the original face image plane into overlapping local patch regions and then aggregating the transition models learned for each patch into the final transformation. In this paper we show that the accuracy of the appearance transition model and the recognition rate can be further improved by better exploiting the inherent linear relationship between frontal-nonfrontal face image pairs. This is based on the observation that variations in appearance caused by pose are closely related to the underlying 3D face structure, and intuitively frontal-nonfrontal pairs from more similar local 3D face structures should exhibit a stronger linear relationship. For each patch location, instead of learning a single common transformation as in the LVTM, the corresponding local patches are first clustered using an appearance similarity distance metric, and the transition models are then learned separately for each cluster. In the testing stage, each local patch of the input nonfrontal probe image is transformed using the learned local view transition model of the most visually similar cluster. Experimental results on a real-life face dataset demonstrate the effectiveness of the proposed method.
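To make the training and testing stages described above concrete, the following is a minimal sketch of per-patch, per-cluster linear view transition models, assuming vectorized local patches, k-means as the appearance-similarity clustering, and a ridge-regularized least-squares fit for each cluster's linear map; the function names, solver, and regularization are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch (assumed details): cluster nonfrontal training patches from one
# patch location, fit a linear frontal-view transition model per cluster, and at
# test time transform a probe patch with the model of its most similar cluster.
import numpy as np
from sklearn.cluster import KMeans

def train_patch_models(nonfrontal, frontal, n_clusters=4, ridge=1e-3):
    """nonfrontal, frontal: (n_pairs, d) vectorized patches taken from a single
    patch location of frontal-nonfrontal training image pairs."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(nonfrontal)
    models = []
    for c in range(n_clusters):
        idx = km.labels_ == c
        X = nonfrontal[idx]   # nonfrontal patches assigned to cluster c
        Y = frontal[idx]      # their corresponding frontal patches
        # Ridge-regularized least squares: Y ~= X @ W, with W the (d x d) transition model
        W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)
        models.append(W)
    return km, models

def transform_patch(patch, km, models):
    """Map one nonfrontal probe patch to a virtual frontal patch using the
    transition model of the most visually similar cluster."""
    c = km.predict(patch[None, :])[0]
    return patch @ models[c]
```

In the full pipeline this procedure would be repeated independently for every overlapping patch location, and the transformed patches aggregated over their overlap regions, as in the LVTM, to form the virtual frontal view used for recognition.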

Bibliographic Details

  • Source
    《電子情報通信学会技術研究報告》 | 2012, No. 430 | pp. 31-36 | 6 pages
  • Author Affiliations

    Graduate School of Information Science, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, 464-8601, Japan;

    Graduate School of Information Science, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, 464-8601, Japan,Faculty of Economics and Information, Gifu Shotoku Gakuen University, Nakauzura 1-38, Gifu, Gifu, 500-8288, Japan;

    Graduate School of Information Science, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, 464-8601, Japan;

    Graduate School of Information Science, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, 464-8601, Japan;

    Graduate School of Information Science, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, 464-8601, Japan;

  • Indexing Information
  • Format: PDF
  • Language: eng
  • Chinese Library Classification (CLC)
  • Keywords

    face recognition; cross pose; view transition model; local patch; clustering;

  • Date Added: 2022-08-18 00:28:43
