IEEE Transactions on Image Processing

Gait-Based Person Recognition Using Arbitrary View Transformation Model



Abstract

Gait recognition is a useful biometric trait for person authentication because it is usable even with low image resolution. One challenge is robustness to a view change (cross-view matching); view transformation models (VTMs) have been proposed to solve this. The VTMs work well if the target views are the same as their discrete training views. However, the gait traits are observed from an arbitrary view in a real situation. Thus, the target views may not coincide with discrete training views, resulting in recognition accuracy degradation. We propose an arbitrary VTM (AVTM) that accurately matches a pair of gait traits from an arbitrary view. To realize an AVTM, we first construct 3D gait volume sequences of training subjects, disjoint from the test subjects in the target scene. We then generate 2D gait silhouette sequences of the training subjects by projecting the 3D gait volume sequences onto the same views as the target views, and train the AVTM with gait features extracted from the 2D sequences. In addition, we extend our AVTM by incorporating a part-dependent view selection scheme (AVTM_PdVS), which divides the gait feature into several parts, and sets part-dependent destination views for transformation. Because appropriate destination views may differ for different body parts, the part-dependent destination view selection can suppress transformation errors, leading to increased recognition accuracy. Experiments using data sets collected in different settings show that the AVTM improves the accuracy of cross-view matching and that the AVTM_PdVS further improves the accuracy in many cases, in particular, verification scenarios.
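The pipeline the abstract describes (project 3D gait volumes onto view-specific 2D silhouettes, aggregate them into a gait feature, then split the feature into body parts so each part can receive its own destination view) can be sketched roughly as follows. This is an illustrative simplification: the function names, the axis-aligned view set, and the equal-height part split are assumptions for demonstration, not the paper's exact formulation, which handles arbitrary views and trains a VTM on the extracted features.

```python
import numpy as np

def project_silhouette(volume, view):
    """Orthographically project a binary 3D gait volume (z, y, x) onto a
    2D silhouette for one of four axis-aligned views.
    Illustrative only: the paper projects onto arbitrary target views."""
    k = {"front": 0, "left": 1, "back": 2, "right": 3}[view]
    rotated = np.rot90(volume, k=k, axes=(1, 2))  # rotate in the horizontal plane
    return rotated.max(axis=1)  # a ray hits the body if any voxel along it is set

def gait_feature(silhouettes):
    """Average a sequence of binary silhouettes into one gait feature
    (a GEI-style representation, assumed here for concreteness)."""
    return np.mean(np.stack(silhouettes), axis=0)

def split_into_parts(feature, n_parts=3):
    """Divide the feature into horizontal strips so each body part can be
    assigned its own destination view (the AVTM_PdVS idea, simplified)."""
    return np.array_split(feature, n_parts, axis=0)
```

With per-part destination views chosen, each strip would then be transformed by the VTM trained for that view and reassembled before matching against the gallery feature.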


