IEEE Transactions on Biometrics, Behavior, and Identity Science

Video Face Recognition Using Siamese Networks With Block-Sparsity Matching

Abstract

Deep learning models for still-to-video face recognition (FR) typically provide a low level of accuracy because faces captured in unconstrained videos are matched against a reference gallery comprising a single facial still per individual. For improved robustness to intra-class variations, deep Siamese networks have recently been used for pair-wise face matching. Although these networks can improve state-of-the-art accuracy, the absence of prior knowledge from the target domain means that many images must be collected to account for all possible capture conditions, which is not practical for many real-world surveillance applications. In this paper, we propose the deep SiamSRC network that employs block-sparsity for face matching, while the reference gallery is augmented with a compact set of domain-specific facial images. Prior to deployment, clustering based on row sparsity is performed on unlabelled faces captured in videos from the target domain. Cluster centers discovered in the capture condition space (defined by, e.g., pose, scale, and illumination) are used as rendering parameters with an off-the-shelf 3D face model, and a compact set of synthetic faces is thereby generated for each reference still based on representative intra-class information from the target domain. For pair-wise similarity matching with query facial images, the SiamSRC exploits sparse representation-based classification with a block structure. Experimental results obtained with videos from the Chokepoint and COX-S2V datasets indicate that the proposed SiamSRC network can outperform state-of-the-art methods for still-to-video FR with a single sample per person, with only a moderate increase in computational complexity.
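For readers unfamiliar with block-structured sparse representation-based classification (SRC), the following is a generic sketch of the kind of formulation the term "block-sparsity matching" refers to; the exact objective used by SiamSRC is not given on this page, so the notation below is an assumed standard form rather than the paper's own. With a gallery dictionary D = [D_1, ..., D_C] in which block D_c stacks the deep features of identity c's reference still and its synthetic faces, a query feature y is coded by a group-sparse program and assigned to the block giving the smallest reconstruction residual:

\[
\hat{x} = \arg\min_{x}\; \tfrac{1}{2}\lVert y - D x \rVert_2^2 + \lambda \sum_{c=1}^{C} \lVert x_c \rVert_2,
\qquad
\mathrm{identity}(y) = \arg\min_{c}\; \lVert y - D_c \hat{x}_c \rVert_2 .
\]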
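As a concrete, hedged illustration of that residual-based decision rule (not the paper's implementation: the feature dimensions and all names below are hypothetical, and a faithful version would solve the joint group-sparse problem above rather than independent per-block least squares), a minimal Python sketch might look like this:

    import numpy as np

    def block_residual_match(query, gallery_blocks):
        """Assign a query embedding to the gallery block that reconstructs it best.

        query          : (d,) deep feature vector of the probe face.
        gallery_blocks : dict mapping identity -> (d, n_c) matrix whose columns are
                         features of that identity's reference still and synthetic faces.
        Simplification of block-sparse SRC: coefficients are fit per block by least
        squares instead of by a joint group-sparse optimization over all blocks.
        """
        best_id, best_res = None, np.inf
        for identity, block in gallery_blocks.items():
            coeffs, *_ = np.linalg.lstsq(block, query, rcond=None)  # block-restricted code
            residual = np.linalg.norm(query - block @ coeffs)       # reconstruction error
            if residual < best_res:
                best_id, best_res = identity, residual
        return best_id, best_res

    # Hypothetical usage: 128-D features, 3 identities, 5 gallery images each.
    rng = np.random.default_rng(0)
    gallery = {f"person_{c}": rng.normal(size=(128, 5)) for c in range(3)}
    probe = gallery["person_1"] @ rng.normal(size=5)  # lies in person_1's span
    print(block_residual_match(probe, gallery))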
