IEEE Transactions on Image Processing

Cross-View Gait Recognition by Discriminative Feature Learning

Abstract

Recently, deep learning-based cross-view gait recognition has become popular owing to the strong capacity of convolutional neural networks (CNNs). Current deep learning methods often rely on loss functions widely used in face recognition, e.g., contrastive loss and triplet loss. These loss functions suffer from the problem of hard negative mining. In this paper, a robust, effective, and gait-related loss function, called angle center loss (ACL), is proposed to learn discriminative gait features. The proposed loss function is robust to different local parts and temporal window sizes. Different from center loss, which learns one center for each identity, the proposed loss function learns multiple sub-centers for each angle of the same identity. Only the largest distance between the anchor feature and the corresponding cross-view sub-centers is penalized, which achieves better intra-subject compactness. We also propose to extract discriminative spatial-temporal features with local feature extractors and a temporal attention model. A simplified spatial transformer network is proposed to localize suitable horizontal parts of the human body. Local gait features for each horizontal part are extracted and then concatenated as the descriptor. We introduce long short-term memory (LSTM) units as the temporal attention model to learn an attention score for each frame, e.g., focusing more on discriminative frames and less on frames of poor quality. The temporal attention model shows better performance than temporal average pooling or gait energy images (GEI). By combining the three aspects, we achieve state-of-the-art results on several cross-view gait recognition benchmarks.
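
To make the loss concrete, below is a minimal PyTorch sketch of ACL based only on the abstract's description: one learnable sub-center per (identity, view angle) pair, with only the largest distance from the anchor feature to the same identity's cross-view sub-centers penalized. The class name, tensor shapes, and the squared Euclidean distance (carried over from the original center loss) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class AngleCenterLoss(nn.Module):
    # Sketch of angle center loss (ACL), assuming one learnable
    # sub-center per (identity, view angle) pair. Only the largest
    # distance to the same identity's cross-view sub-centers is
    # penalized, encouraging intra-subject compactness across views.
    def __init__(self, num_ids: int, num_views: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_ids, num_views, feat_dim))

    def forward(self, feats, ids, views):
        # feats: (B, D) anchor features; ids, views: (B,) integer labels.
        sub_centers = self.centers[ids]                             # (B, V, D)
        # Squared Euclidean distance to each view's sub-center
        # (metric assumed, as in the original center loss).
        dists = ((feats.unsqueeze(1) - sub_centers) ** 2).sum(-1)   # (B, V)
        # Exclude the anchor's own view so only cross-view
        # sub-centers are considered.
        own_view = torch.zeros_like(dists, dtype=torch.bool)
        own_view[torch.arange(feats.size(0)), views] = True
        dists = dists.masked_fill(own_view, float("-inf"))
        # Penalize only the largest cross-view distance per anchor.
        return dists.max(dim=1).values.mean()
```

On a cross-view benchmark such as CASIA-B, which records each subject from 11 view angles, this would be instantiated with num_views=11; center-style terms like this are usually paired with a classification or metric loss during training.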
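
The LSTM-based temporal attention model can likewise be sketched: an LSTM reads the sequence of frame-level features, a linear head produces one score per frame, and a softmax-weighted sum replaces temporal average pooling. The hidden size and scoring head below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    # Sketch of the temporal attention model: LSTM states are mapped
    # to per-frame scores, and frame features are aggregated by a
    # softmax-weighted sum instead of average pooling.
    def __init__(self, feat_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, frame_feats):
        # frame_feats: (B, T, D) per-frame gait descriptors.
        h, _ = self.lstm(frame_feats)                        # (B, T, H)
        w = torch.softmax(self.score(h).squeeze(-1), dim=1)  # (B, T)
        # Discriminative frames receive larger weights; poor-quality
        # frames are down-weighted.
        return (w.unsqueeze(-1) * frame_feats).sum(dim=1)    # (B, D)
```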
