
Attention-Based Network for Cross-View Gait Recognition



Abstract

Existing gait recognition approaches based on CNNs (Convolutional Neural Networks) extract features from different human parts indiscriminately, without considering spatial heterogeneity. This may discard discriminative information for gait recognition, since different human parts vary in shape, movement constraints, and so on. In this work, we devise an attention-based embedding network to address this problem. The attention module incorporated in our network assigns different saliency weights to different parts of the feature maps at the pixel level. The embedding network embeds gait features into a low-dimensional latent space in which similarity can be measured simply by Euclidean distance. To this end, a combination of contrastive loss and triplet loss is used for training. Experiments demonstrate that our proposed network outperforms state-of-the-art methods on both the OULP and MVLP datasets under cross-view conditions. Notably, we achieve a 6.4% rank-1 recognition accuracy improvement under a 90° angular difference on MVLP and 3.6% under a 30° angular difference on OULP.
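The pixel-level attention described above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the feature-map shape and the source of the saliency logits (here passed in directly, rather than produced by a learned attention branch) are assumptions.

```python
import numpy as np

def spatial_attention(feature_map, attn_logits):
    """Weight each spatial position of a CNN feature map by a
    per-pixel saliency score in (0, 1).

    feature_map : (C, H, W) array of convolutional features
    attn_logits : (H, W) array of unnormalized saliency scores
                  (assumed here; the paper learns these from an
                  attention module inside the network)
    """
    saliency = 1.0 / (1.0 + np.exp(-attn_logits))   # sigmoid -> (0, 1)
    # Broadcast the (H, W) saliency map across all C channels.
    return feature_map * saliency[None, :, :]

# Toy usage: 2 channels over a 3x3 spatial grid.
fmap = np.ones((2, 3, 3))
logits = np.zeros((3, 3))       # sigmoid(0) = 0.5 at every pixel
weighted = spatial_attention(fmap, logits)
```

Positions with high saliency logits pass their features through nearly unchanged, while low-saliency positions are suppressed, so body parts that carry more discriminative gait information dominate the embedding.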
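The training objective combines contrastive and triplet losses over Euclidean distances in the embedding space. A minimal sketch follows; the margins, the equal-weight combination `alpha`, and the use of the standard (Hadsell-style) contrastive and margin-based triplet formulations are assumptions, as the abstract only states that the two losses are combined.

```python
import numpy as np

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return np.linalg.norm(np.asarray(a) - np.asarray(b))

def contrastive_loss(x1, x2, same, margin=1.0):
    """Pull same-identity embeddings together; push different-identity
    embeddings at least `margin` apart (standard formulation, assumed)."""
    d = euclidean(x1, x2)
    return d ** 2 if same else max(0.0, margin - d) ** 2

def triplet_loss(anchor, pos, neg, margin=1.0):
    """Require the anchor to be closer to the positive than to the
    negative by at least `margin`."""
    return max(0.0, euclidean(anchor, pos) - euclidean(anchor, neg) + margin)

def combined_loss(anchor, pos, neg, alpha=0.5):
    """Weighted sum of the two objectives; `alpha` is a hypothetical
    weighting, not taken from the paper."""
    contrastive = (contrastive_loss(anchor, pos, same=True)
                   + contrastive_loss(anchor, neg, same=False))
    return alpha * contrastive + (1 - alpha) * triplet_loss(anchor, pos, neg)

# Toy usage: a well-separated triplet incurs zero loss.
a, p, n = np.zeros(2), np.zeros(2), np.array([2.0, 0.0])
loss = combined_loss(a, p, n)   # -> 0.0
```

Because similarity in the learned space is plain Euclidean distance, recognition at test time reduces to nearest-neighbor matching between probe and gallery embeddings.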
