IEEE Winter Conference on Applications of Computer Vision (WACV)

GAN-Based Pose-Aware Regulation for Video-Based Person Re-Identification



Abstract

Video-based person re-identification deals with the inherent difficulty of matching sequences with different length, unregulated, and incomplete target pose/viewpoint structure. Common approaches operate either by reducing the problem to the still-images case, facing a significant information loss, or by exploiting inter-sequence temporal dependencies as in Siamese Recurrent Neural Networks or in gait analysis. However, in all cases, the inter-sequence pose/viewpoint misalignment is not considered, and the existing spatial approaches are mostly limited to the still-images context. To this end, we propose a novel approach that exploits the rich video information more effectively, by accounting for the role that the changing pose/viewpoint factor plays in the sequence matching process. In particular, our approach consists of two components. The first complements the original pose-incomplete information carried by the sequences with synthetic GAN-generated images, and fuses their feature vectors into a more discriminative, viewpoint-insensitive embedding, namely Weighted Fusion (WF). The second performs an explicit pose-based alignment of sequence pairs to promote coherent feature matching, namely Weighted-Pose Regulation (WPR). Extensive experiments on two large video-based benchmark datasets show that our approach considerably outperforms existing methods.
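The two components described above lend themselves to a pose-bin view of the problem. The following is a minimal, hypothetical sketch (not the authors' implementation) of how Weighted Fusion and Weighted-Pose Regulation could be organized: per-frame features are grouped by a coarse pose/viewpoint bin, GAN-generated features fill the bins missing from the real sequence before fusion, and two sequences are compared only over pose bins observed in both. The function names, the number of bins, the fusion weight alpha, and the coverage-based matching weight are all illustrative assumptions.

```python
import numpy as np

N_POSE_BINS = 8   # coarse viewpoint quantization (assumption)
FEAT_DIM = 128    # per-frame feature dimensionality (assumption)


def weighted_fusion(real_feats, synth_feats, alpha=0.7):
    """Weighted Fusion (WF) sketch: fuse real and GAN-synthesized features per pose bin.

    real_feats, synth_feats: dict {pose_bin: (n_i, FEAT_DIM) array}.
    Pose bins missing from the real sequence are filled from the synthetic
    features, yielding a pose-complete, viewpoint-insensitive embedding.
    """
    fused = []
    for b in range(N_POSE_BINS):
        r, s = real_feats.get(b), synth_feats.get(b)
        if r is not None and s is not None:
            fused.append(alpha * r.mean(0) + (1 - alpha) * s.mean(0))
        elif r is not None:
            fused.append(r.mean(0))
        elif s is not None:
            fused.append(s.mean(0))            # bin covered only by GAN images
        else:
            fused.append(np.zeros(FEAT_DIM))   # bin missing in both
    return np.concatenate(fused)               # (N_POSE_BINS * FEAT_DIM,)


def weighted_pose_regulation(feats_a, feats_b):
    """Weighted-Pose Regulation (WPR) sketch: pose-aligned distance between two sequences.

    Only pose bins observed in both sequences contribute, weighted by how well
    each bin is covered, so matching is performed between coherent poses.
    """
    dist, weight_sum = 0.0, 0.0
    for b in range(N_POSE_BINS):
        a, c = feats_a.get(b), feats_b.get(b)
        if a is None or c is None:
            continue                           # skip misaligned pose bins
        w = min(len(a), len(c))                # coverage-based weight (assumption)
        dist += w * np.linalg.norm(a.mean(0) - c.mean(0))
        weight_sum += w
    return dist / weight_sum if weight_sum > 0 else np.inf
```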
