IEEE International Conference on Acoustics, Speech and Signal Processing

Local-Global Feature for Video-Based One-Shot Person Re-Identification



Abstract

One-shot video-based re-identification, which uses only one labeled tracklet per identity, is challenging because such frameworks typically suffer from misalignment and inefficient use of unlabeled data. In this paper we propose a novel local-global progressive learning framework to overcome these limitations. To obtain robust features for a tracklet, we first design sub-networks that learn four discriminative part-based feature maps and one global feature map that is insensitive to misalignment. We then propose a novel adaptive loss to properly balance the part-based and global features. To exploit unlabeled data, our framework gradually adds the most reliable pseudo-labeled tracklets to the training set for iterative training. Extensive experiments are conducted on two video-based Re-ID datasets, MARS and DukeMTMC-VideoReID. On DukeMTMC-VideoReID, the mAP of our model outperforms state-of-the-art methods by 20.8%.
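
To illustrate the progressive selection step described in the abstract, below is a minimal Python/NumPy sketch (not the authors' code): each unlabeled tracklet is assigned the identity of its nearest labeled tracklet in feature space, and only the most confident assignments are kept for the next training round. The feature extractor, the four part-based branches, and the adaptive loss are assumed to exist elsewhere; select_ratio is a hypothetical parameter.

import numpy as np

def select_pseudo_labels(labeled_feats, labeled_ids, unlabeled_feats, select_ratio=0.2):
    """Return indices and pseudo-labels of the most reliable unlabeled tracklets.

    labeled_feats:   (N_l, D) L2-normalized features of the labeled tracklets
    labeled_ids:     (N_l,)   identity labels of the labeled tracklets
    unlabeled_feats: (N_u, D) L2-normalized features of the unlabeled tracklets
    select_ratio:    fraction of unlabeled tracklets to pseudo-label this round
    """
    # Cosine distance between every unlabeled and every labeled tracklet.
    dist = 1.0 - unlabeled_feats @ labeled_feats.T   # (N_u, N_l)
    nearest = dist.argmin(axis=1)                    # closest labeled tracklet per unlabeled one
    confidence = dist.min(axis=1)                    # smaller distance = more reliable
    pseudo_ids = labeled_ids[nearest]

    # Keep only the most confident assignments for this iteration.
    k = max(1, int(select_ratio * len(unlabeled_feats)))
    selected = np.argsort(confidence)[:k]
    return selected, pseudo_ids[selected]

# Toy usage: 3 labeled identities, 10 unlabeled tracklets, 8-D features.
rng = np.random.default_rng(0)
lab = rng.normal(size=(3, 8)); lab /= np.linalg.norm(lab, axis=1, keepdims=True)
unl = rng.normal(size=(10, 8)); unl /= np.linalg.norm(unl, axis=1, keepdims=True)
idx, ids = select_pseudo_labels(lab, np.array([0, 1, 2]), unl, select_ratio=0.3)
print(idx, ids)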
