Published in: International Conference on Image and Graphics

Learning Cross Camera Invariant Features with CCSC Loss for Person Re-identification



Abstract

Person re-identification (re-ID) is mainly deployed in multi-camera surveillance scenes, so learning cross camera invariant features is highly desirable. In this paper, we propose a novel loss named the Cross Camera Similarity Constraint loss (CCSC loss), which makes full use of camera ID information and person ID information simultaneously to construct cross camera image pairs and applies a cosine similarity constraint to them. The proposed CCSC loss effectively reduces intra-class variance by forcing the whole network to extract cross camera invariant features, and it can be unified with the identification loss in a multi-task manner. Extensive experiments on the standard benchmark datasets, including CUHK03, DukeMTMC-reID, Market-1501, and MSMT17, indicate that the proposed CCSC loss brings a large performance boost over a strong baseline and is also superior to other metric learning methods such as hard triplet loss and center loss. For instance, on the most challenging dataset, CUHK03-Detect, Rank-1 accuracy and mAP are improved over the baseline by 10.0% and 10.2% respectively, while achieving performance comparable to the state-of-the-art method.
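The abstract describes the CCSC loss as pairing images of the same person captured by different cameras and constraining their features to be similar under cosine similarity. The following is a minimal sketch of that idea; the pairing scheme, weighting, and exact formulation here are assumptions for illustration and may differ from the paper's definition.

```python
import numpy as np

def ccsc_loss(features, person_ids, camera_ids):
    """Sketch of a Cross Camera Similarity Constraint (CCSC) loss.

    For every pair of images that share a person ID but come from
    different cameras, penalize low cosine similarity between their
    feature vectors. This is an illustrative formulation, not the
    authors' exact one.
    """
    # L2-normalize so the dot product equals cosine similarity.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    n = len(feats)
    pair_losses = []
    for i in range(n):
        for j in range(i + 1, n):
            same_person = person_ids[i] == person_ids[j]
            cross_camera = camera_ids[i] != camera_ids[j]
            if same_person and cross_camera:
                cos_sim = float(feats[i] @ feats[j])
                pair_losses.append(1.0 - cos_sim)  # 0 when features align
    # No valid cross camera pair in the batch -> no constraint.
    return float(np.mean(pair_losses)) if pair_losses else 0.0
```

In the multi-task setup the abstract mentions, this term would be added to the identification loss, e.g. `total_loss = id_loss + lam * ccsc_loss(...)`, where the weight `lam` is a hypothetical hyperparameter.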
