Journal: IEEE Transactions on Image Processing

Deep Representation Learning With Part Loss for Person Re-Identification


Abstract

Learning discriminative representations for unseen person images is critical for person re-identification (ReID). Most current approaches learn deep representations through classification tasks, which essentially minimize the empirical classification risk on the training set. As shown in our experiments, such representations easily over-fit to a single discriminative human body part in the training set. To gain discriminative power on unseen person images, we propose a deep representation learning procedure named the part loss network, which minimizes both the empirical classification risk on training person images and the representation learning risk on unseen person images. The representation learning risk is evaluated by the proposed part loss, which automatically detects human body parts and computes the person classification loss on each part separately. Compared with the traditional global classification loss alone, jointly considering the part loss forces the deep network to learn representations for different body parts and thus gain discriminative power on unseen persons. Experimental results on three person ReID datasets, i.e., Market1501, CUHK03, and VIPeR, show that our representation outperforms existing deep representations.
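The abstract describes combining a global identity classification loss with per-part classification losses computed on separate body regions. The following is a minimal PyTorch-style sketch of that idea, not the paper's implementation: the backbone, the number of identities (num_ids), the part weight, and the use of uniform horizontal stripes in place of the paper's automatically detected body parts are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PartLossNet(nn.Module):
    """Sketch of a part-loss style ReID head.

    Assumptions (not from the paper): a CNN backbone producing a
    (B, C, H, W) feature map, and num_parts body parts approximated by
    uniform horizontal stripes instead of automatic part detection.
    """

    def __init__(self, backbone, feat_dim=2048, num_ids=751, num_parts=4):
        super().__init__()
        self.backbone = backbone            # any CNN returning (B, C, H, W)
        self.num_parts = num_parts
        self.global_fc = nn.Linear(feat_dim, num_ids)
        self.part_fcs = nn.ModuleList(
            nn.Linear(feat_dim, num_ids) for _ in range(num_parts)
        )

    def forward(self, x):
        fmap = self.backbone(x)                                  # (B, C, H, W)
        global_feat = F.adaptive_avg_pool2d(fmap, 1).flatten(1)  # (B, C)
        global_logits = self.global_fc(global_feat)

        # Split the feature map into horizontal stripes as a stand-in
        # for the automatically detected body parts.
        stripes = torch.chunk(fmap, self.num_parts, dim=2)
        part_logits = [
            fc(F.adaptive_avg_pool2d(s, 1).flatten(1))
            for fc, s in zip(self.part_fcs, stripes)
        ]
        return global_logits, part_logits


def total_loss(global_logits, part_logits, labels, part_weight=1.0):
    """Global (empirical) classification loss plus the averaged part loss."""
    global_loss = F.cross_entropy(global_logits, labels)
    part_loss = sum(F.cross_entropy(p, labels) for p in part_logits) / len(part_logits)
    return global_loss + part_weight * part_loss

In this sketch, each part has its own classifier, so the network cannot rely on a single discriminative region: every stripe must carry identity information, which mirrors the motivation stated in the abstract.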
