IEEE Transactions on Image Processing

Deep Representation Learning With Part Loss for Person Re-Identification



Abstract

Learning discriminative representations for unseen person images is critical for person re-identification (ReID). Most current approaches learn deep representations through classification tasks, which essentially minimize the empirical classification risk on the training set. As shown in our experiments, such representations easily overfit to a single discriminative human body part in the training set. To gain discriminative power on unseen person images, we propose a deep representation learning procedure, named the part loss network, that minimizes both the empirical classification risk on training person images and the representation learning risk on unseen person images. The representation learning risk is evaluated by the proposed part loss, which automatically detects human body parts and computes the person classification loss on each part separately. Compared with the traditional global classification loss alone, jointly optimizing the part loss forces the deep network to learn representations for different body parts and thus gain discriminative power on unseen persons. Experimental results on three person ReID datasets, i.e., Market1501, CUHK03, and VIPeR, show that our representation outperforms existing deep representations.

