International Joint Conference on Neural Networks

Learning a Domain-Invariant Embedding for Unsupervised Person Re-identification

Abstract

Person re-identification (Re-ID) aims at matching images of the same person captured by non-overlapping camera views at different locations. Most recent works solve this problem by training a deep model on a large pre-labeled dataset, which is not always suitable for real-world applications where labeled data are scarce. To tackle this drawback, we propose a novel Domain-Invariant Embedding Network (DIEN) that learns a domain-invariant embedding (DIE) feature through multi-loss joint learning with a Recurrent Top-Down Attention (RTDA) mechanism. Because it improves on the traditional triplet loss, our model benefits from both source-domain (labeled) and target-domain (unlabeled) data. The resulting DIE feature not only offers improved class discrimination but is also robust to domain shift. We compare our method with recent competitive algorithms and evaluate the effectiveness of the proposed modules.
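
The abstract does not give the exact form of the multi-loss objective or the RTDA module, but the following PyTorch sketch illustrates the general idea it describes: a triplet loss computed on labeled source-domain embeddings, combined with a domain-alignment term that also draws on unlabeled target-domain embeddings. The function names, the mean-discrepancy alignment term, and the weight lam are illustrative assumptions, not the authors' formulation.

```python
import torch
import torch.nn.functional as F


def triplet_loss(anchor, positive, negative, margin=0.3):
    # Standard margin-based triplet loss on L2 distances (labeled source data).
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    return F.relu(d_ap - d_an + margin).mean()


def domain_alignment_loss(source_feat, target_feat):
    # Simple first-moment matching between source and target embeddings;
    # a stand-in for whatever cross-domain term DIEN actually uses.
    return (source_feat.mean(dim=0) - target_feat.mean(dim=0)).pow(2).sum()


def joint_objective(anchor, positive, negative, target_feat, lam=0.1):
    # Multi-loss objective: discriminative on labeled source triplets,
    # invariant across source and target domains.
    l_tri = triplet_loss(anchor, positive, negative)
    source_feat = torch.cat([anchor, positive, negative], dim=0)
    return l_tri + lam * domain_alignment_loss(source_feat, target_feat)


if __name__ == "__main__":
    # Toy usage with random 128-D embeddings standing in for network outputs.
    a, p, n = (torch.randn(16, 128) for _ in range(3))
    t = torch.randn(32, 128)  # unlabeled target-domain embeddings
    print(joint_objective(a, p, n, t).item())
```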
