IEEE Transactions on Pattern Analysis and Machine Intelligence

Unsupervised Person Re-Identification by Deep Asymmetric Metric Embedding


Abstract

Person re-identification (Re-ID) aims to match identities across non-overlapping camera views. Researchers have proposed many supervised Re-ID models, which require large quantities of cross-view pairwise labelled data. This limits their scalability in the many applications where abundant data from multiple disjoint camera views is available but unlabelled. Although some unsupervised Re-ID models have been proposed to address the scalability problem, they often suffer from the view-specific bias problem, which is caused by dramatic variations across different camera views, e.g., different illumination, viewpoints and occlusion. These dramatic variations induce specific feature distortions in each camera view, which severely hinder the search for cross-view discriminative information in unsupervised scenarios, since no label information is available to help alleviate the bias. We propose to explicitly address this problem by learning an unsupervised asymmetric distance metric based on cross-view clustering. The asymmetric distance metric allows a specific feature transformation for each camera view to tackle the view-specific feature distortions. We then design a novel unsupervised loss function to embed the asymmetric metric into a deep neural network, and thereby develop a novel unsupervised deep framework named DEep Clustering-based Asymmetric MEtric Learning (DECAMEL). In this way, DECAMEL jointly learns the feature representation and the unsupervised asymmetric metric. DECAMEL learns a compact cross-view cluster structure of the Re-ID data, which helps alleviate the view-specific bias and facilitates mining the potential cross-view discriminative information for unsupervised Re-ID. Extensive experiments on seven benchmark datasets whose sizes span several orders of magnitude show the effectiveness of our framework.
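The core idea of an asymmetric distance metric can be illustrated with a minimal NumPy sketch: each camera view gets its own linear projection, and distances are computed between the projected features. The projection names `U_a`, `U_b` and the dimensions are hypothetical placeholders; DECAMEL learns such view-specific transforms jointly with the deep features, which this sketch does not attempt to reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 64, 16  # illustrative feature and embedding dimensions

# Hypothetical view-specific projections, one per camera view.
# In an asymmetric metric these need not be equal, so each view
# can correct its own feature distortion (illumination, viewpoint, ...).
U_a = rng.standard_normal((k, d)) / np.sqrt(d)
U_b = rng.standard_normal((k, d)) / np.sqrt(d)

def asymmetric_distance(x, y, U_x, U_y):
    """Distance after view-specific transforms: ||U_x x - U_y y||_2."""
    return float(np.linalg.norm(U_x @ x - U_y @ y))

x = rng.standard_normal(d)  # feature observed in camera view a
y = rng.standard_normal(d)  # feature observed in camera view b
print(asymmetric_distance(x, y, U_a, U_b))
```

Setting `U_a == U_b` recovers an ordinary (symmetric) Mahalanobis-style metric; letting them differ is what absorbs the view-specific bias the abstract describes.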
