European Conference on Computer Vision

A Novel Visual Word Co-occurrence Model for Person Re-identification



Abstract

Person re-identification aims to maintain the identity of an individual in diverse locations through different non-overlapping camera views. The problem is fundamentally challenging due to appearance variations resulting from differing poses, illumination and configurations of camera views. To deal with these difficulties, we propose a novel visual word co-occurrence model. We first map each pixel of an image to a visual word using a codebook, which is learned in an unsupervised manner. The appearance transformation between camera views is encoded by a co-occurrence matrix of visual word joint distributions in probe and gallery images. Our appearance model naturally accounts for spatial similarities and variations caused by pose, illumination, and configuration changes across camera views. Linear SVMs are then trained as classifiers using these co-occurrence descriptors. On the VIPeR and CUHK Campus [2] benchmark datasets, our method achieves 83.86% and 85.49% at rank-15 on the Cumulative Match Characteristic (CMC) curves, and beats the state-of-the-art results by 10.44% and 22.27%.
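The Python sketch below illustrates, under simplifying assumptions, the pipeline the abstract describes: an unsupervised codebook quantizes pixel features into visual words, a co-occurrence matrix of word joint distributions is built for a probe/gallery image pair, and a linear SVM scores the resulting descriptors. The function names (learn_codebook, word_map, cooccurrence_descriptor) and the codebook size are illustrative assumptions, not taken from the paper, and the sketch omits the spatial-neighborhood pooling the authors' model uses.

# Minimal sketch of a visual word co-occurrence pipeline; names and settings
# are illustrative assumptions, not the authors' exact implementation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

N_WORDS = 64  # assumed codebook size; the paper's setting may differ

def learn_codebook(pixel_features, n_words=N_WORDS, seed=0):
    # Unsupervised codebook: k-means over pixel-level feature vectors
    # (pixel_features: array of shape (num_pixels, feature_dim)).
    return KMeans(n_clusters=n_words, n_init=10, random_state=seed).fit(pixel_features)

def word_map(image_features, codebook):
    # Map each pixel's feature vector (image_features: H x W x D) to the index
    # of its nearest visual word, producing an H x W map of word labels.
    h, w, d = image_features.shape
    return codebook.predict(image_features.reshape(-1, d)).reshape(h, w)

def cooccurrence_descriptor(probe_words, gallery_words, n_words=N_WORDS):
    # Joint distribution of visual words at corresponding positions in a
    # probe/gallery pair (assumed same H x W), flattened into one descriptor.
    # Simplification: the paper pools co-occurrences over spatial neighborhoods
    # rather than single aligned pixels.
    C = np.zeros((n_words, n_words))
    for wp, wg in zip(probe_words.ravel(), gallery_words.ravel()):
        C[wp, wg] += 1.0
    return (C / C.sum()).ravel()

# Usage sketch: build descriptors for matching (label 1) and non-matching
# (label 0) probe/gallery pairs, then train a linear SVM to score pairs.
# X = np.stack([cooccurrence_descriptor(p, g) for p, g in training_pairs])
# clf = LinearSVC(C=1.0).fit(X, labels)
# scores = clf.decision_function(X_test)  # higher score = more likely same identity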
