Pattern Recognition: The Journal of the Pattern Recognition Society

(ML)³: Multi-modality mining for metric learning in person re-identification



Abstract

Learning a scene-specific distance metric from labeled data is critical for person re-identification. Most earlier works in this area seek a linear transformation of the feature space such that relevant dimensions are emphasized while irrelevant ones are discarded in a global sense. However, when training data exhibit multi-modality transitions, the globally learned metric deviates from the correct metrics that would be learned from each modality separately. In this study, we propose a multi-modality mining approach for metric learning ((ML)³) that automatically discovers multiple modalities of illumination change by exploiting the shift-invariant property of log-chromaticity space, and then learns a sub-metric for each modality to maximally reduce the bias of a globally learned metric. Experiments on the challenging VIPeR dataset and the fused VIPeR&PRID 450S dataset validate the effectiveness of the proposed method, with an average improvement of 2-7% over the original baseline methods. (C) 2017 Elsevier Ltd. All rights reserved.
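The shift-invariance the abstract relies on, and the modality-mining step, can be illustrated with a short sketch. All function names here are illustrative, and plain k-means with farthest-point seeding stands in for the paper's actual mining procedure; the per-modality Mahalanobis matrix is likewise a minimal placeholder for a learned sub-metric:

```python
import numpy as np

def log_chromaticity(rgb):
    """Map RGB pixels of shape (N, 3) into 2-D log-chromaticity space.

    A global illumination change multiplies each channel by a constant,
    which becomes a pure additive shift in this space -- the
    shift-invariant property used to group images by illumination.
    """
    rgb = np.clip(rgb.astype(float), 1e-6, None)
    return np.stack([np.log(rgb[:, 0] / rgb[:, 1]),   # log(R/G)
                     np.log(rgb[:, 2] / rgb[:, 1])],  # log(B/G)
                    axis=1)

def mine_modalities(shifts, k, iters=50):
    """Cluster per-image-pair log-chromaticity shifts into k modalities.

    Plain k-means with deterministic farthest-point initialization
    (a stand-in for the paper's modality-mining step).
    """
    centers = [shifts[0]]
    for _ in range(1, k):
        d = np.min([((shifts - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(shifts[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((shifts[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = shifts[labels == j].mean(axis=0)
    return labels, centers

def submetric(feats):
    """Fit a simple Mahalanobis matrix (regularized inverse covariance)
    for the samples of one modality -- a placeholder for a learned sub-metric."""
    cov = np.cov(feats, rowvar=False) + 1e-3 * np.eye(feats.shape[1])
    return np.linalg.inv(cov)
```

For example, pairs whose illumination shifts fall in two well-separated modes are cleanly split into two clusters, and each cluster then gets its own sub-metric instead of one global metric.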


