Advances in Multimedia

Impostor Resilient Multimodal Metric Learning for Person Reidentification

Abstract

In person reidentification, distance metric learning faces a great challenge from impostor persons. Typically, distance metrics are learned by maximizing the similarity of a positive pair against impostors that lie on different transform modals. Moreover, these impostors are mined from the gallery view for the query sample only, while the gallery sample is ignored entirely. In the real world, a given query-gallery pair experiences different changes in pose, viewpoint, and lighting, so impostors drawn only from the gallery view cannot optimally maximize the similarity of the positive pair. To resolve these issues, we propose an impostor resilient multimodal metric (IRM3). IRM3 is learned for each modal transform in the image space and uses impostors from both the probe and gallery views to effectively restrict a large number of impostors. The learned IRM3 is evaluated on three benchmark datasets, VIPeR, CUHK01, and CUHK03, and shows significant improvement in performance over many previous approaches.
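
The abstract does not give the IRM3 objective itself, so the sketch below only illustrates the idea it describes: a Mahalanobis-style metric under which a matched probe/gallery pair is pulled closer than impostors mined from both the probe and the gallery views. The function names, the hinge margin, and the fixed identity matrix M are illustrative assumptions, not the paper's formulation; IRM3 additionally learns a separate metric per modal transform, which is not shown here.

```python
import numpy as np

def mahalanobis_dist(x, y, M):
    """Squared Mahalanobis distance d_M(x, y) = (x - y)^T M (x - y)."""
    d = x - y
    return float(d @ M @ d)

def two_view_impostor_loss(probe, gallery, M, margin=1.0):
    """Hinge penalty for matched probe/gallery pairs (same identity at the
    same row index) against impostors taken from BOTH views: gallery
    impostors that sit closer to the probe than its true match, and probe
    impostors that sit closer to the gallery sample than its true match.
    Illustrative only; not the IRM3 objective from the paper."""
    n = probe.shape[0]
    loss = 0.0
    for i in range(n):
        d_pos = mahalanobis_dist(probe[i], gallery[i], M)
        for j in range(n):
            if j == i:
                continue
            # Gallery-view impostor for probe i.
            loss += max(0.0, margin + d_pos - mahalanobis_dist(probe[i], gallery[j], M))
            # Probe-view impostor for gallery i (the view usually ignored).
            loss += max(0.0, margin + d_pos - mahalanobis_dist(gallery[i], probe[j], M))
    return loss / n

# Toy usage: 5 identities, 16-dimensional features per view.
rng = np.random.default_rng(0)
probe = rng.normal(size=(5, 16))
gallery = probe + 0.1 * rng.normal(size=(5, 16))  # same identities, mild view change
M = np.eye(16)  # Euclidean start; a metric learner would update M (kept PSD)
print(two_view_impostor_loss(probe, gallery, M))
```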
