Journal: Multimedia Tools and Applications

Illumination and scale invariant relevant visual features with hypergraph-based learning for multi-shot person re-identification



Abstract

Person re-identification, which aims at matching people across disjoint camera views, has received increasing attention due to the widespread use of video surveillance. Existing methods concentrate either on robust feature extraction or on view-invariant feature transformation. However, the extracted features suffer from various limitations such as color inconsistency and scale variation. Moreover, during matching, a probe is compared against each gallery instance individually, which captures only pairwise relationships and ignores the high-order relationships among them. To address these issues, we propose a multi-shot person re-identification framework that first preprocesses the images to compensate for illumination variations and maintain color consistency. Subsequently, we formulate an approach that handles scale variations in pedestrian appearances, keeping them at a relatively fixed scale ratio. Overlapped visual patches representing appearance cues are then extracted from the processed images. A structured multi-class feature selection approach selects a set of relevant patches that simultaneously discriminates all distinct persons. These selected patches are used to build a hypergraph that represents the visual associations between a probe and the gallery images. Finally, for matching, we formulate a hypergraph-based learning scheme that considers both the pairwise and the high-order associations among the probe and gallery images. The hypergraph structure is then optimized to yield an improved similarity score for the probe against each gallery instance. The effectiveness of the proposed framework is validated on three public datasets, and comparison with state-of-the-art methods shows its superior performance.
