IEEE Transactions on Circuits and Systems for Video Technology

An Asymmetric Distance Model for Cross-View Feature Mapping in Person Reidentification


Abstract

Person reidentification, which matches images of the same person across nonoverlapping camera views, has become an important component of cross-camera-view activity analysis. Most (if not all) person reidentification algorithms are designed around appearance features. However, appearance features are unstable across nonoverlapping camera views under dramatic lighting changes, and existing algorithms assume that two cross-view images of the same person can be well matched either by extracting robust, invariant features or by learning a matching distance. This assumption ignores the fact that the images are captured under different camera views, with different camera characteristics and environments, so there is usually a large discrepancy between the features extracted under different views. To address this problem, we formulate an asymmetric distance model that learns camera-specific projections to transform the mismatched features of each view into a common space, in which discriminative cross-view features are extracted. A cross-view consistency regularization is further introduced to model the correlation between the view-specific feature transformations of different camera views; it reflects their natural relation and plays a significant role in avoiding overfitting. A kernelized cross-view discriminant component analysis is also presented. Extensive experiments show that asymmetric distance modeling is important for person reidentification, addressing the core difficulty of matching across disjoint views, and the proposed model reports superior performance compared with related distance learning methods on six publicly available data sets.
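The key idea in the abstract — an asymmetric distance in which each camera view gets its own projection into a shared space, plus a consistency regularizer tying the two projections together — can be sketched as follows. This is a minimal NumPy illustration based only on the abstract, not the authors' implementation: the matrices `U` and `V`, the dimensions, and the Frobenius-norm form of the regularizer are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out = 6, 3  # hypothetical feature and common-space dimensions

# Camera-specific projections: U for view a, V for view b.
# Using two different matrices is what makes the distance "asymmetric".
U = rng.standard_normal((d_out, d_in))
V = rng.standard_normal((d_out, d_in))

def asym_dist(xa, xb, U, V):
    """Asymmetric distance: each view's feature is mapped by its own
    projection into the common space before a Euclidean comparison."""
    return np.linalg.norm(U @ xa - V @ xb)

def consistency_penalty(U, V, lam=0.1):
    """Cross-view consistency regularizer (assumed Frobenius form):
    penalizes divergence between the two view-specific transforms,
    which the abstract credits with avoiding overfitting."""
    return lam * np.linalg.norm(U - V, "fro") ** 2

xa = rng.standard_normal(d_in)  # feature extracted under camera view a
xb = rng.standard_normal(d_in)  # feature extracted under camera view b

# A training objective would combine the matching distance for
# same-identity pairs with the consistency regularizer.
loss = asym_dist(xa, xb, U, V) ** 2 + consistency_penalty(U, V)
```

Note that when `U == V` the model collapses to an ordinary symmetric projected distance, so the regularizer interpolates between fully view-specific and fully shared transformations.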
