
Deep hard modality alignment for visible thermal person re-identification


Abstract

Visible Thermal Person Re-Identification (VTReID) is essentially a cross-modality problem, widely encountered in real night-time surveillance scenarios, and still in need of substantial performance improvement. In this work, we design a simple but effective Hard Modality Alignment Network (HMAN) framework to learn modality-robust features. Since current VTReID works do not consider the cross-modality discrepancy imbalance, their models are likely to suffer from selective alignment behavior. To solve this problem, we propose a novel Hard Modality Alignment (HMA) loss that simultaneously balances and reduces the modality discrepancies. Specifically, we mine the hard feature subspace with large modality discrepancies and abandon the easy feature subspace with small modality discrepancies to make the modality distributions more distinguishable. To mitigate the discrepancy imbalance, we pay more attention to reducing the modality discrepancies of the hard feature subspace than to those of the easy feature subspace. Furthermore, we propose to jointly relieve the modality heterogeneity of global and local visual semantics to further boost cross-modality retrieval performance. Experiments demonstrate the effectiveness of the proposed method, which achieves superior performance over state-of-the-art methods on the RegDB and SYSU-MM01 datasets. (C) 2020 Elsevier B.V. All rights reserved.
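The abstract describes the HMA idea only in words. As a rough illustrative sketch (not the paper's actual formulation: the function name, the top-k mining rule, and the use of modality-mean features are all assumptions for illustration), mining and aligning a hard feature subspace might look like:

```python
import numpy as np

def hma_loss_sketch(feats_visible, feats_thermal, hard_ratio=0.5):
    """Illustrative hard-subspace alignment loss (hypothetical formulation).

    Rows are samples, columns are feature dimensions. The per-dimension
    gap between the modality-mean features is treated as the modality
    discrepancy; the dimensions with the largest gaps form the "hard"
    subspace, and the "easy" subspace is abandoned.
    """
    mu_v = feats_visible.mean(axis=0)
    mu_t = feats_thermal.mean(axis=0)
    disc = np.abs(mu_v - mu_t)                # per-dimension modality gap
    k = max(1, int(hard_ratio * disc.size))   # size of the hard subspace
    hard = np.sort(disc)[::-1][:k]            # mine the k hardest dims
    return hard.mean()                        # align only the hard subspace
```

Minimizing such a loss pushes the network to shrink the largest modality gaps first, which is the discrepancy-balancing behavior the abstract attributes to HMA.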

Bibliographic details

  • Source
    Pattern Recognition Letters | 2020, No. 5 | pp. 195-201 | 7 pages
  • Author affiliations

    Beijing Univ Posts & Telecommun, Beijing Key Lab Network Syst & Network Culture, Beijing, Peoples R China | Beijing Univ Posts & Telecommun, Sch Informat & Commun Engn, Beijing 100876, Peoples R China;

    China Mobile Res Inst, 32 Xuanwumen West St, Beijing, Peoples R China;

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI)
  • Format: PDF
  • Language: English
  • Keywords

    Visible thermal person re-identification; Deep modality alignment; Hard subspace mining;

