
Visible-infrared cross-modality person re-identification based on whole-individual training



Abstract

Visible-infrared cross-modality person re-identification (VI-ReID) aims to match person images across cameras of different modalities, compensating for the fact that ReID cannot be performed with visible-light images in dark environments. The main difficulty of the VI-ReID task is the huge discrepancy between the visible and infrared modalities. In this paper, a novel whole-individual training (WIT) model is proposed for VI-ReID, based on the idea of pulling the modalities together as wholes while distinguishing individuals. Specifically, the model is divided into a whole part and an individual part. Two loss functions are developed for the whole part: a center maximum mean discrepancy (CMMD) loss and an intra-class heterogeneous center (ICHC) loss. Ignoring identity differences and treating each modality as a whole, the CMMD loss pulls the centers of the two modalities together. Ignoring modality differences and treating each identity as a whole, the ICHC loss pulls images with the same identity toward their cross-modality center. In the individual part, a cross-modality triplet (CMT) loss is employed, which separates pedestrian images with different identities. The WIT model thus helps the network identify pedestrian images in an all-round way. Experiments show that the proposed method outperforms existing methods on the two most popular benchmark datasets, SYSU-MM01 and RegDB. (c) 2021 Elsevier B.V. All rights reserved.
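As a rough illustration of the three losses named above, here is a minimal PyTorch-style sketch. The abstract does not give the exact formulations, so the specifics below (squared Euclidean distance between modality centers for CMMD, per-identity centers over both modalities for ICHC, a batch-hard margin triplet across modalities for CMT, and the loss weights) are assumptions for illustration, not the paper's actual definitions.

```python
import torch
import torch.nn.functional as F

def cmmd_loss(vis_feats, ir_feats):
    # Center maximum mean discrepancy (assumed form): ignore identities,
    # treat each modality as a whole, and pull the two modality centers together.
    vis_center = vis_feats.mean(dim=0)
    ir_center = ir_feats.mean(dim=0)
    return ((vis_center - ir_center) ** 2).sum()

def ichc_loss(feats, labels):
    # Intra-class heterogeneous center loss (assumed form): ignore modality,
    # treat each identity as a whole, and pull its images (from both
    # modalities) toward that identity's cross-modality center.
    loss = feats.new_zeros(())
    ids = labels.unique()
    for pid in ids:
        cls_feats = feats[labels == pid]
        center = cls_feats.mean(dim=0, keepdim=True)
        loss = loss + ((cls_feats - center) ** 2).sum(dim=1).mean()
    return loss / ids.numel()

def cmt_loss(vis_feats, vis_labels, ir_feats, ir_labels, margin=0.3):
    # Cross-modality triplet loss (assumed batch-hard form): for each visible
    # anchor, the farthest same-identity infrared sample is the positive and
    # the closest different-identity infrared sample is the negative.
    dist = torch.cdist(vis_feats, ir_feats)                  # (Nv, Ni) distances
    pos = vis_labels.unsqueeze(1) == ir_labels.unsqueeze(0)  # same identity?
    d_ap = dist.masked_fill(~pos, float('-inf')).max(dim=1).values
    d_an = dist.masked_fill(pos, float('inf')).min(dim=1).values
    return F.relu(d_ap - d_an + margin).mean()

# Toy usage with random features; in training these would come from the
# shared feature extractor, and the 0.5 weights are placeholders.
vis_feats, ir_feats = torch.randn(8, 256), torch.randn(8, 256)
vis_labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
ir_labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
feats = torch.cat([vis_feats, ir_feats])
labels = torch.cat([vis_labels, ir_labels])
total = (cmt_loss(vis_feats, vis_labels, ir_feats, ir_labels)
         + 0.5 * cmmd_loss(vis_feats, ir_feats)
         + 0.5 * ichc_loss(feats, labels))
```

Under this reading, the terms are complementary: CMMD and ICHC compress cross-modality variation at the whole-modality and whole-identity level, while CMT keeps different identities apart.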

Bibliographic Information

  • Source
    Neurocomputing | 2021, Issue 14 | pp. 1-11 | 11 pages
  • Author Affiliations

    Beijing Jiaotong Univ, Sch Elect Informat Engn, Beijing, Peoples R China;

    Beijing Jiaotong Univ, Sch Elect Informat Engn, Beijing, Peoples R China;

    Beijing Jiaotong Univ, Sch Elect Informat Engn, Beijing, Peoples R China;

    Beijing Jiaotong Univ, Sch Elect Informat Engn, Beijing, Peoples R China;

    Beijing Jiaotong Univ, Sch Elect Informat Engn, Beijing, Peoples R China;

    Beijing Jiaotong Univ, Sch Elect Informat Engn, Beijing, Peoples R China | Synth Elect Technol Co Ltd, Jinan, Peoples R China;

  • Indexed In: Science Citation Index (SCI, USA); Engineering Index (EI, USA)
  • Original Format: PDF
  • Language: eng
  • Chinese Library Classification (CLC)
  • Keywords

    Person re-identification; Cross-modality; Deep learning; Whole-individual training;

