
An End-to-End Foreground-Aware Network for Person Re-Identification



Abstract

Person re-identification is the crucial task of identifying pedestrians of interest across multiple surveillance camera views. A pedestrian is usually represented by features extracted from a rectangular image region that inevitably contains scene background, which introduces ambiguity when distinguishing different pedestrians and degrades accuracy. We therefore propose an end-to-end foreground-aware network that separates the foreground from the background by learning a soft mask for person re-identification. In our method, in addition to the pedestrian ID, which supervises the foreground, we introduce the camera ID of each pedestrian image for background modeling. The foreground branch and the background branch are optimized collaboratively. With the proposed target attention loss, the pedestrian features extracted from the foreground branch become largely insensitive to the background, which greatly reduces the negative impact of background variation on pedestrian matching across different camera views. Notably, in contrast to existing methods, our approach does not require an additional dataset to train a human landmark detector or a segmentation model to locate background regions. Experimental results on three challenging datasets, i.e., Market-1501, DukeMTMC-reID, and MSMT17, demonstrate the effectiveness of our approach.
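The core idea in the abstract — one shared feature map split by a learned soft mask into a foreground branch (supervised by pedestrian ID) and a background branch (supervised by camera ID) — can be sketched as follows. This is a minimal NumPy illustration of the masking mechanism only; the layer shapes, the 1x1-conv mask head, and the function names are assumptions for illustration, not the paper's exact architecture or training code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def foreground_background_split(feat, w_mask):
    """Split a backbone feature map into foreground/background descriptors
    via a learned soft mask (hypothetical sketch, not the paper's code).

    feat:   (C, H, W) feature map from a shared CNN backbone
    w_mask: (C,) weights of a 1x1 conv producing a one-channel mask logit
    """
    logits = np.tensordot(w_mask, feat, axes=([0], [0]))   # (H, W)
    mask = sigmoid(logits)                                 # soft mask in (0, 1)
    fg = feat * mask          # foreground branch (pedestrian-ID supervision)
    bg = feat * (1.0 - mask)  # background branch (camera-ID supervision)
    # global average pooling -> one descriptor per branch
    return fg.mean(axis=(1, 2)), bg.mean(axis=(1, 2))

C, H, W = 8, 4, 4
rng = np.random.default_rng(0)
feat = rng.random((C, H, W))
w = rng.standard_normal(C)
fg_vec, bg_vec = foreground_background_split(feat, w)
# The two pooled descriptors recompose the full pooled feature exactly,
# since mask + (1 - mask) = 1 at every spatial location.
assert np.allclose(fg_vec + bg_vec, feat.mean(axis=(1, 2)))
```

Because both branches share one backbone and the mask is a soft weighting rather than a hard segmentation, the whole pipeline stays differentiable and trainable end to end, which is why no external landmark detector or segmentation dataset is needed.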

Bibliographic Details

  • Source
    IEEE Transactions on Image Processing | 2021, Issue 1 | pp. 2060-2071 | 12 pages
  • Author Affiliations

    CAS Key Laboratory of Technology in Geo-spatial Information Processing and Application System, Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China;

    CAS Key Laboratory of Technology in Geo-spatial Information Processing and Application System, Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China;

    Noah’s Ark Lab, Huawei Technologies Company Limited, Shenzhen, China;

    Huawei Cloud EI Product Department, Bellevue, WA, USA;

    Cloud & AI, Huawei Technologies, Shenzhen, China;

    CAS Key Laboratory of Technology in Geo-spatial Information Processing and Application System, Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China;

  • Indexing
  • Format: PDF
  • Language: English
  • CLC Classification
  • Keywords

    Feature extraction; Cameras; Data models; Body regions; Training; Visualization; Spatiotemporal phenomena;

  • Date Added: 2022-08-18 22:52:47
