
Urgent image-to-video person reidentification by cross-media transfer cycle generative adversarial networks


Abstract

Recently, image-to-video person reidentification (IVPR) has attracted enormous research interest, and various models have been proposed. IVPR is often applied in urgent situations, such as suspect tracking and locating missing persons. Existing IVPR models operate under supervised frameworks, which require a large number of labeled image-to-video pairs. This severely limits their real-time usefulness in urgent situations, because annotation is highly time-consuming. To solve the urgent image-to-video person reidentification (UIVPR) problem, we propose a cross-media transfer cycle generative adversarial network (CTC-GAN). Our model aims to alleviate the "media gap" between image-to-video pairs without requiring newly labeled pairs. We use an existing fully labeled dataset as guidance for CTC-GAN to achieve domain adaptation and make urgent image-to-video matching easier for person reidentification. We introduce cycle GANs for image (video)-to-video (image) translation and extract cross-media features using a triplet constraint on the different media features in the source domain. Furthermore, we train the model in the labeled source domain by reconstructing each image (video) as its related video (image). We then train the model in the unlabeled target domain by reconstructing each sample from itself along with the source data, so as to ensure that the discriminative model can be used in the target domain. Through CTC-GAN, our network retains as much pedestrian-discriminative information as possible to ensure the matching rate in the target domain. To validate the effectiveness of our approach, we conduct extensive experiments on two large-scale person reidentification datasets against six existing state-of-the-art unsupervised person reidentification models, and the experimental results demonstrate that our method solves UIVPR effectively. (C) 2019 SPIE and IS&T
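The abstract describes the CTC-GAN objective only at a high level. The sketch below is a minimal, hypothetical illustration (not the authors' released code) of how the described pieces could fit together in the labeled source domain: cycle-consistent image-to-video and video-to-image feature translation with adversarial discriminators, plus a triplet constraint on translated cross-media features. The network architectures, feature dimension, and loss weights are placeholder assumptions, written in PyTorch.

```python
# Minimal sketch of a CTC-GAN-style generator objective, under assumed shapes
# and weights. Not the paper's implementation.
import torch
import torch.nn as nn

FEAT_DIM = 256  # assumed shared feature dimension for image and video branches

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, out_dim))

G_i2v = mlp(FEAT_DIM, FEAT_DIM)   # translate image features to the video domain
G_v2i = mlp(FEAT_DIM, FEAT_DIM)   # translate video features to the image domain
D_vid = mlp(FEAT_DIM, 1)          # discriminator for the video feature domain
D_img = mlp(FEAT_DIM, 1)          # discriminator for the image feature domain

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
triplet = nn.TripletMarginLoss(margin=0.3)  # margin is a placeholder assumption

def generator_loss(img_feat, vid_feat, pos_vid, neg_vid):
    """One generator step on labeled source-domain features.

    img_feat: anchor image features; vid_feat: video features of the same batch;
    pos_vid / neg_vid: same-person and different-person video features for the
    triplet term. In full training the discriminators would be updated with
    their own objective and frozen here; this sketch omits that step.
    """
    fake_vid = G_i2v(img_feat)            # image -> video translation
    fake_img = G_v2i(vid_feat)            # video -> image translation

    # Adversarial terms: translated features should fool the domain discriminators.
    adv = (bce(D_vid(fake_vid), torch.ones(fake_vid.size(0), 1)) +
           bce(D_img(fake_img), torch.ones(fake_img.size(0), 1)))

    # Cycle consistency: translating back should reconstruct the original features.
    cyc = l1(G_v2i(fake_vid), img_feat) + l1(G_i2v(fake_img), vid_feat)

    # Cross-media triplet constraint: the translated image feature should lie
    # closer to the same person's video feature than to a different person's.
    tri = triplet(fake_vid, pos_vid, neg_vid)

    return adv + 10.0 * cyc + tri  # loss weights are illustrative only

# Toy usage with random features standing in for CNN outputs.
img = torch.randn(8, FEAT_DIM)
vid, pos, neg = (torch.randn(8, FEAT_DIM) for _ in range(3))
loss = generator_loss(img, vid, pos, neg)
loss.backward()
```

In this reading, the target-domain stage mentioned in the abstract would reuse the same adversarial and cycle-reconstruction terms on unlabeled target features, dropping the triplet term for which no identity labels exist.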

Bibliographic details

  • Source
    Journal of electronic imaging | 2019, Issue 1 | 013052.1-013052.7 | 7 pages
  • Authors

    Yu Benzhi; Xu Ning;

  • Author affiliations

    Wuhan Univ Technol, Sch Comp Sci & Technol, Wuhan, Hubei, Peoples R China|Wuhan Univ Technol, Hubei Key Lab Broadband Wireless Commun & Sensor, Wuhan, Hubei, Peoples R China;

    Wuhan Univ Technol, Hubei Key Lab Broadband Wireless Commun & Sensor, Wuhan, Hubei, Peoples R China|Wuhan Univ Technol, Sch Informat Engn, Wuhan, Hubei, Peoples R China;

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI)
  • Original format: PDF
  • Language of text: eng
  • CLC classification
  • Keywords

    unsupervised person reidentification; image-to-video; transfer generative adversarial networks; deep learning;

