IEEE International Conference on Data Engineering Workshops

Semantic Parsing and Attentive Feature-Temporal Pooling Network for Video-Based Person Image Retrieval



Abstract

Video person re-identification is a crucial task due to its applications in visual surveillance and human-computer interaction. The purpose of this kind of algorithm is to search a large collection of cross-device surveillance videos for the pedestrian image corresponding to a given probe image. In recent years, more and more scholars have begun to regard this problem as a special type of image retrieval. Existing works mainly focus on extracting representative features from the whole image and integrating those features across a sequence through temporal modeling. However, these approaches rarely consider harnessing local visual cues to enhance the power of image-level feature learning. In this paper, we propose a novel neural network that incorporates human semantic parsing to improve image-level representations. Specifically, the human semantic parsing network segments a human image into multiple parts with fine-grained semantics, and the following attentive feature pooling layer selects the most significant body parts to enhance the power of the feature representations. Carefully designed experiments on two public datasets show the effectiveness of each component of the proposed deep network, improving state-of-the-art video person sequence retrieval in rank-1 accuracy by ~13% on iLIDS-VID [1] and by ~7% on PRID-2011.
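The abstract describes part-level attentive feature pooling followed by temporal pooling over a frame sequence. Below is a minimal PyTorch sketch of how such a module might be structured, assuming part-level features already produced by a backbone and a human semantic parsing network; the class name, tensor shapes, and mean temporal pooling are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentiveFeatureTemporalPooling(nn.Module):
    """Sketch: weight per-part features with learned attention, then pool over time.

    Input is assumed to be part-level features of shape
    (batch, time, num_parts, feat_dim); shapes are illustrative.
    """

    def __init__(self, feat_dim: int):
        super().__init__()
        # Scores one attention weight per body part from its feature vector.
        self.part_attention = nn.Linear(feat_dim, 1)

    def forward(self, part_feats: torch.Tensor) -> torch.Tensor:
        # Attentive feature pooling: softmax over the parts dimension selects
        # the most significant body parts within each frame.
        scores = self.part_attention(part_feats)         # (b, t, p, 1)
        weights = F.softmax(scores, dim=2)                # normalize over parts
        frame_feats = (weights * part_feats).sum(dim=2)   # (b, t, feat_dim)

        # Temporal pooling: average frame-level features into one clip descriptor
        # (a simple placeholder for the paper's temporal modeling).
        clip_feat = frame_feats.mean(dim=1)               # (b, feat_dim)
        return clip_feat


if __name__ == "__main__":
    # Toy usage: 2 clips, 8 frames each, 5 semantic parts, 256-d features.
    pooling = AttentiveFeatureTemporalPooling(feat_dim=256)
    dummy = torch.randn(2, 8, 5, 256)
    print(pooling(dummy).shape)  # torch.Size([2, 256])
```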
