Journal of intelligent & fuzzy systems: Applications in Engineering and Technology

A two-stream network with joint spatial-temporal distance for video-based person re-identification



Abstract

Video-based person re-identification aims to match videos of pedestrians captured by non-overlapping cameras. Video provides both spatial and temporal information. However, most existing methods do not combine these two types of information well, and they ignore that the two are of different importance in most cases. To address these issues, we propose a two-stream network with a joint distance metric for measuring the similarity of two videos. The proposed two-stream network has several appealing properties. First, the spatial stream focuses on multiple parts of a person and outputs robust local spatial features. Second, a lightweight and effective temporal information extraction block is introduced into video-based person re-identification. In the inference stage, the distance between two videos is measured by a weighted sum of the spatial distance and the temporal distance. We conduct extensive experiments on four public datasets, i.e., MARS, PRID2011, iLIDS-VID, and DukeMTMC-VideoReID, to show that our proposed approach outperforms existing methods in video-based person re-ID.
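The joint metric described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `joint_distance`, the Euclidean distance choice, and the weight `w_spatial` are all assumptions; the paper only states that the final distance is a weighted sum of a spatial distance and a temporal distance between the two videos' feature representations.

```python
import numpy as np

def joint_distance(spatial_q, spatial_g, temporal_q, temporal_g, w_spatial=0.5):
    """Joint spatial-temporal distance between a query and a gallery video.

    spatial_q/spatial_g: spatial-stream feature vectors of the two videos.
    temporal_q/temporal_g: temporal-stream feature vectors of the two videos.
    w_spatial: hypothetical weight on the spatial term; the temporal term
    gets (1 - w_spatial), so the two distances trade off against each other.
    """
    d_spatial = float(np.linalg.norm(spatial_q - spatial_g))
    d_temporal = float(np.linalg.norm(temporal_q - temporal_g))
    return w_spatial * d_spatial + (1.0 - w_spatial) * d_temporal
```

At inference, every query-gallery pair would be scored with such a distance and the gallery ranked in ascending order; weighting the two terms differently reflects the abstract's observation that spatial and temporal cues are of different importance in most cases.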
