Journal: IEEE Transactions on Circuits and Systems for Video Technology

Learning Bidirectional Temporal Cues for Video-Based Person Re-Identification


Abstract

This paper presents an end-to-end learning architecture for video-based person re-identification that integrates convolutional neural networks (CNNs) and bidirectional recurrent neural networks (BRNNs). Given a video of consecutive frames, per-frame features are extracted with the CNN and then fed into the BRNN to obtain a final spatio-temporal representation of the video. Specifically, the CNN acts as a spatial feature extractor, while the BRNN captures the temporal cues of sequential frames in both the forward and backward directions simultaneously. The whole network is trained end-to-end in a joint identification and verification manner. Experimental results on benchmark data sets show that the proposed model can effectively learn spatio-temporal features relevant for re-identification and outperforms existing video-based person re-identification methods.
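The abstract describes fusing a forward and a backward temporal pass over per-frame CNN features into one video-level representation. The following is a minimal NumPy sketch of that idea, not the authors' implementation: a simple Elman-style RNN is run over the frame features in both directions and the two final hidden states are concatenated. The dimensions, weight initialization, and shared forward/backward weights are illustrative assumptions.

```python
import numpy as np

def rnn_pass(frame_feats, W_in, W_h, reverse=False):
    """Run a simple Elman RNN over frame features; return the last hidden state.

    frame_feats: (T, d_in) array standing in for per-frame CNN outputs.
    """
    T = frame_feats.shape[0]
    h = np.zeros(W_h.shape[0])
    order = range(T - 1, -1, -1) if reverse else range(T)
    for t in order:
        h = np.tanh(frame_feats[t] @ W_in + h @ W_h)
    return h

def bidirectional_video_embedding(frame_feats, W_in, W_h):
    """Fuse forward and backward temporal passes into one video-level vector."""
    h_fwd = rnn_pass(frame_feats, W_in, W_h, reverse=False)
    h_bwd = rnn_pass(frame_feats, W_in, W_h, reverse=True)
    # Concatenation yields the spatio-temporal representation of the video.
    return np.concatenate([h_fwd, h_bwd])

rng = np.random.default_rng(0)
T, d_in, d_h = 8, 16, 4                         # 8 frames, 16-dim features, 4-dim hidden
frame_feats = rng.standard_normal((T, d_in))    # stand-in for CNN frame features
W_in = rng.standard_normal((d_in, d_h)) * 0.1
W_h = rng.standard_normal((d_h, d_h)) * 0.1
emb = bidirectional_video_embedding(frame_feats, W_in, W_h)
print(emb.shape)
```

In the paper's setting the resulting embedding would feed a joint identification (classification) and verification (pair-matching) loss; here only the representation step is sketched.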
