Journal of Visual Communication & Image Representation

Exploiting multigranular salient features with hierarchical multi-mode attention network for pedestrian re-IDentification


Abstract

In this paper, we propose an end-to-end hierarchical multi-mode attention network with adaptive fusion (HMAN-HAF) to learn salient features at different levels for re-ID tasks. First, according to the characteristics of each layer, the hierarchical multi-mode attention network (HMAN) adopts a different attention model for salient feature learning at each level: refined channel-wise attention (CA) captures high-level, semantically valuable information, an attentive region model (AR) detects salient regions in the low layer, and fused attention (FA) captures the salient regions of valuable channels in the middle layer. Second, hierarchical adaptive fusion (HAF) is constructed to exploit the complementary strengths of these different-level salient features. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods on the challenging Market-1501, DukeMTMC-reID and CUHK03 benchmarks.
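The three attention modes and the adaptive fusion step described above can be sketched in a few lines. This is a minimal, illustrative numpy sketch, not the authors' implementation: the projection matrix `w`, the fusion logits `alpha`, and all function names are hypothetical stand-ins for learned network components.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w):
    """CA: gate each channel by its global importance (high layer).
    feat: (C, H, W) feature map; w: (C, C) hypothetical learned projection."""
    desc = feat.mean(axis=(1, 2))          # global average pooling -> (C,)
    gate = sigmoid(w @ desc)               # per-channel gates in (0, 1)
    return feat * gate[:, None, None]

def attentive_region(feat):
    """AR: weight spatial positions by a saliency map (low layer)."""
    amap = sigmoid(feat.mean(axis=0))      # channel-mean saliency -> (H, W)
    return feat * amap[None, :, :]

def fused_attention(feat, w):
    """FA: salient regions of valuable channels (middle layer) --
    here simply the composition of the two gates above."""
    return attentive_region(channel_attention(feat, w))

def adaptive_fusion(descs, alpha):
    """HAF: softmax-weighted combination of per-level descriptors.
    descs: list of (D,) vectors, one per level; alpha: fusion logits."""
    weights = np.exp(alpha) / np.exp(alpha).sum()
    return sum(wgt * d for wgt, d in zip(weights, descs))
```

With equal logits (`alpha = 0`), `adaptive_fusion` reduces to a plain average of the level descriptors; training would instead learn `alpha` so that more discriminative levels dominate the fused representation.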