Fine-grained-based multi-feature fusion for occluded person re-identification*

Abstract

Many previous occluded person re-identification (re-ID) methods rely on additional cues (pose estimation or semantic parsing models) to focus on non-occluded regions. However, these methods depend heavily on the quality of those cues and often capture pedestrian features through complex, purpose-built modules. In this work, we propose a simple Fine-Grained Multi-Feature Fusion Network (FGMFN) to extract discriminative features; it is a dual-branch structure consisting of a global feature branch and a partial feature branch. First, we use a chunking strategy to extract multi-granularity features so that the captured pedestrian information is more comprehensive. Second, a spatial transformer network is introduced to localize the pedestrian's upper body, and a relation-aware attention module is then applied to explore fine-grained information. Finally, we fuse the features from the two branches to produce a more robust pedestrian representation. Extensive experiments verify the effectiveness of our method under occlusion scenarios.
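The abstract outlines a dual-branch design: a global branch over the whole feature map, and a partial branch that localizes the upper body with a spatial transformer, splits it into horizontal chunks, relates the chunks with attention, and fuses the result with the global feature. The PyTorch sketch below illustrates that flow only; the ResNet-50 backbone, the number of chunks, and the use of a standard multi-head self-attention layer in place of the paper's relation-aware attention module are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the dual-branch idea from the abstract (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class UpperBodySTN(nn.Module):
    """Spatial transformer predicting an affine crop of the feature map,
    assumed here to learn a focus on the upper body."""
    def __init__(self, in_channels):
        super().__init__()
        self.loc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_channels, 2 * 3),
        )
        # Start from the identity transform.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)


class FGMFNSketch(nn.Module):
    """Global branch + partial branch, concatenated into one embedding."""
    def __init__(self, num_chunks=4, embed_dim=256):
        super().__init__()
        backbone = models.resnet50(weights=None)
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])  # B x 2048 x H x W
        self.global_head = nn.Linear(2048, embed_dim)
        self.stn = UpperBodySTN(2048)
        self.num_chunks = num_chunks
        self.part_head = nn.Linear(2048, embed_dim)
        # Plain self-attention over chunk features stands in for the
        # relation-aware attention module (an assumption).
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)

    def forward(self, x):
        feat = self.backbone(x)                                        # B x 2048 x H x W
        g = self.global_head(F.adaptive_avg_pool2d(feat, 1).flatten(1))

        part = self.stn(feat)                                          # localize upper body
        chunks = F.adaptive_avg_pool2d(part, (self.num_chunks, 1))     # B x 2048 x S x 1
        chunks = chunks.squeeze(-1).transpose(1, 2)                    # B x S x 2048
        chunks = self.part_head(chunks)                                # B x S x embed_dim
        chunks, _ = self.attn(chunks, chunks, chunks)                  # relate the chunks
        p = chunks.mean(dim=1)

        return torch.cat([g, p], dim=1)                                # fused representation


if __name__ == "__main__":
    model = FGMFNSketch()
    out = model(torch.randn(2, 3, 256, 128))  # typical re-ID input size
    print(out.shape)                          # torch.Size([2, 512])
```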