Image and Vision Computing

Attribute saliency network for person re-identification



Abstract

This paper proposes the Attribute Saliency Network (ASNet), a deep learning model that uses attribute and saliency map learning for the person re-identification (re-ID) task. Many re-ID methods use human pose or local body parts, either at fixed positions or automatically learned, to guide the learning. Person attributes, although they can describe a person in greater detail, are seldom used for retrieving a person's images. We therefore propose to integrate person attribute learning into the re-ID model and let it learn jointly with the person identity network. This arrangement produces a synergistic effect, so better representations are encoded. In addition, both visual and text retrieval, such as querying by clothing color or hair length, become possible. We also propose to improve the granularity of the heatmap by generating both global person attribute maps and body part saliency maps, capturing fine-grained details of the person and thus enhancing the discriminative power of the encoded vectors. As a result, we achieve state-of-the-art performance. On the Market1501 dataset, we achieve 90.5% mAP and 96.3% Rank-1 accuracy. On DukeMTMC-reID, we obtain 82.7% mAP and 90.6% Rank-1 accuracy. (c) 2021 Elsevier B.V. All rights reserved.
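The abstract describes a multi-task architecture: a shared backbone whose features feed an identity head, a global attribute head, and saliency-pooled part features. The sketch below is not the authors' code; it is a minimal PyTorch illustration of that joint attribute/identity idea, and all names, layer sizes, and the choice of a ResNet-50 backbone are assumptions for illustration only.

```python
# Minimal sketch of joint attribute + identity learning with part saliency maps.
# Not the published ASNet implementation; dimensions and heads are assumed.
import torch
import torch.nn as nn
from torchvision import models

class ASNetSketch(nn.Module):
    def __init__(self, num_ids=751, num_attrs=27, num_parts=4):
        super().__init__()
        backbone = models.resnet50(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # B x 2048 x H x W feature map
        self.saliency = nn.Conv2d(2048, num_parts, kernel_size=1)       # one saliency map per body part
        self.id_head = nn.Linear(2048 * (1 + num_parts), num_ids)       # person identity logits
        self.attr_head = nn.Linear(2048, num_attrs)                     # global attribute logits

    def forward(self, x):
        f = self.features(x)                                   # B x 2048 x H x W
        g = f.mean(dim=(2, 3))                                  # global descriptor, B x 2048
        s = torch.softmax(self.saliency(f).flatten(2), dim=2)   # B x P x (H*W), spatial attention per part
        parts = torch.bmm(s, f.flatten(2).transpose(1, 2))      # B x P x 2048, saliency-pooled part features
        embed = torch.cat([g, parts.flatten(1)], dim=1)         # concatenated re-ID embedding
        return self.id_head(embed), self.attr_head(g), embed

# Joint training: identity cross-entropy plus attribute BCE, so both tasks
# shape the shared representation (the "synergistic effect" in the abstract).
model = ASNetSketch()
imgs = torch.randn(2, 3, 256, 128)
id_logits, attr_logits, embed = model(imgs)
loss = nn.CrossEntropyLoss()(id_logits, torch.tensor([0, 1])) + \
       nn.BCEWithLogitsLoss()(attr_logits, torch.zeros(2, 27))
```

At retrieval time, `embed` would be compared by cosine or Euclidean distance for visual queries, while the attribute logits support text-style queries (e.g. clothing color, hair length) as mentioned in the abstract.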
