Neurocomputing

Hashing person re-ID with self-distilling smooth relaxation


Abstract

Person re-identification (re-ID) has made substantial progress in recent years; however, searching for the target person in a short time remains challenging. Re-ID with deep hashing is a shortcut to fast search but, limited by the expressiveness of binary codes, the performance of hashing methods is not satisfactory. Moreover, to further speed up retrieval, researchers tend to reduce the number of feature bits, which causes additional performance degradation. In this paper, we design attribute-based fast retrieval (AFR), which leverages the attribute predictions of a model trained in a binary-classification manner tailor-made for hashing. The attribute information is also used to refine the global feature representation through an attribute-guided attention block (AAB). Then, to fully exploit deep features when generating hash codes, we propose a binary code learning method named self-distilling smooth relaxation (SSR). In this method, a simple yet effective regularization is introduced to distill the quantized knowledge within the model itself, thus mitigating the lack of semantic guidance in traditional non-linear relaxations. We manually label attributes for each person in the CUHK03 dataset and evaluate our method on four authoritative public benchmarks (Market-1501, Market-1501+500K, CUHK03, and DukeMTMC-reID). The experimental results indicate that with SSR and AAB we surpass all state-of-the-art hashing methods, and that, compared with reducing the number of feature bits, the AFR strategy is more effective at saving search time. (c) 2021 Elsevier B.V. All rights reserved.
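The abstract does not give the exact SSR formulation, so the following is only a rough NumPy sketch of the general deep-hashing retrieval pipeline it describes: a smooth (tanh) relaxation of real-valued features, a quantization-style penalty standing in for the self-distillation regularizer, and attribute-filtered Hamming-distance search in the spirit of AFR. All names (`smooth_relax`, `quantization_penalty`) and the choice to treat the first 8 code bits as predicted attributes are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for real-valued embeddings from a trained re-ID backbone:
# 1000 gallery images, each mapped to a 64-dimensional feature.
gallery_feats = rng.standard_normal((1000, 64))
query_feat = gallery_feats[42].copy()  # query known to match gallery entry 42

def smooth_relax(x):
    """Generic tanh relaxation pushing real features toward {-1, +1}."""
    return np.tanh(x)

def quantization_penalty(x):
    """Mean squared gap between the relaxed features and their binarized
    codes -- a simple stand-in for the regularizer the abstract describes."""
    r = smooth_relax(x)
    return float(np.mean((r - np.sign(r)) ** 2))

def binarize(x):
    """Final hash codes: the sign of each feature dimension."""
    return (x > 0).astype(np.uint8)

def hamming(a, b):
    """Per-row Hamming distance between binary codes."""
    return np.count_nonzero(a != b, axis=-1)

gallery_codes = binarize(gallery_feats)
query_code = binarize(query_feat)

# Attribute-based fast retrieval, sketched as hard pre-filtering: treat the
# first 8 code bits as predicted binary attributes (an illustrative choice)
# and rank only the gallery entries whose attributes match the query's.
candidates = np.where((gallery_codes[:, :8] == query_code[:8]).all(axis=1))[0]
dists = hamming(gallery_codes[candidates], query_code)
best = int(candidates[np.argmin(dists)])
```

Because the pre-filter shrinks the candidate set before any Hamming comparison, retrieval time drops without shortening the codes themselves, which is the trade-off the abstract argues AFR wins over simply reducing the number of feature bits.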

Record details

  • Source
    Neurocomputing | 2021, Issue 30 | pp. 111-124 | 14 pages
  • Author affiliations

    Xi'an Jiaotong University, School of Information & Communication Engineering, SMILES LAB, Xi'an 710049, Shaanxi, People's Republic of China;

    Xi'an Jiaotong University, School of Information & Communication Engineering, SMILES LAB, Xi'an 710049, Shaanxi, People's Republic of China;

    Xi'an Jiaotong University, School of Software Engineering, Xi'an 710049, Shaanxi, People's Republic of China;

    Xi'an Jiaotong University, School of Information & Communication Engineering, SMILES LAB, Xi'an 710049, Shaanxi, People's Republic of China | Xi'an Jiaotong University, Key Lab of Intelligent Networks & Network Security, Ministry of Education, Xi'an 710049, Shaanxi, People's Republic of China;

  • Indexing: Science Citation Index (SCI); Engineering Index (EI)
  • Original format: PDF
  • Language: English
  • Chinese Library Classification:
  • Keywords

    Person re-ID; Deep hashing; Attribute learning; Knowledge distillation;


