European Conference on Computer Vision

Discriminability Distillation in Group Representation Learning


Abstract

Learning group representations is a common concern in tasks where the basic unit is a group, set, or sequence. Previous work has tackled it by aggregating the elements in a group according to an indicator that is either defined by humans, such as quality or saliency, or generated by a black box, such as an attention score. This article provides a more essential and explainable view. We claim that the most significant indicator of whether a group representation benefits from one of its elements is not the quality or an inexplicable score, but the discriminability w.r.t. the model. We explicitly define the discriminability using embedded class centroids on a proxy set. We show that the discriminability knowledge has good properties: it can be distilled by a lightweight distillation network and generalizes to unseen target sets. The whole procedure is denoted discriminability distillation learning (DDL). The proposed DDL can be flexibly plugged into many group-based recognition tasks without influencing the original training procedures. Comprehensive experiments on various tasks have proven the effectiveness of DDL in both accuracy and efficiency. Moreover, it pushes forward the state-of-the-art results on these tasks by an impressive margin.
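The abstract gives only a high-level description, so the following is a minimal PyTorch sketch of the idea as we read it, not the authors' implementation: every name here (discriminability_scores, DistillNet, aggregate_group) and the exact margin-based score definition are our own assumptions. It (1) scores each proxy-set element by how much closer its embedding lies to its own class centroid than to the hardest other centroid, (2) distills those scores into a lightweight network that needs only the embedding, and (3) uses the predicted scores to weight element embeddings when pooling a group representation on an unseen set.

```python
# Minimal sketch of the DDL idea, assuming a margin-based discriminability
# score w.r.t. class centroids; names and details are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


def discriminability_scores(embeddings, labels, centroids):
    """Assumed proxy-set score: how much closer an embedding is to its own
    class centroid than to the hardest other centroid (cosine similarity)."""
    emb = F.normalize(embeddings, dim=1)   # (N, D)
    cen = F.normalize(centroids, dim=1)    # (C, D)
    sims = emb @ cen.t()                   # (N, C) similarities to centroids
    own = sims.gather(1, labels.unsqueeze(1)).squeeze(1)
    other = sims.scatter(1, labels.unsqueeze(1), float("-inf")).max(1).values
    # Map the margin to (0, 1): high when the element is easy to classify.
    return torch.sigmoid(own - other)


class DistillNet(nn.Module):
    """Lightweight network regressing the score from the embedding alone,
    so it can be applied to unseen target sets without labels."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        return torch.sigmoid(self.mlp(x)).squeeze(1)


def aggregate_group(embeddings, scores):
    """Score-weighted pooling of a group's element embeddings."""
    w = scores / scores.sum().clamp_min(1e-8)
    return (w.unsqueeze(1) * embeddings).sum(dim=0)


if __name__ == "__main__":
    D, C, N = 128, 10, 32
    proxy_emb = torch.randn(N, D)
    proxy_lbl = torch.randint(0, C, (N,))
    centroids = torch.stack(
        [proxy_emb[proxy_lbl == c].mean(0) if (proxy_lbl == c).any()
         else torch.zeros(D) for c in range(C)])

    target = discriminability_scores(proxy_emb, proxy_lbl, centroids)
    net = DistillNet(D)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(100):  # distill the scores on the proxy set
        loss = F.mse_loss(net(proxy_emb), target)
        opt.zero_grad(); loss.backward(); opt.step()

    group = torch.randn(5, D)                  # unseen group at test time
    rep = aggregate_group(group, net(group))   # (D,) group representation
    print(rep.shape)
```

Under this reading, the distillation step is what makes the indicator usable at test time: the centroid-based score needs proxy-set labels, while the distilled network predicts it from the embedding alone, so the original recognition model and its training procedure stay untouched.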
