Attention-Guided Knowledge Distillation for Efficient Single-Stage Detector

IEEE International Conference on Multimedia and Expo


Abstract

Knowledge distillation has been successfully applied in image classification for model acceleration. Some works also apply this technique to object detection, but they all treat different feature regions equally when performing feature mimicking. In this paper, we propose an end-to-end attention-guided knowledge distillation method to train efficient single-stage detectors with much smaller backbones. More specifically, we introduce an attention mechanism that prioritizes the transfer of important knowledge by focusing on a sparse set of hard samples, leading to a more thorough distillation process. In addition, the proposed distillation method provides an easy way to train efficient detectors without the tedious ImageNet pre-training procedure. Extensive experiments on the PASCAL VOC and CityPersons datasets demonstrate the effectiveness of the proposed approach. We achieve 57.96% and 69.48% mAP on VOC07 with 1/8 VGG16 and 1/4 VGG16 backbones, greatly outperforming their ImageNet pre-trained counterparts by 11.7% and 7.1%, respectively.
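The abstract does not spell out the attention formulation, so the sketch below is only an illustration of the general idea it describes: a spatial attention map selects a sparse set of high-response ("hard") regions, and the feature-mimic loss between teacher and student is computed only there. The function name, the `keep_ratio` parameter, and the use of teacher activation magnitude as the attention signal are assumptions for illustration, not the paper's exact method.

```python
# Minimal sketch of an attention-weighted feature-mimic loss (PyTorch).
# Hypothetical formulation: the attention map and top-k masking below are
# assumptions, not the paper's published loss.
import torch


def attention_guided_mimic_loss(student_feat, teacher_feat, keep_ratio=0.1):
    """student_feat, teacher_feat: (N, C, H, W) feature maps.
    keep_ratio: assumed fraction of spatial locations treated as 'hard'."""
    # For brevity we assume the student's features were already projected
    # to the teacher's shape (e.g., by a learned 1x1 conv adapter).
    assert student_feat.shape == teacher_feat.shape

    # Spatial attention: channel-wise mean of absolute teacher activations.
    attn = teacher_feat.abs().mean(dim=1, keepdim=True)  # (N, 1, H, W)

    # Keep only the top keep_ratio locations -> sparse set of hard regions.
    n, _, h, w = attn.shape
    k = max(1, int(keep_ratio * h * w))
    flat = attn.view(n, -1)
    thresh = flat.topk(k, dim=1).values[:, -1].view(n, 1, 1, 1)
    mask = (attn >= thresh).float()  # sparse binary mask

    # L2 feature-mimic loss, applied only where the mask is on.
    diff = (student_feat - teacher_feat) ** 2
    return (diff * mask).sum() / (mask.sum() * student_feat.size(1) + 1e-6)
```

In a full training loop this term would be added to the detector's standard classification and localization losses, with the teacher network held frozen; since the loss is end-to-end differentiable, the student can be trained from scratch, consistent with the abstract's claim of skipping ImageNet pre-training.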
