European Conference on Computer Vision

Circumventing Outliers of AutoAugment with Knowledge Distillation

Abstract

AutoAugment is a powerful algorithm that improves the accuracy of many vision tasks, yet it is sensitive to the operator space as well as hyper-parameters, and an improper setting may degrade network optimization. This paper delves into its working mechanism and reveals that AutoAugment may remove part of the discriminative information from the training image, so insisting on the ground-truth label is no longer the best option. To relieve this inaccuracy of supervision, we make use of knowledge distillation, which refers to the output of a teacher model to guide network training. Experiments are performed on standard image classification benchmarks and demonstrate the effectiveness of our approach in suppressing the noise of data augmentation and stabilizing training. Combining knowledge distillation with AutoAugment, we claim a new state of the art on ImageNet classification with a top-1 accuracy of 85.8%.
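The idea the abstract describes, relaxing the hard ground-truth label in favor of a teacher's softened prediction when augmentation may have destroyed the evidence for that label, is commonly realized as a weighted sum of a cross-entropy term and a temperature-scaled KL-divergence term against the teacher. Below is a minimal PyTorch-style sketch of such a distillation loss; the function name kd_loss and the hyper-parameter values T and alpha are illustrative assumptions, not settings taken from the paper.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Blend hard-label cross-entropy with distillation from a teacher.

    Illustrative sketch: with alpha close to 1, supervision leans on the
    teacher's softened prediction rather than insisting on the ground-truth
    label, which may no longer match a heavily augmented image.
    """
    # Standard cross-entropy against the (possibly unreliable) hard label.
    ce = F.cross_entropy(student_logits, labels)
    # KL divergence between temperature-softened distributions; the T**2
    # factor keeps gradient magnitudes comparable across temperatures.
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return (1.0 - alpha) * ce + alpha * kl
```

In a training loop, teacher_logits would come from a frozen teacher model run on the same augmented batch, so the target adapts to whatever AutoAugment did to each image.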
