European Conference on Computer Vision

PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning

Abstract

Lifelong learning has attracted much attention, but existing works still struggle to fight catastrophic forgetting and accumulate knowledge over long stretches of incremental learning. In this work, we propose PODNet, a model inspired by representation learning. By carefully balancing the compromise between remembering the old classes and learning new ones, PODNet fights catastrophic forgetting, even over very long runs of small incremental tasks - a setting so far unexplored by current works. PODNet innovates on existing art with an efficient spatial-based distillation loss applied throughout the model and a representation comprising multiple proxy vectors for each class. We validate those innovations thoroughly, comparing PODNet with three state-of-the-art models on three datasets: CIFAR100, ImageNet100, and ImageNet1000. Our results showcase a significant advantage of PODNet over existing art, with accuracy gains of 12.10, 6.51, and 2.85 percentage points, respectively.
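The spatial distillation idea can be illustrated with a short sketch. The following PyTorch snippet is a minimal, hypothetical illustration (not the authors' reference implementation), assuming `old_feats` and `new_feats` are lists of intermediate feature maps taken at matching stages of the frozen previous-task model and the current model: activations are pooled separately over height and width, flattened, normalised, and compared with a Euclidean penalty.

```python
import torch
import torch.nn.functional as F

def pooled_outputs_distillation(old_feats, new_feats):
    """Minimal sketch of a pooled-outputs (spatial) distillation loss.

    old_feats / new_feats: lists of (B, C, H, W) activations from the same
    stages of the frozen old model and the current model (hypothetical
    helper, names are illustrative only).
    """
    loss = 0.0
    for a_old, a_new in zip(old_feats, new_feats):
        # Square activations before pooling (a common choice for attention-style distillation).
        a_old, a_new = a_old.pow(2), a_new.pow(2)
        # Pool over width -> (B, C, H) and over height -> (B, C, W), then concatenate.
        p_old = torch.cat([a_old.sum(dim=3), a_old.sum(dim=2)], dim=-1)
        p_new = torch.cat([a_new.sum(dim=3), a_new.sum(dim=2)], dim=-1)
        # Flatten per sample, normalise, and penalise the distance between old and new statistics.
        p_old = F.normalize(p_old.flatten(1), dim=-1)
        p_new = F.normalize(p_new.flatten(1), dim=-1)
        loss = loss + (p_new - p_old).pow(2).sum(dim=-1).mean()
    return loss / len(old_feats)
```

In training, such a term would typically be added to the classification loss with a weighting factor, trading plasticity on the new classes against stability on the old ones.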