
End-to-End Incremental Learning

Abstract

Although deep learning approaches have stood out in recent years due to their state-of-the-art results, they continue to suffer from catastrophic forgetting, a dramatic decrease in overall performance when training with new classes added incrementally. This is due to current neural network architectures requiring the entire dataset, consisting of all the samples from the old as well as the new classes, to update the model, a requirement that becomes easily unsustainable as the number of classes grows. We address this issue with our approach to learn deep neural networks incrementally, using new data and only a small exemplar set corresponding to samples from the old classes. This is based on a loss composed of a distillation measure to retain the knowledge acquired from the old classes, and a cross-entropy loss to learn the new classes. Our incremental training is achieved while keeping the entire framework end-to-end, i.e., learning the data representation and the classifier jointly, unlike recent methods with no such guarantees. We evaluate our method extensively on the CIFAR-100 and ImageNet (ILSVRC 2012) image classification datasets, and show state-of-the-art performance.
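The abstract describes a loss that combines cross-entropy on the new classes with a distillation term that preserves the previous model's outputs on the old classes. The following is a minimal sketch of that idea, not the authors' code: the function name, the frozen `old_logits` input, the temperature `T`, and the weighting `alpha` are assumptions introduced here for illustration.

```python
# Minimal sketch of a combined cross-entropy + distillation loss for
# class-incremental learning, assuming a frozen copy of the previous model
# provides `old_logits` over the old classes. Not the paper's exact formulation.
import torch
import torch.nn.functional as F

def incremental_loss(new_logits, old_logits, targets, num_old_classes,
                     T=2.0, alpha=0.5):
    """new_logits: current model outputs over old + new classes.
    old_logits: frozen previous model outputs over the old classes only.
    targets: labels for the batch (new-class samples plus old-class exemplars).
    """
    # Cross-entropy to learn the new classes (and the stored exemplars).
    ce = F.cross_entropy(new_logits, targets)

    # Distillation: match temperature-softened probabilities on the old classes
    # between the current model and the frozen previous model.
    old_probs = F.softmax(old_logits / T, dim=1)
    new_log_probs = F.log_softmax(new_logits[:, :num_old_classes] / T, dim=1)
    distill = F.kl_div(new_log_probs, old_probs, reduction="batchmean") * (T * T)

    return ce + alpha * distill
```

In this sketch the same network produces both terms, so the representation and classifier are trained jointly end-to-end, which is the property the abstract emphasizes.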
