International Conference on Pattern Recognition

Rethinking Experience Replay: a Bag of Tricks for Continual Learning



Abstract

In Continual Learning, a Neural Network is trained on a stream of data whose distribution shifts over time. Under these assumptions, it is especially challenging to improve on classes appearing later in the stream while remaining accurate on previous ones. This is due to the infamous problem of catastrophic forgetting, which causes a quick performance degradation when the classifier focuses on learning new categories. Recent literature proposed various approaches to tackle this issue, often resorting to very sophisticated techniques. In this work, we show that naive rehearsal can be patched to achieve similar performance. We point out some shortcomings that restrain Experience Replay (ER) and propose five tricks to mitigate them. Experiments show that ER, thus enhanced, displays an accuracy gain of 51.2 and 26.9 percentage points on the CIFAR-10 and CIFAR-100 datasets respectively (memory buffer size 1000). As a result, it surpasses current state-of-the-art rehearsal-based methods.
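The "naive rehearsal" the abstract refers to interleaves a small memory of past examples with the incoming stream, so the classifier keeps seeing old classes while new ones arrive. A minimal sketch of such a replay buffer is shown below; reservoir sampling is used as an illustrative way to keep the memory bounded (the abstract mentions a buffer of size 1000 but does not specify the sampling scheme, so that choice is an assumption):

```python
import random

class ReplayBuffer:
    """Fixed-size memory for Experience Replay (ER). Reservoir sampling
    keeps every example seen so far equally likely to be stored; this is
    a common choice for ER, assumed here for illustration."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []          # stored (example, label) pairs
        self.num_seen = 0       # total examples observed in the stream
        self.rng = random.Random(seed)

    def add(self, example, label):
        """Offer one stream example to the buffer."""
        self.num_seen += 1
        if len(self.data) < self.capacity:
            self.data.append((example, label))
        else:
            # Overwrite a random slot with probability capacity / num_seen,
            # so the buffer stays a uniform sample of the whole stream.
            j = self.rng.randrange(self.num_seen)
            if j < self.capacity:
                self.data[j] = (example, label)

    def sample(self, batch_size):
        """Draw a replay mini-batch to mix with the current stream batch."""
        k = min(batch_size, len(self.data))
        return self.rng.sample(self.data, k)
```

During training, each gradient step would then use the current stream batch concatenated with `buffer.sample(batch_size)`, which is what counters forgetting of earlier classes.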


