Neural Computation

Adversarial Feature Alignment: Avoid Catastrophic Forgetting in Incremental Task Lifelong Learning



Abstract

Humans are able to master a variety of knowledge and skills through ongoing learning. By contrast, dramatic performance degradation is observed when new tasks are added to an existing neural network model. This phenomenon, termed catastrophic forgetting, is one of the major roadblocks that prevent deep neural networks from achieving human-level artificial intelligence. Several research efforts (e.g., lifelong or continual learning algorithms) have been proposed to tackle this problem. However, they either suffer from an accumulating drop in performance as the task sequence grows longer, require storing an excessive number of model parameters for historical memory, or cannot obtain competitive performance on the new tasks. In this letter, we focus on the incremental multitask image classification scenario. Inspired by the learning process of students, who usually decompose complex tasks into easier goals, we propose an adversarial feature alignment method to avoid catastrophic forgetting. In our design, both the low-level visual features and the high-level semantic features serve as soft targets and guide the training process in multiple stages, providing sufficient supervised information about the old tasks and helping to reduce forgetting. Owing to the knowledge distillation and regularization effects, the proposed method achieves even better performance than fine-tuning on the new tasks, which makes it stand out from other methods. Extensive experiments in several typical lifelong learning scenarios demonstrate that our method outperforms state-of-the-art methods in both accuracy on new tasks and performance preservation on old tasks.
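The abstract describes using both high-level semantic outputs and low-level visual features of the old model as soft targets for the new model. A minimal NumPy sketch of that kind of two-level soft-target objective is shown below; the function names, the temperature `T`, and the weighting `alpha` are illustrative assumptions, not the paper's exact formulation (which also involves an adversarial alignment component not reproduced here).

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def two_level_soft_target_loss(student_logits, teacher_logits,
                               student_feat, teacher_feat,
                               T=2.0, alpha=0.5):
    """Combine (1) a KL term between temperature-softened class
    distributions (high-level semantic soft targets) and (2) an L2
    term aligning intermediate features (low-level visual soft
    targets). T**2 rescales the KL gradient, as is standard in
    distillation."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)),
                axis=-1).mean()
    feat_l2 = np.mean((np.asarray(student_feat, dtype=float)
                       - np.asarray(teacher_feat, dtype=float)) ** 2)
    return alpha * (T ** 2) * kl + (1.0 - alpha) * feat_l2
```

When the student exactly matches the teacher at both levels, the loss is zero; any drift of either the features or the predicted distribution away from the old model increases it, which is what discourages forgetting.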
