
Learning to Continually Learn

Abstract

Continual lifelong learning requires an agent or model to learn many sequentially ordered tasks, building on previous knowledge without catastrophically forgetting it. Much work has gone towards preventing the default tendency of machine learning models to catastrophically forget, yet virtually all such work involves manually-designed solutions to the problem. We instead advocate meta-learning a solution to catastrophic forgetting, allowing AI to learn to continually learn. Inspired by neuromodulatory processes in the brain, we propose A Neuromodulated Meta-Learning Algorithm (ANML). It differentiates through a sequential learning process to meta-learn an activation-gating function that enables context-dependent selective activation within a deep neural network. Specifically, a neuromodulatory (NM) neural network gates the forward pass of another (otherwise normal) neural network called the prediction learning network (PLN). The NM network thus also indirectly controls selective plasticity (i.e., the backward pass) of the PLN. ANML enables continual learning without catastrophic forgetting at scale: it produces state-of-the-art continual learning performance, sequentially learning as many as 600 classes (over 9,000 SGD updates).
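The gating mechanism described above lends itself to a compact sketch. Below is a minimal PyTorch illustration, assuming simple fully-connected networks for both the NM network and the PLN (the paper uses convolutional networks; the layer sizes and class names here are illustrative, not the paper's configuration). The NM network maps the input to a sigmoid gate that multiplies the PLN's hidden activations, so a near-zero gate silences a unit in the forward pass and, because gradients through the product are scaled by the same gate, also suppresses that unit's updates in the backward pass.

```python
# Minimal sketch of ANML-style neuromodulatory gating (assumed PyTorch
# implementation; architecture details are illustrative, not the paper's).
import torch
import torch.nn as nn

class ANMLSketch(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int, n_classes: int):
        super().__init__()
        # Prediction learning network (PLN): the "otherwise normal" network.
        self.pln_body = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
        )
        self.pln_head = nn.Linear(hidden_dim, n_classes)
        # Neuromodulatory (NM) network: maps the same input to a gate in
        # [0, 1] for each hidden unit of the PLN (context-dependent gating).
        self.nm = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.pln_body(x)      # PLN forward pass
        gate = self.nm(x)         # context-dependent gate in [0, 1]
        h = h * gate              # element-wise gating: a near-zero gate
                                  # zeroes the activation AND scales its
                                  # gradient toward zero (selective plasticity)
        return self.pln_head(h)
```

This sketch covers only the inner (sequential-learning) forward pass; the full algorithm additionally differentiates through that sequential learning process in an outer meta-training loop that optimizes the NM parameters, which is omitted here.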
