Catastrophic forgetting is a central obstacle when an agent must learn a sequence of tasks without storing information from previous tasks. It also stands in the way of artificial general intelligence, which is generally expected to exhibit human-like continual learning. In this paper, we propose to use variational Bayesian inference to overcome catastrophic forgetting. By pruning the neural network according to the mean and variance of each weight, the number of parameters is greatly reduced, mitigating the doubled storage cost that variational Bayesian inference incurs (a mean and a variance per weight). On top of this lightweight model, autoencoders trained on the individual tasks are employed to self-adaptively select the parameters of the corresponding task, addressing the sequential multi-task learning problem. Experiments on several standard datasets show that the proposed method achieves substantial improvements over classic methods without catastrophic forgetting, especially when the data distributions of the tasks differ more strongly.
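To make the pruning step concrete, the following is a minimal sketch of one common criterion for pruning a Bayesian neural network by the posterior mean and variance of its weights: ranking weights by their signal-to-noise ratio |μ|/σ and zeroing out the least informative ones. The abstract does not specify the exact pruning rule, so the SNR criterion, the `keep_ratio` parameter, and all names here are illustrative assumptions, not the authors' implementation.

```python
import torch

def prune_by_snr(mu: torch.Tensor, sigma: torch.Tensor, keep_ratio: float = 0.5):
    """Zero out weights whose signal-to-noise ratio |mu|/sigma is lowest.

    Assumed criterion: the paper only states pruning "according to the mean
    and variance of weights"; SNR is one standard choice for this.
    Returns the pruned (mu, sigma) and a boolean mask of kept weights.
    """
    snr = mu.abs() / sigma                              # per-weight |mean| / std
    n = snr.numel()
    k = int(keep_ratio * n)                             # number of weights to keep
    threshold = snr.flatten().kthvalue(n - k + 1).values  # k-th largest SNR value
    mask = snr >= threshold                             # keep only high-SNR weights
    return mu * mask, sigma * mask, mask

# Usage example: halve a variational layer's posterior parameters.
mu = torch.randn(256, 128)                              # posterior means
sigma = torch.rand(256, 128) * 0.1 + 1e-3               # posterior std devs (positive)
mu_p, sigma_p, mask = prune_by_snr(mu, sigma, keep_ratio=0.5)
print(f"kept {mask.float().mean():.0%} of weights")
```

Pruning on SNR rather than on |μ| alone is a natural fit here: a weight with a large mean but a very broad posterior contributes little reliable signal, so discarding it loses less than discarding a small but well-determined weight.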