To enable incremental learning without catastrophic interference, we previously proposed the Resource Allocating Network with Long-Term Memory (RAN-LTM), in which not only training data but also memory items stored in long-term memory are trained. In this paper, we propose an extended RAN-LTM called the Resource Allocating Network by Local Linear Regression (RAN-LLR), whose centers are not trained but are selected based on output errors, and whose connections are updated by solving a linear regression problem. To reduce computation and memory costs, the connections to be modified are restricted based on RBF activity. In the experiments, we first apply RAN-LLR to a one-dimensional function approximation problem to show how negative interference is effectively suppressed. Then, the performance of RAN-LLR is evaluated on a real-world prediction problem. The experimental results demonstrate that the proposed RAN-LLR learns quickly and accurately with lower memory costs than the conventional models.
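To make the described scheme concrete, the following is a minimal sketch, not the authors' implementation, of the two mechanisms named in the abstract: allocating an RBF center at the input when the output error is large, and otherwise updating only the connections of sufficiently active RBFs by solving a least-squares problem over the new sample plus stored memory items. All class and parameter names (`RANLLRSketch`, `err_thresh`, `act_thresh`) are hypothetical, and the memory policy here (store every allocated sample) is a simplifying assumption.

```python
import numpy as np

class RANLLRSketch:
    """Hypothetical sketch of RAN-LLR-style incremental learning.

    Not the authors' code: centers are allocated (never trained) when the
    output error exceeds a threshold; otherwise only the weights of RBFs
    that are sufficiently active at the input are refit by linear
    least squares over the new sample plus long-term-memory items.
    """

    def __init__(self, width=1.0, err_thresh=0.1, act_thresh=0.05):
        self.width = width            # shared Gaussian RBF width (assumption)
        self.err_thresh = err_thresh  # allocate a center when |error| exceeds this
        self.act_thresh = act_thresh  # update only RBFs at least this active
        self.centers = []             # RBF centers (selected inputs)
        self.weights = []             # output connections
        self.memory = []              # long-term memory items (x, y)

    def _phi(self, x):
        """Gaussian activations of all RBFs at input x."""
        return np.array([np.exp(-np.sum((x - c) ** 2) / (2 * self.width ** 2))
                         for c in self.centers])

    def predict(self, x):
        if not self.centers:
            return 0.0
        return float(np.dot(self._phi(x), self.weights))

    def learn(self, x, y):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        err = y - self.predict(x)
        if abs(err) > self.err_thresh or not self.centers:
            # Large error: select the input itself as a new center
            # (centers are not trained) and store it in long-term memory.
            self.centers.append(x)
            self.weights = list(self.weights) + [err]
            self.memory.append((x, y))
            return
        # Small error: restrict the update to RBFs active at x,
        # reducing the size of the regression problem.
        act = self._phi(x)
        idx = np.where(act > self.act_thresh)[0]
        if idx.size == 0:
            return
        w = np.asarray(self.weights, dtype=float)
        data = self.memory + [(x, y)]
        # Design matrix: activations of the active RBFs at each data point.
        A = np.array([self._phi(xi)[idx] for xi, _ in data])
        # Targets: residual after subtracting the inactive RBFs' contribution.
        b = np.array([yi - (self.predict(xi) - float(np.dot(self._phi(xi)[idx], w[idx])))
                      for xi, yi in data])
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        w[idx] = sol
        self.weights = list(w)
```

Training the active weights against the memory items as well as the new sample is what suppresses interference in this sketch: a local update cannot silently degrade the fit at previously learned points, because those points re-enter the regression.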