An extended feed-forward algorithm for recurrent connectionist networks is presented. The algorithm, which works locally in time, is derived both for discrete-in-time networks and for continuous networks. Several standard gradient descent algorithms for connectionist networks, especially the backpropagation algorithm, are mathematically derived as special cases of the general algorithm. The learning algorithm presented in the paper is a superset of gradient descent learning algorithms for multilayer networks, recurrent networks and time-delay networks, allowing arbitrary combinations of their components. In addition, the paper presents feed-forward approximation procedures for initial activations and external input values. The former is used to optimize the starting values of the so-called context nodes; the latter turned out to be very useful for finding spurious input attractors of a trained connectionist network. Finally, the authors compare the time, processor and space complexities of the algorithm with those of backpropagation for an unfolded-in-time network and present some simulation results. (Copyright (c) 1990 GMD.)
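The abstract does not reproduce the algorithm itself. As a point of reference for what a gradient method that "works locally in time" looks like, the sketch below implements one classical instance, real-time recurrent learning (RTRL) for a fully connected discrete-time recurrent network, which carries sensitivities forward step by step instead of unfolding the network in time. The sigmoid nonlinearity, the variable names, and the network layout are illustrative assumptions, not the paper's formulation.

```python
# A minimal RTRL sketch (an assumption, not the paper's algorithm):
# gradients are accumulated forward in time via a sensitivity tensor,
# so no unfolding of the network over the sequence is required.
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def rtrl_forward_step(W, y, x, P):
    """One time step of activations and sensitivities.

    W : (n, n + m) weights over [recurrent units; external inputs]
    y : (n,)  current unit activations
    x : (m,)  current external input
    P : (n, n, n + m) sensitivities P[k, i, j] = d y_k / d W[i, j]
    """
    z = np.concatenate([y, x])           # combined input to every unit
    y_new = sigmoid(W @ z)               # new activations
    fprime = y_new * (1.0 - y_new)       # sigmoid derivative

    n = y.shape[0]
    # P'[k,i,j] = f'(s_k) * ( sum_l W[k,l] P[l,i,j] + delta_{ki} z_j )
    P_new = np.einsum('kl,lij->kij', W[:, :n], P)
    P_new += np.eye(n)[:, :, None] * z[None, None, :]
    P_new *= fprime[:, None, None]
    return y_new, P_new

def rtrl_weight_delta(P, y, target, lr=0.1):
    """Gradient-descent step on the instantaneous squared error."""
    e = target - y                       # error on the visible units
    return lr * np.einsum('k,kij->ij', e, P)   # lr * (-dE/dW)
```

Because the sensitivity tensor P has O(n^2 (n + m)) entries, RTRL-style methods trade the memory of unfolding in time for a higher per-step cost, which is the kind of trade-off the abstract's complexity comparison with backpropagation on an unfolded-in-time network addresses.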