In this paper, gradient descent and genetic techniques are used for the on-line training of recurrent neural networks. A singular perturbation model for gradient learning of fixed points introduces the problem of the rate of learning, formulated as the relative speed of evolution of the network and the adaptation process, and motivates an analogous study when genetic training is used. Bounds on the rate of learning that guarantee convergence are obtained for both gradient and genetic training. Computer simulations confirm the theoretical predictions.
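As an illustration of the two-time-scale idea behind the rate-of-learning bound, the sketch below adapts a weight on-line so that a scalar recurrent unit's fixed point approaches a target. This is a hedged toy example, not the paper's model: the unit `x <- tanh(w*x + b)`, the constants, and the gradient expression are all assumptions made for illustration. The adaptation step `eta` plays the role of the rate of learning; convergence requires it to be small relative to the speed of the network dynamics.

```python
import math

def train(eta, steps=5000, target=0.5):
    """Toy on-line fixed-point learning for a scalar recurrent unit.

    Fast process: the network state x relaxes under x <- tanh(w*x + b).
    Slow process: the weight w is adapted by an on-line gradient step
    with rate eta, treating x as if it were near its fixed point.
    Returns the final fixed-point error |x - target|.
    """
    w, b, x = 0.1, 0.1, 0.1          # illustrative initial values
    for _ in range(steps):
        x = math.tanh(w * x + b)      # one step of the network dynamics
        err = x - target
        # approximate gradient of 0.5*err^2 w.r.t. w at the current state
        grad = err * (1.0 - x * x) * x
        w -= eta * grad               # slow adaptation process
    return abs(x - target)
```

For a sufficiently small `eta` the adaptation tracks the (slowly moving) fixed point and the error decays; a large `eta` makes the weight update outrun the network's relaxation and the trajectory oscillates instead of settling, mirroring the bound on the rate of learning discussed in the abstract.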