We address the question of selecting a proper training set for neural network time series prediction or function approximation. From an analysis of the relation between approximation and generalization, a new measure, the generalization factor, is introduced. Using this factor together with cross validation, we develop the dynamic pattern selection algorithm. On two time series prediction tasks, we compare training with dynamic pattern selection to training with fixed training sets. The results demonstrate the favorable properties of dynamic pattern selection, namely lower computational expense and control of generalization.
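The abstract does not spell out the algorithm's details. As a rough illustration only, the following sketch assumes one plausible reading: the generalization factor is taken as the ratio of validation error to training error, and patterns are added greedily by largest current prediction error until that factor signals degrading generalization. The function names, the seed-set size, and the `gf_limit` threshold are all hypothetical choices, not the paper's specification.

```python
import numpy as np

def generalization_factor(val_err, train_err, eps=1e-12):
    # Assumed form: ratio of validation error to training error.
    return val_err / max(train_err, eps)

def dynamic_pattern_selection(X, y, X_val, y_val, fit, predict,
                              max_patterns=None, gf_limit=2.0):
    """Grow the training set one pattern at a time (hypothetical sketch).

    Start from a small seed set, repeatedly add the candidate pattern the
    current model predicts worst, refit, and stop when the generalization
    factor exceeds `gf_limit`.
    """
    n = len(X)
    max_patterns = max_patterns or n
    selected = [0, 1]                      # small seed training set
    candidates = set(range(n)) - set(selected)
    model = fit(X[selected], y[selected])
    while candidates and len(selected) < max_patterns:
        # Pick the candidate with the largest squared prediction error.
        idx = max(candidates,
                  key=lambda i: (predict(model, X[i:i + 1])[0] - y[i]) ** 2)
        selected.append(idx)
        candidates.remove(idx)
        model = fit(X[selected], y[selected])
        train_err = np.mean((predict(model, X[selected]) - y[selected]) ** 2)
        val_err = np.mean((predict(model, X_val) - y_val) ** 2)
        if generalization_factor(val_err, train_err) > gf_limit:
            break                          # generalization degrades: stop
    return model, selected

# Toy usage with a linear least-squares "network" standing in for the net.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (40, 1))
y = 2.0 * X[:, 0] + 0.01 * rng.normal(size=40)
X_val = rng.uniform(-1, 1, (10, 1))
y_val = 2.0 * X_val[:, 0]

fit = lambda A, b: np.linalg.lstsq(np.c_[A, np.ones(len(A))], b, rcond=None)[0]
predict = lambda w, A: np.c_[A, np.ones(len(A))] @ w

model, selected = dynamic_pattern_selection(X, y, X_val, y_val, fit, predict)
```

Because training stops as soon as the generalization factor crosses the threshold, the model typically sees far fewer than all available patterns, which is the source of the computational savings the abstract refers to.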