Training a multilayer perceptron network starts by assigning initial values to the weights; commonly, small random values are used for this initialization. The weights are then adjusted by a gradient-descent-based optimization routine such as backpropagation. If the initial weight values happen to be poor, it may take a long time to reach adequate convergence, or in the worst case the network may get stuck in a poor local minimum. To improve convergence in the training phase, we introduce a maximum covariance method for initializing the weights. Simulation results show that the maximum covariance method is relatively fast to compute and improves convergence significantly over random initialization.
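The conventional baseline mentioned above, small random initial weights, can be sketched as follows. This is a generic illustration only, not the paper's maximum covariance method; the layer sizes, the scale of 0.1, and the function name are illustrative assumptions.

```python
import numpy as np

def init_small_random(layer_sizes, scale=0.1, seed=0):
    """Return one weight matrix per layer pair, filled with small
    zero-mean Gaussian values (a common baseline initialization).

    layer_sizes, scale, and seed are illustrative choices, not values
    taken from the paper.
    """
    rng = np.random.default_rng(seed)
    return [scale * rng.standard_normal((n_in, n_out))
            for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

# Example: a 4-input, 8-hidden-unit, 1-output MLP.
weights = init_small_random([4, 8, 1])
```

Gradient-based training such as backpropagation would then start from these matrices; a poor draw here is exactly the situation the proposed initialization aims to avoid.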