We discuss ensemble learning by K nonlinear simple perceptrons, whose output function is the sign function, within the framework of on-line learning for finite K. First, we derive a macroscopic differential equation that describes the dynamics of the correlation q between the student weight vectors under a general learning algorithm. Second, we apply the equation to three well-known learning rules, namely the Hebb rule, the Perceptron rule, and the AdaTron rule, and solve them numerically. Third, we obtain the generalization error of these ensemble machines when the students' outputs are combined by a majority vote. As a result, we show that the correlation between the student weight vectors evolves most slowly under the AdaTron rule, and that the AdaTron rule performs best of the three learning rules in this ensemble-learning framework.
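To make the setting concrete, the following is a minimal Python sketch of on-line ensemble learning with K simple perceptrons trained against a teacher, with the majority-vote generalization error estimated by Monte Carlo. It assumes standard textbook forms of the Hebb, Perceptron, and AdaTron updates with i.i.d. Gaussian inputs and a random teacher vector; the normalizations, parameter values, and the function name simulate are illustrative and not taken from the paper.

import numpy as np

def simulate(rule="adatron", N=200, K=3, steps=5000, test_size=2000, seed=0):
    """On-line ensemble learning with K simple perceptrons (sign outputs).

    Illustrative sketch, not the paper's exact formulation: teacher B and
    students J_k are random N-dimensional vectors, inputs are i.i.d.
    Gaussian, and the majority-vote generalization error is estimated on
    held-out random inputs.
    """
    rng = np.random.default_rng(seed)
    B = rng.standard_normal(N)                  # teacher weight vector
    J = rng.standard_normal((K, N))             # K student weight vectors

    for _ in range(steps):
        x = rng.standard_normal(N)
        t = np.sign(B @ x)                      # teacher output (sign function)
        u = J @ x / np.sqrt(N)                  # students' local fields
        s = np.sign(u)                          # students' outputs
        if rule == "hebb":                      # Hebb: update on every example
            f = t * np.ones(K)
        elif rule == "perceptron":              # Perceptron: update only on error
            f = t * (s != t)
        else:                                   # AdaTron: error-proportional update
            f = -u * (s != t)
        J += np.outer(f, x) / np.sqrt(N)

    # majority vote of the K students on random test inputs
    X = rng.standard_normal((test_size, N))
    teacher = np.sign(X @ B)
    votes = np.sign(np.sign(X @ J.T).sum(axis=1))
    return np.mean(votes != teacher)

if __name__ == "__main__":
    for rule in ("hebb", "perceptron", "adatron"):
        print(rule, simulate(rule=rule))

Running the script prints a Monte Carlo estimate of the majority-vote generalization error for each of the three rules; K is kept odd so the vote is never tied.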