Understanding and preventing overfitting is an important issue in artificial neural network design, implementation, and application. Weigend (1994) reports that the presence or absence of overfitting in neural networks depends on how the testing error is measured, and that there is no overfitting in terms of the classification error (symbolic-level errors). In this paper, we show that, in terms of the classification error, overfitting does occur for certain representations used to encode the discrete attributes. We design simple Boolean functions with a clear rationale, and present experimental results to support our claims. In addition, we report some interesting results on the best generalization ability of networks in terms of their sizes.
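A minimal sketch (not from the paper, with made-up output values) of why the two error measures can disagree: a network's continuous test error can worsen while the symbolic (classification) error stays flat, because thresholding the outputs hides changes in confidence.

```python
def squared_error(outputs, targets):
    """Mean squared error over continuous network outputs."""
    return sum((o - t) ** 2 for o, t in zip(outputs, targets)) / len(outputs)

def classification_error(outputs, targets, threshold=0.5):
    """Symbolic-level error: fraction of outputs on the wrong side
    of the decision threshold."""
    wrong = sum((o >= threshold) != (t >= threshold)
                for o, t in zip(outputs, targets))
    return wrong / len(outputs)

# Hypothetical test-set outputs for a Boolean target at two training stages.
targets = [1, 0, 1, 0]
early   = [0.70, 0.30, 0.60, 0.40]  # outputs early in training
late    = [0.52, 0.48, 0.51, 0.49]  # outputs after further training

# Squared error worsens (0.125 -> ~0.235), suggesting overfitting,
# yet every output is still on the correct side of 0.5, so the
# classification error is 0 at both stages.
```

Under a squared-error criterion this run looks like overfitting; under the symbolic criterion it does not, which is the kind of discrepancy the measurement question above turns on.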