This paper suggests a geometrical interpretation of the multilayer perceptron (MLP): the hidden neurons are viewed as building blocks for constructing the target function, with their weights and biases determining the blocks' geometrical shapes and positions. Based on this interpretation, a guideline for MLP architecture selection is proposed, and several prevalent approaches to the over-fitting problem are reviewed from this geometrical perspective. In particular, the popular regularization methods are studied in detail: the geometrical interpretation not only offers a simple explanation of why regularization alleviates over-fitting, but also predicts a potential problem with regularization, which is then verified.
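The building-block view can be sketched numerically. The snippet below (an illustrative assumption, not code from the paper) shows a one-hidden-layer MLP in one dimension, where each hidden unit tanh(w·x + b) contributes a sigmoidal "block": the ratio −b/w fixes where the unit's transition is centred, and |w| sets its steepness, so the output is a weighted sum of geometrically placed pieces.

```python
import numpy as np

def mlp_1d(x, W1, b1, W2, b2):
    """One-hidden-layer MLP on scalar inputs.

    Each hidden unit tanh(W1[j]*x + b1[j]) is a geometric building
    block whose transition is centred at -b1[j]/W1[j]; the output
    layer sums the blocks with weights W2.
    """
    h = np.tanh(np.outer(x, W1) + b1)  # hidden activations, shape (n, H)
    return h @ W2 + b2                 # weighted sum of the blocks

x = np.linspace(-3.0, 3.0, 7)
# Two hidden units with transitions centred at x = 1 and x = -1
# (centre = -b/w), combined with opposite output weights to form a bump.
W1 = np.array([4.0, 4.0])
b1 = np.array([-4.0, 4.0])
W2 = np.array([0.5, -0.5])
y = mlp_1d(x, W1, b1, W2, b2=0.0)
print(y.shape)
```

Shifting `b1` translates the blocks along the x-axis and scaling `W1` sharpens or flattens them, which is the sense in which the weights and biases determine the geometrical positions and shapes of the constructed function.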