We use sampling methods to analyse the "apparent minima" of the error surfaces of feedforward neural networks learning encoder problems. First- and second-order statistics of a sample of these points of attraction are shown to provide qualitative statistical information about the structure of the error surface, allowing a simple description of that structure. Following methods previously used in the analysis of other complex configuration spaces (such as spin glass models and several combinatorial optimization problems), the third-order statistics of the points of attraction are examined and found to be arranged in a strongly ultrametric way under the ordinary Euclidean distance measure. The implications of this result are discussed.
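As an illustration of the third-order test mentioned above, the following minimal sketch (an assumption about the procedure, not the authors' code) checks how close a set of sampled points is to being ultrametric under the Euclidean distance: in an ultrametric space, the two largest pairwise distances in every triple must be equal, so the mean relative gap between them measures the deviation from ultrametricity.

```python
import numpy as np
from itertools import combinations

def ultrametricity_gap(points):
    """Mean relative gap between the two largest Euclidean distances
    over all triples of points; 0 means exactly ultrametric."""
    gaps = []
    for i, j, k in combinations(range(len(points)), 3):
        d = sorted([
            np.linalg.norm(points[i] - points[j]),
            np.linalg.norm(points[j] - points[k]),
            np.linalg.norm(points[i] - points[k]),
        ])
        # d[1] and d[2] are the two largest distances in the triple
        gaps.append((d[2] - d[1]) / d[2])
    return float(np.mean(gaps))

# Vertices of an equilateral triangle are exactly ultrametric
# (all three distances equal), so the gap is zero up to rounding.
equilateral = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
print(ultrametricity_gap(equilateral))

# Collinear points at 0, 1 and 3 give distances 1, 2, 3, so the
# relative gap is (3 - 2) / 3, far from ultrametric.
collinear = np.array([[0.0], [1.0], [3.0]])
print(ultrametricity_gap(collinear))
```

Applied to a sample of apparent minima found by repeated training runs, a gap near zero across triples would indicate the strongly ultrametric arrangement reported in the abstract.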