The network architecture presented in this paper maximizes correlations between the activities of the hidden units in order to preserve the internal structure of a given data pattern in the high-dimensional space while discounting, to a certain extent, factors that are irrelevant to recognition. Consequently, the method of updating the weights of the network is exclusively Hebbian. Each unit in the hidden layer attempts to match its input-driven bottom-up information from the preceding layer with the parameter-based top-down information from the layer above. The parameter-based top-down information effectively eliminates the need to propagate error derivatives from the top layer to the bottom one. Simulation results that demonstrate the feasibility of the approach are also presented.
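The exclusively Hebbian character of the update can be illustrated with a minimal sketch. This is not the paper's exact rule; the function name, learning rate, and the use of the top-down signal as the post-synaptic activity are assumptions made purely for illustration.

```python
# Minimal, illustrative Hebbian update: each weight grows in proportion
# to the product of pre- and post-synaptic activities (dw = lr * pre * post).
# No error derivative is propagated from any higher layer.
def hebbian_update(weights, pre, post, lr=0.1):
    """Return new weights after one purely Hebbian step."""
    return [w + lr * x * post for w, x in zip(weights, pre)]

# Hypothetical scenario: a hidden unit receives bottom-up activities from
# the preceding layer and a parameter-based top-down signal from the layer
# above; here the top-down value simply stands in for the post-synaptic
# activity the unit tries to match.
bottom_up = [0.8, 0.2, 0.5]   # activities from the preceding layer
weights   = [0.1, 0.1, 0.1]
top_down  = 0.9               # top-down signal from the layer above

weights = hebbian_update(weights, bottom_up, top_down)
print(weights)
```

Because the update depends only on locally available pre- and post-synaptic activities, no backward pass over error derivatives is needed, consistent with the claim above.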