During the training of self-organizing maps (SOMs), there is a conflict between the twin goals of preserving the topology between input and output and minimizing the quantization error (QE). This is especially obvious when the dimension of the input data (the dimension of the codebook vectors) is higher than the dimension of the output network (the dimension of the map grid). The standard SOM training algorithm usually achieves a reasonable balance between the two requirements but, in the end, the need for a low QE overrides the desire for optimal topology preservation. However, one can easily think of applications for which topology preservation should be given relatively greater weight than the standard algorithm allows. This paper describes three modifications to the incremental SOM learning algorithm that enhance its ability to preserve topological relationships without increasing the dimensionality of the network, though usually at the expense of QE. Experiments are described that demonstrate the new algorithms and compare their performance with that of the standard SOM training.
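To make the tension concrete, the following is a minimal sketch (not the paper's modified algorithms) of standard incremental SOM training in the mismatched-dimension setting the abstract describes: 2-D input data quantized by codebook vectors attached to a 1-D map grid. All function names, parameter values, and decay schedules here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def train_som(data, n_units=10, n_epochs=10, sigma0=3.0, lr0=0.5, seed=0):
    """Standard incremental SOM training on a 1-D map grid.

    Illustrative sketch only. The codebook vectors live in the input
    space (here 2-D) while the grid is 1-D, so the map must fold to
    cover the data -- the source of the topology-vs-QE conflict.
    """
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    codebook = rng.random((n_units, dim))      # codebook vectors in input space
    grid = np.arange(n_units, dtype=float)     # unit positions on the 1-D grid
    n_steps = n_epochs * len(data)
    t = 0
    for _ in range(n_epochs):
        for x in rng.permutation(data):
            frac = t / n_steps
            sigma = sigma0 * (0.05 / sigma0) ** frac   # shrinking neighborhood
            lr = lr0 * (0.01 / lr0) ** frac            # decaying learning rate
            # best-matching unit: nearest codebook vector in input space
            bmu = np.argmin(np.linalg.norm(codebook - x, axis=1))
            # neighborhood function measured on the GRID, not in input space
            h = np.exp(-((grid - grid[bmu]) ** 2) / (2.0 * sigma ** 2))
            codebook += lr * h[:, None] * (x - codebook)
            t += 1
    return codebook

def quantization_error(data, codebook):
    """Mean distance from each input to its best-matching codebook vector."""
    d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Usage: as the neighborhood width decays, updates become ever more local,
# so late training drives QE down even where that folds the 1-D grid.
rng = np.random.default_rng(1)
data = rng.random((200, 2))
codebook = train_som(data)
qe = quantization_error(data, codebook)
```

Modifications that favor topology preservation (such as those the paper proposes) would keep neighboring grid units close in input space for longer, which is exactly why they tend to raise the final QE.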