Abstract: In a nearest neighbor classifier, an input sample is assigned to the class of the nearest prototype. The decision rule is simple and robust. However, implementing a nearest neighbor classifier is computationally expensive in both memory and time if every training sample is stored as a prototype and compared against every test sample. Conversely, the classifier's performance degrades if only a small number of training samples are used as prototypes. This paper presents an algorithm for modifying the prototypes so that the classification rate can be increased. The algorithm makes use of a two-layer perceptron with one second-order input. The perceptron is trained and then mapped back to a new nearest neighbor classifier. It is shown that the new classifier, with only a small number of prototypes, can even outperform the classifier that uses all training samples as prototypes.
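The baseline decision rule described above can be sketched as follows. This is a minimal illustration of nearest-prototype classification, not the paper's prototype-modification algorithm; the function name, data layout, and use of Euclidean distance are assumptions for the example.

```python
import math

def nearest_prototype_class(x, prototypes):
    """Assign sample x to the class of its nearest prototype.

    x          -- feature vector (sequence of floats)
    prototypes -- list of (vector, class_label) pairs
                  (hypothetical structure, chosen for illustration)
    """
    best_label, best_dist = None, math.inf
    for p, label in prototypes:
        d = math.dist(x, p)  # Euclidean distance between x and prototype p
        if d < best_dist:
            best_dist, best_label = d, label
    return best_label
```

For example, with prototypes at (0, 0) labeled 'A' and (5, 5) labeled 'B', the sample (1, 1) is assigned to 'A'. Storing one such pair per training sample is exactly the costly scheme the abstract criticizes; the paper's algorithm instead keeps few prototypes and adjusts their positions.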