k Nearest Neighbor (kNN) is among the top ten data mining algorithms because it is easy to understand and offers strong classification performance. Previous kNN methods either use a predefined, fixed k-value for all test samples or compute a different ideal k-value for each test sample. However, in real applications, applying kNN with a fixed k-value is unrealistic, while kNN methods that assign a different k-value to every test sample incur additional time cost. Thus, past kNN methods have notable limitations. To overcome these issues, in this paper we study two new kNN methods, KTree and K*Tree. The KTree method adds a training stage to the standard kNN method to quickly learn an ideal k-value for every test sample, so it takes less time than previous methods and improves classification accuracy. We also study the K*Tree method, which is even faster than KTree. Further, we implement kNN with soft clustering and compare its results with those of previous kNN methods.
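To make the fixed-k versus per-sample-k distinction concrete, the following is a minimal sketch of plain kNN classification where the k-value can differ per test sample. The function `knn_predict`, the toy data, and the `per_sample_k` mapping are all illustrative assumptions, not the paper's KTree or K*Tree implementation; they only show that a learned per-sample k changes which neighbors vote.

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k):
    """Classify x by majority vote among its k nearest training samples.

    This is the standard kNN rule; only the choice of k varies below.
    """
    # Sort all training points by Euclidean distance to x.
    dists = sorted((math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y))
    # Majority vote among the k closest labels.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 1-D data (hypothetical): class 0 clusters near 0.0, class 1 near 1.0.
train_X = [(0.0,), (0.1,), (0.2,), (0.9,), (1.0,), (1.1,)]
train_y = [0, 0, 0, 1, 1, 1]

# Traditional setting: one fixed k for every test sample.
print(knn_predict(train_X, train_y, (0.15,), 3))  # -> 0

# Per-sample setting: each test point gets its own k, as a method that
# learns an ideal k-value per test sample would allow.
per_sample_k = {(0.15,): 1, (0.95,): 5}
for x, k in per_sample_k.items():
    print(x, "->", knn_predict(train_X, train_y, x, k))
```

The per-sample variant is where KTree's training stage would come in: rather than storing an explicit `per_sample_k` table (which is what makes earlier per-sample methods slow at test time), a learned model supplies the k-value for each incoming test sample quickly.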