In many pattern classification problems, efficiently learning a suitable low-dimensional representation of high-dimensional data is essential. Linear dimension reduction methods are attractive for their simplicity and efficiency. Optimal component analysis (OCA) is a recently proposed linear dimension reduction method that seeks to optimize the discriminative performance of the nearest neighbor classifier for data classification and labeling. Mathematically, OCA defines an objective function that rewards separating data from different classes, and an optimal basis is obtained through a stochastic gradient search on the underlying Grassmann manifold. OCA performs well in various applications, including face recognition, object recognition, and image retrieval. However, its computational complexity is high, which limits its use in real applications. In this dissertation, several efficient methods, including two-stage OCA, multi-stage OCA, scalable OCA, and two-stage sphere factor analysis (SFA), are proposed to address this problem and achieve both efficiency and accuracy. Two-stage and multi-stage OCA speed up the OCA search by reducing the dimension of the search space; scalable OCA reduces the computational complexity of OCA through a more efficient gradient-update scheme; two-stage SFA first reduces the search space and then searches for the optimal basis on a geometrically simpler manifold than that of OCA. Furthermore, a sparse OCA method is proposed by adding sparseness constraints to OCA. Additionally, an application of the efficient OCA methods to rapid classification trees is presented. Experimental results on face and object classification show that these methods achieve efficiency and discrimination simultaneously.
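To make the central idea concrete, the following is a minimal sketch (not the dissertation's actual algorithm) of the kind of search OCA performs: a nearest-neighbor-style separation criterion evaluated on a projected space, optimized over orthonormal bases by random tangent-direction moves on the Grassmann manifold with QR re-orthonormalization. The function names (`oca_objective`, `grassmann_search`), the exact form of the criterion, and the accept-if-improved update rule are all illustrative assumptions standing in for OCA's true objective and stochastic gradient updates.

```python
import numpy as np

rng = np.random.default_rng(0)

def oca_objective(U, X, y, eps=1e-8):
    """Hypothetical OCA-style criterion: for each sample, compare the
    distance to its nearest other-class neighbor against the distance
    to its nearest same-class neighbor in the projected space.
    Larger values indicate better class separation."""
    Z = X @ U                       # project n x d data onto d x k basis
    score = 0.0
    for i in range(len(Z)):
        d2 = np.sum((Z - Z[i]) ** 2, axis=1)
        same = d2[y == y[i]]
        same = same[same > eps]     # drop the sample's zero self-distance
        other = d2[y != y[i]]
        score += other.min() / (same.min() + eps)
    return score / len(Z)

def grassmann_search(X, y, k, iters=200, step=0.1):
    """Toy stand-in for OCA's stochastic gradient search: perturb the
    basis along a random tangent direction (orthogonal to span(U)),
    retract back to the manifold via QR, and keep moves that improve
    the criterion."""
    n, d = X.shape
    U, _ = np.linalg.qr(rng.standard_normal((d, k)))
    best = oca_objective(U, X, y)
    for _ in range(iters):
        D = rng.standard_normal((d, k))
        D -= U @ (U.T @ D)          # tangent component at U
        cand, _ = np.linalg.qr(U + step * D)
        f = oca_objective(cand, X, y)
        if f > best:
            U, best = cand, f
    return U, best

# Toy two-class data: two Gaussian clouds in 5 dimensions.
X = np.vstack([rng.standard_normal((20, 5)),
               rng.standard_normal((20, 5)) + 3.0])
y = np.array([0] * 20 + [1] * 20)
U, best = grassmann_search(X, y, k=2)
```

The O(n^2) pairwise-distance evaluation inside the objective at every candidate basis is exactly the cost that motivates the two-stage, multi-stage, and scalable variants discussed above.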