We present a simple, computationally efficient recognition algorithm that systematically extracts useful information from large-dimensional neural datasets. The technique is based on classwise Principal Component Analysis (PCA), which exploits the distribution characteristics of each class to discard the non-informative subspace. We propose a two-step procedure: first, removal of the sparse, non-informative subspace of the large-dimensional data; second, a linear combination of the data in the remaining subspace to extract meaningful features for efficient classification. Our method yields a significant improvement over standard discriminant-analysis-based methods. Classification results are reported for iEEG and EEG signals recorded from the human brain.
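The two-step procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the variance-retention parameter, and the choice of orthonormalising the pooled class bases via QR are all assumptions made for the example.

```python
import numpy as np

def classwise_pca_features(X, y, var_keep=0.95):
    """Sketch of a classwise-PCA feature extractor (illustrative only).

    Step 1: for each class, keep only the principal directions that
    explain `var_keep` of that class's variance, discarding the
    sparse, non-informative subspace.
    Step 2: project all samples onto the union of the retained class
    subspaces -- a linear combination of the data in the remaining
    subspace -- to form features for a downstream classifier.
    """
    bases = []
    for c in np.unique(y):
        Xc = X[y == c]
        Xc = Xc - Xc.mean(axis=0)
        # SVD yields principal directions sorted by explained variance
        _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        var_ratio = s**2 / np.sum(s**2)
        k = int(np.searchsorted(np.cumsum(var_ratio), var_keep)) + 1
        bases.append(Vt[:k])
    B = np.vstack(bases)          # stacked retained directions, one block per class
    # Orthonormalise the combined basis so redundant axes are merged
    Q, _ = np.linalg.qr(B.T)
    return X @ Q                  # feature matrix in the retained subspace
```

The per-class truncation is what distinguishes this from ordinary PCA: directions that are informative for one class are kept even if they carry little variance in the pooled data.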