A key component in most parametric classifiers is the estimation of an inverse covariance matrix. In hyperspectral images the number of bands can be in the hundreds, leading to covariance matrices with tens of thousands of elements. Lately, the use of general linear regression models for estimating the inverse covariance matrix has been introduced in the time-series literature. This paper adapts and extends these ideas to ill-posed hyperspectral image classification problems. The results indicate that at least some of the approaches can give a lower classification error than traditional methods such as linear discriminant analysis (LDA) and regularized discriminant analysis (RDA). Furthermore, the results show that, contrary to earlier beliefs, long-range correlation coefficients appear necessary for building an effective hyperspectral classifier, and that the high correlations between neighboring bands seem to allow differing sparsity configurations of the covariance matrix to achieve similar classification results.
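A common regression-based inverse covariance estimator in the time-series literature is the modified Cholesky decomposition: each variable is regressed on all preceding variables, the negated coefficients fill a unit lower-triangular matrix T, the residual variances form a diagonal D, and the estimate is Σ⁻¹ = TᵀD⁻¹T. The Python sketch below illustrates this general idea only; the abstract does not give the paper's exact formulation, and the function name, the `ridge` parameter, and the specific regularization are assumptions introduced for illustration.

```python
import numpy as np

def regression_inverse_covariance(X, ridge=0.0):
    """Illustrative sketch (not the paper's exact method): estimate the
    inverse covariance via sequential regressions (modified Cholesky).

    Band j is regressed on bands 0..j-1; coefficients fill a unit
    lower-triangular T and residual variances a diagonal D, giving
    Sigma^{-1} = T' D^{-1} T.
    """
    X = X - X.mean(axis=0)              # center each band
    n, p = X.shape
    T = np.eye(p)                       # unit lower-triangular
    d = np.empty(p)
    d[0] = X[:, 0].var()
    for j in range(1, p):
        Z = X[:, :j]                    # predecessor bands
        # Assumed ridge term keeps the regression solvable when n < p
        # (the ill-posed case the abstract refers to).
        A = Z.T @ Z + ridge * np.eye(j)
        phi = np.linalg.solve(A, Z.T @ X[:, j])
        T[j, :j] = -phi                 # negated coefficients enter T
        d[j] = (X[:, j] - Z @ phi).var()
    return T.T @ np.diag(1.0 / d) @ T   # Sigma^{-1} = T' D^{-1} T

# Example: 50 labeled pixels with 100 bands (n << p), stabilized by the ridge term.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 100))
P = regression_inverse_covariance(X, ridge=1e-2)
```

Zeroing selected entries of T (e.g., keeping only coefficients for nearby or chosen long-range bands) yields the differing sparsity configurations of the inverse covariance that the abstract discusses.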