IEEE Transactions on Circuits and Systems for Video Technology

Nonlinear Discriminant Analysis on Embedded Manifold


Abstract

Traditional manifold learning algorithms, such as ISOMAP, LLE, and Laplacian Eigenmap, mainly focus on uncovering the latent low-dimensional geometric structure of the training samples in an unsupervised manner, ignoring useful class information. Therefore, the derived low-dimensional representations are not necessarily optimal in discriminative capability. In this paper, we study the discriminant analysis problem by considering the nonlinear manifold structure of the data space. To this end, we first propose a new clustering algorithm, called Intra-Cluster Balanced K-Means (ICBKM), which partitions the samples into multiple clusters while ensuring that the classes are balanced within each cluster; approximately, each cluster can be considered a local patch on the embedded manifold. Then, the local discriminative projections for the different clusters are calculated simultaneously by optimizing the global Fisher criterion based on the cluster-weighted data representation. Compared with traditional linear/kernel discriminant analysis (KDA) algorithms, our proposed algorithm has the following characteristics: 1) it is essentially a KDA algorithm with a specific geometry-adaptive kernel tailored to the specific data structure, in contrast to traditional KDA, in which the kernel is fixed and independent of the data set; 2) it is approximately a locally linear but globally nonlinear discriminant analyzer; 3) it does not need to store the original samples to compute the low-dimensional representation of a new data point; and 4) it is computationally efficient compared with traditional KDA when the number of samples is large. A toy problem on artificial data demonstrates the effectiveness of our proposed algorithm in deriving discriminative representations for problems with a nonlinear classification hyperplane.
The face recognition experiments on the YALE and CMU PIE databases show that our proposed algorithm significantly outperforms linear discriminant analysis (LDA) as well as mixture LDA, and achieves higher accuracy than KDA with traditional kernels.
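The abstract does not give the details of ICBKM, but the core idea — k-means-style clustering with the constraint that every cluster receives a balanced share of each class — can be sketched as follows. This is a hypothetical reconstruction, not the authors' implementation: the greedy per-class quota assignment and the function name `balanced_kmeans` are assumptions made for illustration.

```python
import numpy as np

def balanced_kmeans(X, y, k, n_iter=20, seed=None):
    """Hypothetical sketch of intra-cluster balanced k-means (ICBKM).

    Assigns samples to k clusters so that every cluster receives
    (roughly) the same number of samples from each class, then
    recomputes centroids as in standard k-means. The greedy
    quota-based assignment below is an illustrative choice, not
    necessarily the scheme used in the paper.
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    # Initialize centroids from randomly chosen samples.
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Assign each class's samples separately under a per-cluster
        # quota, which enforces class balance inside every cluster.
        for c in classes:
            idx = np.where(y == c)[0]
            dists = ((X[idx, None, :] - centroids[None, :, :]) ** 2).sum(-1)
            quota = int(np.ceil(len(idx) / k))
            counts = np.zeros(k, dtype=int)
            # Greedy: process (sample, cluster) pairs by increasing distance.
            flat_order = np.argsort(dists, axis=None)
            rows, cols = np.unravel_index(flat_order, dists.shape)
            assigned = set()
            for i, j in zip(rows, cols):
                if i in assigned or counts[j] >= quota:
                    continue
                labels[idx[i]] = j
                counts[j] += 1
                assigned.add(i)
        # Standard k-means centroid update.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```

Each resulting cluster then plays the role of a local patch on the manifold, within which a linear Fisher-style projection can be fitted; the balance constraint ensures every patch contains enough samples of each class for that local discriminant step to be well posed.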
