IEEE Transactions on Neural Networks and Learning Systems

Kernel Reconstruction ICA for Sparse Representation


Abstract

Independent component analysis with soft reconstruction cost (RICA) has recently been proposed to linearly learn sparse representations with an overcomplete basis, and it exhibits promising performance even on unwhitened data. However, linear RICA may not be effective for the majority of real-world data, because nonlinearly separable structures pervade the original data space. Moreover, RICA is essentially an unsupervised method and does not exploit class information. Motivated by the success of the kernel trick, which maps a nonlinearly separable data structure into a linearly separable one in a high-dimensional feature space, we propose a kernel RICA (kRICA) model that nonlinearly captures sparse representations in feature space. Furthermore, we extend the unsupervised kRICA to a supervised model by introducing a class-driven discrimination constraint, so that data samples from the same class are well represented by the corresponding subset of basis vectors. This constraint minimizes inhomogeneous representation energy while maximizing homogeneous representation energy, which is implicitly equivalent to simultaneously maximizing between-class scatter and minimizing within-class scatter. Experimental results demonstrate that the proposed algorithm is more effective than other state-of-the-art methods on several datasets.
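The linear RICA model that the paper builds on can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: it minimizes the soft-reconstruction cost lam * ||W x||_1 + ||W^T W x - x||_2^2 over an overcomplete basis W with plain gradient descent, using a smoothed absolute value for differentiability; all names, sizes, and hyperparameters are illustrative.

```python
import numpy as np

def rica_cost_grad(W, X, lam=0.5, eps=1e-8):
    """Soft-reconstruction ICA (RICA) cost and gradient.

    cost = ( lam * sum|W X|  +  ||W^T W X - X||_F^2 ) / n,
    where the soft reconstruction penalty replaces the hard
    orthonormality constraint of classical ICA, so the basis W
    may be overcomplete (k > d) and X need not be whitened.
    """
    n = X.shape[1]
    WX = W @ X                         # codes, shape (k, n)
    R = W.T @ WX - X                   # reconstruction residual, (d, n)
    smooth = np.sqrt(WX ** 2 + eps)    # smooth surrogate for |.|
    cost = (lam * smooth.sum() + (R ** 2).sum()) / n
    grad = (lam * (WX / smooth) @ X.T + 2.0 * (WX @ R.T + W @ R @ X.T)) / n
    return cost, grad

rng = np.random.default_rng(0)
d, k, n = 8, 16, 200                   # k > d: overcomplete basis
X = rng.standard_normal((d, n))        # unwhitened toy data
W = 0.1 * rng.standard_normal((k, d))

costs = []
for _ in range(500):                   # plain gradient descent; in practice
    c, g = rica_cost_grad(W, X)        # a quasi-Newton solver (e.g. L-BFGS)
    costs.append(c)                    # would be used instead
    W -= 0.01 * g
```

The cost decreases monotonically for a small enough step size; kRICA replaces the inner products `W @ X` with kernel evaluations so the same objective operates in feature space.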
