Journal of Testing and Evaluation

Iterative Kernel Principal Component for Large-Scale Data Set


Abstract

Kernel principal component analysis (KPCA) is a popular nonlinear feature extraction method that uses eigendecomposition to extract principal components in the feature space. Most existing approaches are infeasible for large-scale data sets because of their extensive storage and computation costs. To overcome these disadvantages, an efficient iterative method for computing kernel principal components is proposed. First, power iteration is used to compute the largest eigenvalue and its corresponding eigenvector. Then Schur-Wielandt deflation is applied repeatedly to obtain the remaining higher-order eigenvectors. The procedure never forms or stores the full kernel matrix; instead, each row of the kernel matrix is computed sequentially during the iterations. Thus, the kernel principal components can be obtained without relying on the traditional eigendecomposition. The space complexity of the proposed method is O(m), and the time complexity is also greatly reduced. We illustrate the effectiveness of our approach through a series of experiments on real data sets.
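To make the idea concrete, the following minimal Python/NumPy sketch illustrates the row-by-row strategy described in the abstract: the kernel matrix is never formed, each matrix-vector product K v is assembled one kernel row at a time, the leading eigenpair is found by power iteration, and further components are obtained by deflating the eigenpairs already found. The RBF kernel, the parameter names, and the use of Hotelling-style deflation (as a stand-in for the paper's Schur-Wielandt deflation) are assumptions made for illustration, and kernel centering is omitted; this is not the authors' exact algorithm.

import numpy as np

def rbf_kernel_row(X, i, gamma):
    # i-th row of the RBF kernel matrix, computed on demand and never stored
    diff = X - X[i]
    return np.exp(-gamma * np.einsum("ij,ij->i", diff, diff))

def kernel_matvec(X, v, gamma):
    # K @ v built one kernel row at a time, using only O(m) extra memory
    out = np.empty(X.shape[0])
    for i in range(X.shape[0]):
        out[i] = rbf_kernel_row(X, i, gamma) @ v
    return out

def iterative_kpca(X, n_components, gamma=0.5, n_iter=200, tol=1e-8, seed=0):
    # Leading eigenpairs of the (uncentered) kernel matrix via power iteration,
    # deflating already-found components so later iterations converge to the
    # next-largest eigenpair.
    m = X.shape[0]
    rng = np.random.default_rng(seed)
    eigvals, eigvecs = [], []
    for _ in range(n_components):
        v = rng.standard_normal(m)
        v /= np.linalg.norm(v)
        lam_prev = 0.0
        for _ in range(n_iter):
            w = kernel_matvec(X, v, gamma)
            # Implicit deflation: K_j v = K v - sum_i lam_i u_i (u_i^T v)
            for lam_i, u_i in zip(eigvals, eigvecs):
                w -= lam_i * (u_i @ v) * u_i
            lam = np.linalg.norm(w)   # eigenvalue estimate (K is PSD)
            v = w / lam
            if abs(lam - lam_prev) < tol * lam:
                break
            lam_prev = lam
        eigvals.append(lam)
        eigvecs.append(v)
    return np.array(eigvals), np.array(eigvecs)

if __name__ == "__main__":
    X = np.random.default_rng(1).standard_normal((500, 5))
    vals, vecs = iterative_kpca(X, n_components=3, gamma=0.1)
    print(vals)

Because only one kernel row is held in memory at a time, the peak storage grows linearly with the number of samples rather than quadratically, which is the point of the O(m) space claim above.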

