IEEE International Symposium on Multiple-Valued Logic

Hierarchical Subspace Learning for Dimensionality Reduction to Improve Classification Accuracy in Large Data Sets



Abstract

Manifold learning is used for dimensionality reduction, with the goal of finding a projection subspace that increases the interclass variance and decreases the intraclass variance. However, a bottleneck for subspace learning methods often arises from the high dimensionality of datasets. In this paper, a hierarchical approach is proposed to scale subspace learning methods, with the goal of improving classification accuracy on large datasets by 3% to 10%. Different combinations of methods are studied. We assess the proposed method on five publicly available large datasets, for different eigenvalue-based subspace learning methods such as linear discriminant analysis, principal component analysis, generalized discriminant analysis, and reconstruction independent component analysis. To further examine the effect of the proposed method on various classification methods, we feed the resulting low-dimensional representations to linear discriminant analysis, quadratic discriminant analysis, k-nearest neighbor, and random forest classifiers. The resulting classification accuracies are compared to show the effectiveness of the hierarchical approach, with an average increase of 5% in classification accuracy.
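
The abstract does not spell out the algorithm, so the following is only a minimal sketch of one plausible hierarchical pipeline: the high-dimensional feature space is partitioned into blocks, each block is reduced with an eigenvalue-based method (PCA here), and the concatenated result is reduced again with a supervised subspace method before classification. The block size, the PCA/LDA combination, the k-nearest-neighbor classifier, and the synthetic data are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline


def blockwise_reduce(X, block_size=100, n_components=10, fitted=None):
    """First hierarchy level: reduce each contiguous feature block separately."""
    blocks, reducers = [], []
    for i, start in enumerate(range(0, X.shape[1], block_size)):
        Xb = X[:, start:start + block_size]
        if fitted is None:
            pca = PCA(n_components=min(n_components, Xb.shape[1])).fit(Xb)
        else:
            pca = fitted[i]
        blocks.append(pca.transform(Xb))
        reducers.append(pca)
    return np.hstack(blocks), reducers


# Synthetic high-dimensional data standing in for one of the large datasets.
X, y = make_classification(n_samples=2000, n_features=1000, n_informative=50,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Level 1: blockwise eigenvalue-based reduction, fitted on the training split only.
Z_tr, reducers = blockwise_reduce(X_tr)
Z_te, _ = blockwise_reduce(X_te, fitted=reducers)

# Level 2: a supervised subspace (LDA) followed by a k-nearest-neighbor classifier.
clf = Pipeline([("lda", LinearDiscriminantAnalysis(n_components=3)),
                ("knn", KNeighborsClassifier(n_neighbors=5))])
clf.fit(Z_tr, y_tr)
print("test accuracy:", clf.score(Z_te, y_te))
```

Under this reading, the hierarchy keeps every eigendecomposition small (one per feature block) instead of one decomposition over the full feature space, which is what lets the approach scale to high-dimensional data.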
