Self-centralized jointly sparse maximum margin criterion for robust dimensionality reduction


Abstract

Linear discriminant analysis (LDA) is among the most popular supervised dimensionality reduction algorithms and has been widely adopted in pattern recognition and data mining. However, LDA has three major drawbacks. The first is the challenge posed by the small-sample-size (SSS) problem; the second is its sensitivity to outliers, caused by the use of squared L2-norms in the scatter loss evaluation; the third is that the feature loadings in the projection matrix are relatively redundant, which carries a risk of overfitting. In this paper, we put forward a novel functional expression for LDA that combines the maximum margin criterion (MMC) with a weighted strategy formulated by L1,2-norms to resist outliers. Meanwhile, we realize adaptive calculation of the weighted intra-class and global centroids to further reduce the influence of outliers, and employ the L2,1-norm to constrain row sparsity so that subspace learning and feature selection can be performed cooperatively. Besides, an effective alternating iterative algorithm is derived and its convergence is verified. The complexity analysis shows that the proposed algorithm can cope with large-scale data processing. The proposed model addresses the sensitivity to outliers and extracts the most representative features while effectively preventing overfitting. Experiments performed on several benchmark databases demonstrate that the proposed algorithm is more effective than other state-of-the-art methods and has better generalization performance. (C) 2020 Elsevier B.V. All rights reserved.
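As background for the criteria named above, the classical MMC objective and the L2,1-norm row-sparsity penalty can be sketched in standard notation; this is a minimal, textbook formulation and not the paper's exact self-centralized weighted objective, and the regularization weight \lambda is introduced here only for illustration:

\max_{W^{\top} W = I} \; \operatorname{tr}\big( W^{\top} (S_b - S_w) W \big), \qquad \|W\|_{2,1} = \sum_{i=1}^{d} \Big( \sum_{j=1}^{k} W_{ij}^{2} \Big)^{1/2},

where S_b and S_w are the between-class and within-class scatter matrices and W \in \mathbb{R}^{d \times k} is the projection matrix. Subtracting \lambda \|W\|_{2,1} from the MMC objective penalizes the L2-norms of the rows of W and drives entire rows to zero, which is how subspace learning and feature selection can be carried out jointly.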

Bibliographic details

  • Source
    Knowledge-Based Systems | 2020, Issue 28 | 106343.1-106343.15 | 15 pages
  • Author affiliations

    Nanjing Univ Sci & Technol Sch Comp Sci & Engn Nanjing 210094 Peoples R China;

    Shandong Agr Univ Coll Informat Sci & Engn Tai An 271018 Shandong Peoples R China;

    Chinese Acad Sci Res Ctr Precis Sensing & Control Inst Automat Beijing 100190 Peoples R China|Univ Chinese Acad Sci Beijing 101408 Peoples R China;

    Nanjing Univ Sci & Technol Sch Comp Sci & Engn Nanjing 210094 Peoples R China|Chinese Acad Sci Res Ctr Precis Sensing & Control Inst Automat Beijing 100190 Peoples R China;

  • Indexing information
  • Original format: PDF
  • Language: eng
  • CLC classification
  • Keywords

    Maximum margin criterion; Robustness; Adaptive centroid; L1,2-norm sparsity; Dimensionality reduction

