International Conference on Advanced Data Mining and Applications

Supervised Feature Selection by Robust Sparse Reduced-Rank Regression



Abstract

Feature selection, which keeps discriminative features (i.e., removes noisy and irrelevant ones) from high-dimensional data, has become a vitally important technique in machine learning, since noisy and irrelevant features can degrade the performance of classification and regression. Feature selection is also widely used in real-world applications because of its interpretability. Motivated by the success of sparse learning in machine learning and of reduced-rank regression in statistics, this article proposes a novel supervised feature selection method that combines a reduced-rank regression model with a sparsity-inducing regularizer. In contrast to state-of-the-art feature selection methods, the proposed method: (1) is built upon an ℓ2,p-norm loss function and an ℓ2,p-norm regularizer, integrating subspace learning and feature selection into a unified framework; (2) selects discriminative features more flexibly, since it can control the degree of sparsity and is robust to outlier samples; and (3) is both interpretable and stable, because it embeds subspace learning (which yields stable models) into the feature selection framework (which yields interpretable results). Experimental results on eight multi-output data sets demonstrate the effectiveness of the proposed model compared with state-of-the-art methods on regression tasks.
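To make the idea concrete, below is a minimal sketch of how such a model could be optimized, assuming the objective takes the common form min_C ||Y − XC||_{2,p}^p + λ||C||_{2,p}^p with rank(C) ≤ r, where the row norms of C score the features. This is not the authors' algorithm: the function name robust_sparse_rrr, the iteratively reweighted least squares (IRLS) updates, and the truncated-SVD rank step are illustrative assumptions, chosen because they are standard devices for ℓ2,p-norm terms and low-rank constraints.

```python
import numpy as np

def robust_sparse_rrr(X, Y, rank=2, p=1.0, lam=1.0, n_iter=50, eps=1e-6):
    """IRLS-style sketch (hypothetical, not the paper's optimizer).

    Assumed objective:
        min_C ||Y - X C||_{2,p}^p + lam * ||C||_{2,p}^p,  rank(C) <= rank
    where ||M||_{2,p}^p = sum_i ||row_i(M)||_2^p.

    Row-wise l2,p terms are handled by iterative reweighting; the rank
    constraint is enforced heuristically by a truncated SVD of the
    weighted ridge solution at each iteration.
    """
    n, d = X.shape
    w = np.ones(n)   # per-sample weights: down-weight outliers (robust loss)
    v = np.ones(d)   # per-feature weights: drive uninformative rows toward zero
    C = np.zeros((d, Y.shape[1]))
    for _ in range(n_iter):
        # Weighted ridge solution, then rank truncation.
        XtWX = X.T @ (w[:, None] * X)
        XtWY = X.T @ (w[:, None] * Y)
        C_full = np.linalg.solve(XtWX + lam * np.diag(v), XtWY)
        U, s, Vt = np.linalg.svd(C_full, full_matrices=False)
        C = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Re-estimate IRLS weights from the l2,p terms.
        res = np.linalg.norm(Y - X @ C, axis=1)        # per-sample residual norm
        w = (p / 2.0) * (res**2 + eps) ** ((p - 2) / 2)
        row = np.linalg.norm(C, axis=1)                # per-feature row norm
        v = (p / 2.0) * (row**2 + eps) ** ((p - 2) / 2)
    scores = np.linalg.norm(C, axis=1)   # larger row norm => more relevant feature
    return C, scores

# Toy usage on synthetic multi-output data: features 0-3 carry the signal.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))
B_true = np.zeros((30, 5))
B_true[:4] = rng.standard_normal((4, 5))
Y = X @ B_true + 0.1 * rng.standard_normal((200, 5))
C, scores = robust_sparse_rrr(X, Y, rank=3, p=1.0, lam=0.5)
print(sorted(np.argsort(scores)[::-1][:4]))  # informative features should rank highest
```

Smaller values of p (toward 0 < p ≤ 1) make both terms less sensitive to large residuals and enforce stronger row sparsity, which is how the degree of sparseness and robustness to outliers mentioned above could be controlled.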
