Conference: International Conference on Advanced Data Mining and Applications

Supervised Feature Selection by Robust Sparse Reduced-Rank Regression



Abstract

Feature selection, which keeps discriminative features and removes noisy or irrelevant ones from high-dimensional data, has become a vitally important technique in machine learning, since noisy or irrelevant features can degrade the performance of classification and regression. Moreover, feature selection has been applied in a wide range of real-world applications because of its interpretability. Motivated by the successful use of sparse learning in machine learning and of reduced-rank regression in statistics, in this article we propose a novel supervised feature selection method that combines a reduced-rank regression model with a sparsity-inducing regularizer. In contrast to state-of-the-art feature selection methods, the proposed method (1) is built upon an ℓ_(2,p)-norm loss function and an ℓ_(2,p)-norm regularizer, integrating subspace learning and feature selection into a unified framework; (2) selects discriminative features flexibly, since the ℓ_(2,p)-norm allows the degree of sparsity to be controlled and is robust to outlier samples; and (3) is both interpretable and stable, because it embeds subspace learning (which yields stable models) into the feature selection framework (which yields interpretable results). Experimental results on eight multi-output data sets demonstrate the effectiveness of our model compared with state-of-the-art methods on regression tasks.
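To make the idea concrete, the following is a minimal sketch of sparse reduced-rank regression for feature selection, not the paper's exact algorithm. It uses the ℓ_(2,1) special case (p = 1) of the ℓ_(2,p) penalty, solved by iteratively reweighted least squares, and enforces the rank constraint heuristically by truncating the SVD of the coefficient matrix; the function name and all parameters are illustrative assumptions:

```python
import numpy as np

def srrr_feature_selection(X, Y, rank=2, lam=0.1, n_iter=50, eps=1e-8):
    """Sketch: minimize ||Y - XW||_F^2 + lam * ||W||_{2,1}
    subject to rank(W) <= rank.

    Row sparsity of W is obtained via IRLS (the l2,1 penalty acts like
    a ridge term with per-row weights 1 / (2 * ||w_i||_2)); the rank
    constraint is imposed by SVD truncation after each update.
    Features are ranked by the row norms of W.
    """
    W = np.linalg.pinv(X) @ Y                       # least-squares init
    for _ in range(n_iter):
        # IRLS reweighting for the l2,1 regularizer
        row_norms = np.sqrt((W ** 2).sum(axis=1)) + eps
        D = np.diag(1.0 / (2.0 * row_norms))
        W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)
        # project onto the rank constraint with a truncated SVD
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        s[rank:] = 0.0
        W = (U * s) @ Vt
    scores = np.sqrt((W ** 2).sum(axis=1))          # per-feature scores
    return W, np.argsort(-scores)
```

On synthetic multi-output data whose responses depend on a few features through a low-rank coefficient matrix, the row norms of the learned W concentrate on the informative features, so the top of the returned ranking recovers them.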
