International Conference on Advanced Data Mining and Applications

Unsupervised Hypergraph Feature Selection with Low-Rank and Self-Representation Constraints


Abstract

Unsupervised feature selection is designed to select a subset of informative features from unlabeled data so as to avoid the 'curse of dimensionality' and thereby enable efficient computation and storage. In this paper, we integrate the feature-level self-representation property, a low-rank constraint, a hypergraph regularizer, and a sparsity-inducing regularizer (i.e., an ℓ2,1-norm regularizer) into a unified framework for unsupervised feature selection. Specifically, we represent each feature as a combination of the other features, and use this feature-level self-representation property to rank the importance of the features. We then embed a low-rank constraint to capture the relations among features, and a hypergraph regularizer to capture both the high-order relations and the local structure of the samples. Finally, we use an ℓ2,1-norm regularizer to induce sparsity, so that the model outputs the informative features that satisfy the above constraints. The resulting feature selection model thus takes into account both the global structure of the samples (via the low-rank constraint) and the local structure of the data (via the hypergraph regularizer), rather than only one of them as in previous studies. This makes the proposed model more robust and stable than previous models. Experimental results on benchmark datasets show that, compared with state-of-the-art methods, the proposed method effectively selects the most informative features by removing the adverse effect of redundant/noisy features.
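For illustration, the following is a minimal sketch of how these four components can be combined into a single objective; the exact formulation, the variable names, and the trade-off parameters λ1, λ2, λ3 are assumptions made here and are not taken from the paper:

\min_{W} \; \|X - XW\|_F^2 \;+\; \lambda_1 \, \mathrm{tr}\!\big((XW)^{\top} L_H \,(XW)\big) \;+\; \lambda_2 \|W\|_{*} \;+\; \lambda_3 \|W\|_{2,1}

where X ∈ R^{n×d} is the data matrix, W ∈ R^{d×d} is the feature-level self-representation coefficient matrix, L_H is a hypergraph Laplacian built over the samples, \|W\|_{*} (the nuclear norm) serves as a convex surrogate for the low-rank constraint, and \|W\|_{2,1} induces row sparsity. Features whose corresponding rows of W have the largest ℓ2-norms would then be selected as the most informative ones.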
