Neurocomputing > Feature self-representation based hypergraph unsupervised feature selection via low-rank representation

Feature self-representation based hypergraph unsupervised feature selection via low-rank representation



Abstract

Dimension reduction methods have attracted much attention because they can effectively mitigate the 'curse of dimensionality' problem. In this paper, we propose an unsupervised feature selection method that efficiently selects a subset of informative features from unlabeled data. We integrate a low-rank constraint, hypergraph theory, and the self-representation property of features into a unified framework for unsupervised feature selection. Specifically, we represent each feature by the other features, conducting unsupervised feature selection via this feature-level self-representation property. We then embed a low-rank constraint to capture the relations among features. Moreover, a hypergraph regularizer is employed to capture both the high-order relations and the local structure of the data. This enables the proposed model to account for both the global structure of the data (via the low-rank constraint) and its local structure (via the hypergraph regularizer). We adopt an l(2,p)-norm regularizer, which encourages row sparsity in the representation coefficients. As a result, the proposed model is more robust than previous models, yielding a better feature selection model. Experimental results on benchmark datasets showed that the proposed method effectively selected the most informative features by removing the adverse effect of irrelevant/redundant features, compared to the state-of-the-art methods. (C) 2017 Elsevier B.V. All rights reserved.
