International Joint Conference on Artificial Intelligence (IJCAI-11)

ℓ_(2,1)-Norm Regularized Discriminative Feature Selection for Unsupervised Learning



Abstract

Compared with supervised learning for feature selection, it is much more difficult to select discriminative features in unsupervised learning due to the lack of label information. Traditional unsupervised feature selection algorithms usually select the features that best preserve the data distribution, e.g., the manifold structure, of the whole feature set. Under the assumption that the class labels of the input data can be predicted by a linear classifier, we incorporate discriminative analysis and ℓ_(2,1)-norm minimization into a joint framework for unsupervised feature selection. Different from existing unsupervised feature selection algorithms, our algorithm selects the most discriminative feature subset from the whole feature set in batch mode. Extensive experiments on different data types demonstrate the effectiveness of our algorithm.
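The following is a minimal sketch of the ℓ_(2,1)-norm idea referenced in the abstract, not the paper's exact formulation: features are ranked by the row norms of a matrix W fitted under an ℓ_(2,1) regularizer, which drives whole rows toward zero so that uninformative features drop out jointly. The k-means pseudo-labels, the IRLS update, and all function names below are illustrative assumptions standing in for the paper's discriminative analysis.

```python
# Illustrative sketch only: l2,1-regularized feature scoring with k-means
# pseudo-labels as a stand-in for unsupervised discriminative information.
import numpy as np
from sklearn.cluster import KMeans


def l21_norm(W):
    """Sum of the l2 norms of the rows of W (the l2,1 norm)."""
    return np.sqrt((W ** 2).sum(axis=1)).sum()


def l21_feature_scores(X, n_clusters=3, gamma=1.0, n_iter=30, eps=1e-8):
    """Score each feature by ||w_i||_2, where W approximately solves
    min_W ||X W - Y||_F^2 + gamma * ||W||_{2,1}
    via iteratively reweighted least squares (an assumed, generic solver)."""
    n, d = X.shape
    # Pseudo cluster-indicator matrix Y (n x c), a stand-in for label information.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    Y = np.eye(n_clusters)[labels]

    D = np.eye(d)  # reweighting matrix with D_ii = 1 / (2 ||w_i||_2)
    XtX, XtY = X.T @ X, X.T @ Y
    for _ in range(n_iter):
        W = np.linalg.solve(XtX + gamma * D, XtY)
        row_norms = np.sqrt((W ** 2).sum(axis=1)) + eps
        D = np.diag(1.0 / (2.0 * row_norms))
    return np.sqrt((W ** 2).sum(axis=1))  # larger score -> more discriminative


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 10))
    X[:, 0] += np.repeat([0.0, 4.0], 100)    # feature 0 separates two groups
    scores = l21_feature_scores(X, n_clusters=2)
    top_k = np.argsort(scores)[::-1][:3]      # batch selection of the top-3 features
    print("feature scores:", np.round(scores, 3))
    print("selected features:", top_k)
```

Because the ℓ_(2,1) penalty induces row-wise sparsity in W, the whole feature subset is obtained in one pass by thresholding or sorting the row norms, which is what the abstract refers to as selecting features in batch mode rather than one at a time.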

