International Conference on Algorithms and Architectures for Parallel Processing

Feature Selection Method Based on Feature's Classification Bias and Performance



Abstract

Feature selection is one of the most important dimensionality reduction techniques for big data problems. Common feature evaluation criteria measure each feature with a single global score. This strategy has two shortcomings: partially predominant features are submerged, and the classification redundancy among multiple features is estimated inappropriately. In this paper, a new feature selection method based on Classification Bias and Classification Performance is proposed. Classification bias describes a feature's inclination to assign instances to each class, and classification performance measures the consistency of that bias with prior class information, thereby assessing the feature's classification ability. Each feature is thus reconstructed as a classification performance vector. This vectorized representation of a feature's classification ability helps partially predominant features win out, and the classification redundancy among multiple features is easily measured in the new space. Experimental results demonstrate the effectiveness of the new method on both low-dimensional and high-dimensional data.
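The abstract does not give the exact formulas for classification bias or classification performance, so the following is only a minimal sketch of the idea in Python. It assumes a nearest-class-mean rule on a single feature as a stand-in for the bias measure, uses per-class accuracy as a hypothetical classification performance vector, and filters redundant features by comparing those vectors; the function names and the redundancy threshold are illustrative and not taken from the paper.

```python
import numpy as np

def per_class_performance(x, y, classes):
    """Classify each sample by the nearest class mean of this single feature,
    then record per-class accuracy. Hypothetical stand-in for the paper's
    classification bias / classification performance measures."""
    means = np.array([x[y == c].mean() for c in classes])
    pred = classes[np.argmin(np.abs(x[:, None] - means[None, :]), axis=1)]
    return np.array([(pred[y == c] == c).mean() for c in classes])

def select_features(X, y, k, redundancy_tol=0.05):
    """Rank features by the best entry of their performance vector, so a
    feature that is strong on only one class ("partially predominant") can
    still win out; skip candidates whose performance vectors are nearly
    identical to an already selected one (a crude redundancy check)."""
    classes = np.unique(y)
    perf = np.array([per_class_performance(X[:, j], y, classes)
                     for j in range(X.shape[1])])   # shape: (n_features, n_classes)
    order = np.argsort(-perf.max(axis=1))           # best single-class score first
    selected = []
    for j in order:
        if all(np.linalg.norm(perf[j] - perf[s]) > redundancy_tol for s in selected):
            selected.append(int(j))
        if len(selected) == k:
            break
    return selected, perf

# Toy usage: feature 2 separates only class 1, yet can still be selected
# because ranking uses the best single-class score rather than a global one.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = rng.integers(0, 3, size=200)
X[:, 2] += (y == 1) * 2.0
print(select_features(X, y, k=3)[0])
```

Ranking by the best single-class entry of the vector, rather than by one averaged score, is what lets a feature that helps only one class survive the selection, which is the behavior the abstract attributes to the vectorized representation.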

