Information-Theoretic Feature Extraction and Selection for Robust Classification

Abstract

Classification performance on recognition tasks can be improved by selecting highly discriminative features from a low-dimensional linear representation of the data. High-dimensional multivariate data can be represented in lower dimensions by unsupervised feature extraction techniques, which attempt to remove redundancy in the data and/or resolve multivariate prediction problems. These extracted low-dimensional features of the raw data may not ensure good class discrimination; therefore, supervised feature selection methods motivated by information-theoretic approaches can improve recognition performance with fewer features. The proposed hybrid feature selection methods select features with higher class discrimination than feature-class mutual information (MI), the Fisher criterion, or unsupervised selection by variance, resulting in much improved recognition performance. The feature-class MI criterion and the hybrid feature selection methods are computationally scalable and are optimal selectors for statistically independent features.
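As a rough illustration of the extract-then-select pipeline the abstract describes (not the authors' exact method), the sketch below uses PCA as the unsupervised extractor and ranks the extracted features by feature-class mutual information before training a classifier. The dataset, the number of components, the value of k, the MI estimator (scikit-learn's mutual_info_classif), and the logistic-regression classifier are all illustrative assumptions.

```python
# Minimal sketch of unsupervised extraction followed by supervised,
# MI-based feature selection. All concrete choices here are assumptions,
# not the paper's configuration.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Unsupervised feature extraction: PCA removes linear redundancy.
pca = PCA(n_components=30).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

# Supervised feature selection: score each extracted feature by its
# mutual information with the class labels and keep the top k.
mi = mutual_info_classif(Z_train, y_train, random_state=0)
k = 10
top = np.argsort(mi)[::-1][:k]

clf = LogisticRegression(max_iter=1000).fit(Z_train[:, top], y_train)
acc = accuracy_score(y_test, clf.predict(Z_test[:, top]))
print(f"accuracy with top-{k} MI-ranked features: {acc:.3f}")
```

The paper's hybrid criteria would replace the plain MI ranking in this sketch; variance-based or Fisher-criterion ranking could be swapped in at the same step for comparison.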
