
Non-convex approximation based l(0)-norm multiple indefinite kernel feature selection


Abstract

Multiple kernel learning (MKL) for feature selection uses kernels to explore complex properties of features, and has been shown to be among the most effective approaches to feature selection. A natural way to perform feature selection is to use the l(0)-norm to obtain sparse solutions. However, the optimization problem involving the l(0)-norm is NP-hard. Previous MKL methods therefore typically use the l(1)-norm to obtain sparse kernel combinations. The l(1)-norm, however, as a convex approximation of the l(0)-norm, sometimes cannot attain the desired solution of the l(0)-norm regularized problem and may lead to a loss of prediction accuracy. In contrast, various non-convex approximations of the l(0)-norm have been proposed and perform better in many linear feature selection methods. In this paper, we propose a novel l(0)-norm based MKL method (l(0)-MKL) for feature selection, with a non-convex approximation constraint on the kernel combination coefficients to select features automatically. Given the better empirical performance of indefinite kernels over positive definite kernels, our l(0)-MKL is built on the primal form of multiple indefinite kernel learning for feature selection. The non-convex optimization problem of l(0)-MKL is further reformulated as a difference of convex functions (DC) program and solved by the DC algorithm (DCA). Experiments on real-world datasets demonstrate that l(0)-MKL is superior to related state-of-the-art methods in both feature selection and classification performance.
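The abstract's two key ingredients can be sketched in a few lines: a non-convex surrogate for the l(0)-norm, and the DCA iteration that minimizes a difference of convex functions. The capped-l1 penalty below is one illustrative surrogate, and the 1-D objective is a toy example; the specific penalty, the threshold theta, and the toy objective are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def l0(w):
    # Exact sparsity count: number of nonzero coefficients.
    return int(np.count_nonzero(w))

def l1(w):
    # Convex surrogate: penalizes large coefficients as much as many.
    return float(np.sum(np.abs(w)))

def capped_l1(w, theta=0.5):
    # Non-convex surrogate (assumed illustrative choice): min(|w_i|, theta)/theta
    # saturates at 1 once |w_i| >= theta, so it tracks the l0 count rather
    # than the coefficient magnitudes.
    return float(np.sum(np.minimum(np.abs(w), theta)) / theta)

w = np.array([0.0, 0.01, 2.0, -3.0])
print(l0(w), round(l1(w), 2), round(capped_l1(w), 2))  # 3 5.01 2.02

# DC decomposition: capped_l1(w) = (l1(w) - sum(max(|w_i| - theta, 0))) / theta,
# a difference of two convex functions, which is the structure DCA exploits.

# Minimal DCA iteration on a toy DC objective f(x) = g(x) - h(x) with
# g(x) = x**4 / 4 and h(x) = x**2 (both convex): linearize h at the current
# iterate, then minimize the remaining convex problem in closed form.
x = 1.0
for _ in range(50):
    grad_h = 2.0 * x            # gradient of h(x) = x**2 at x_k
    x = grad_h ** (1.0 / 3.0)   # argmin_x g(x) - grad_h * x  =>  x**3 = grad_h
print(round(x, 5))              # converges to sqrt(2) ~ 1.41421, a minimizer of f
```

Note how the capped-l1 value (2.02) stays close to the true sparsity (3) while the l1 value (5.01) is dominated by coefficient magnitude; this is the gap the paper's non-convex constraint is designed to close.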
