IEEE International Conference on Fuzzy Systems

Feature and decision level fusion using multiple kernel learning and fuzzy integrals


Abstract

Kernel methods for classification are a well-studied area in which data are implicitly mapped from a lower-dimensional space to a higher-dimensional space to improve classification accuracy. However, for most kernel methods one must still choose which kernel to use for the problem at hand. Since there is, in general, no way of knowing which kernel is best, multiple kernel learning (MKL) is a technique for learning the aggregation of a set of valid kernels into a single, ideally superior, kernel. The aggregation can be done using weighted sums of the pre-computed kernels, but determining the summation weights is not a trivial task. A popular and successful approach to this problem is MKL-group lasso (MKLGL), in which the weights and the classification surface are solved simultaneously by iterating a min-max optimization until convergence. In this work, we propose an ℓp-norm genetic algorithm MKL (GAMKL), which uses a genetic algorithm to learn the weights of a set of pre-computed kernel matrices for use in MKL classification. We prove that this approach is equivalent to a previously proposed fuzzy integral aggregation of multiple kernels called fuzzy integral: genetic algorithm (FIGA). We also propose a second algorithm, decision-level fuzzy integral MKL (DeFIMKL), in which a fuzzy measure for the fuzzy Choquet integral is learned via quadratic programming, and the decision value, viz. the class label, is computed using fuzzy Choquet integral aggregation. Experiments on several benchmark data sets show that our proposed algorithms can outperform MKLGL when applied to support vector machine (SVM)-based classification.
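To make the two fusion levels in the abstract concrete, the following is a sketch in standard MKL and fuzzy-integral notation (our notation, not taken from the paper): feature-level fusion builds a single kernel as a weighted sum of the m pre-computed base kernels, while decision-level fusion aggregates the per-kernel decision values with a discrete Choquet integral over a fuzzy measure g. The ℓp normalization on the weights is an assumption consistent with ℓp-norm MKL; the paper's exact constraints may differ.

\[
K_{\sigma}(\mathbf{x}_i,\mathbf{x}_j) \;=\; \sum_{k=1}^{m} \sigma_k\, K_k(\mathbf{x}_i,\mathbf{x}_j), \qquad \sigma_k \ge 0, \quad \lVert \boldsymbol{\sigma} \rVert_p = 1,
\]

\[
C_g\big(f(\mathbf{x})\big) \;=\; \sum_{k=1}^{m} f_{\pi(k)}(\mathbf{x})\,\big[\, g(A_k) - g(A_{k-1}) \,\big], \qquad A_k = \{\pi(1),\dots,\pi(k)\}, \quad A_0 = \emptyset,
\]

where \(\pi\) is a permutation sorting the per-kernel decision values so that \(f_{\pi(1)}(\mathbf{x}) \ge \dots \ge f_{\pi(m)}(\mathbf{x})\) and \(g(A_0) = 0\).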
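A minimal Python sketch of the decision-level aggregation step (the Choquet integral used by DeFIMKL-style fusion) follows. The function name choquet_aggregate and the dict-based representation of the fuzzy measure are illustrative choices of ours, and the measure is assumed to be already learned; in the paper it is learned via quadratic programming.

import numpy as np

def choquet_aggregate(decision_values, fuzzy_measure):
    # Discrete Choquet integral of per-kernel SVM decision values f_k(x).
    # fuzzy_measure maps frozensets of kernel indices A to g(A) in [0, 1],
    # with g(empty set) = 0, g(all indices) = 1, and g monotone.
    f = np.asarray(decision_values, dtype=float)
    order = np.argsort(-f)                    # kernel indices sorted by decreasing decision value
    total, g_prev = 0.0, 0.0
    for k in range(1, len(f) + 1):
        A_k = frozenset(order[:k].tolist())   # the k sources with the largest decision values
        g_k = fuzzy_measure[A_k]
        total += f[order[k - 1]] * (g_k - g_prev)
        g_prev = g_k
    return total                              # fused decision value; its sign gives the class label

When g is additive (g(A) equal to the sum of per-kernel weights in A), the Choquet integral reduces to a plain weighted sum of the decision values, which is the sense in which decision-level fuzzy-integral fusion generalizes weighted-sum fusion.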
