IEEE Transactions on Fuzzy Systems

Support Vector Learning for Fuzzy Rule-Based Classification Systems


Abstract

Designing a fuzzy rule-based classification system (fuzzy classifier) with good generalization ability in a high-dimensional feature space has long been an active research topic. As a powerful machine learning approach for pattern recognition problems, the support vector machine (SVM) is known to have good generalization ability. More importantly, an SVM can work very well in a high- (or even infinite-) dimensional feature space. This paper investigates the connection between fuzzy classifiers and kernel machines, establishes a link between fuzzy rules and kernels, and proposes a learning algorithm for fuzzy classifiers. We first show that a fuzzy classifier implicitly defines a translation-invariant kernel under the assumption that all membership functions associated with the same input variable are generated by location transformation of a reference function. Fuzzy inference on the IF-part of a fuzzy rule can thus be viewed as evaluating the kernel function. The kernel function is then proven to be a Mercer kernel if the reference functions meet a certain spectral requirement. The corresponding fuzzy classifier is named a positive-definite fuzzy classifier (PDFC). A PDFC can be built from the given training samples using a support vector learning approach, with the IF-parts of the fuzzy rules given by the support vectors. Since the learning process minimizes an upper bound on the expected risk (expected prediction error) rather than the empirical risk (training error), the resulting PDFC usually generalizes well. Moreover, because of the sparsity of the SVM solution, the number of fuzzy rules is independent of the dimension of the input space; in this sense, the "curse of dimensionality" is avoided. Finally, PDFCs with different reference functions are constructed using the support vector learning approach. The performance of the PDFCs is illustrated by extensive experimental results, and comparisons with other methods are provided.
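To make the kernel view concrete, below is a minimal sketch, not the authors' implementation, that assumes Gaussian reference functions and the product t-norm; under that assumption the IF-part of a rule centered at a point z evaluates to a translation-invariant RBF (Mercer) kernel, and an off-the-shelf SVM trained with this kernel yields one fuzzy rule per support vector. The names fuzzy_product_kernel, build_pdfc, and the bandwidth SIGMA are illustrative assumptions.

```python
# Sketch of a PDFC built via support vector learning (illustrative, not the paper's code).
# With Gaussian reference functions a_k(u) = exp(-u^2 / (2*sigma^2)) and the product
# t-norm, the IF-part of a rule centered at z evaluates to
#   prod_k a_k(x_k - z_k) = exp(-||x - z||^2 / (2*sigma^2)),
# i.e. a translation-invariant Mercer kernel.
import numpy as np
from sklearn.svm import SVC

SIGMA = 1.0  # assumed bandwidth of the Gaussian reference function


def fuzzy_product_kernel(X, Z, sigma=SIGMA):
    """Gram matrix K[i, j] = prod_k exp(-(X[i,k] - Z[j,k])^2 / (2*sigma^2))."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))


def build_pdfc(X_train, y_train, C=1.0):
    """Fit an SVM with the fuzzy-rule kernel; each support vector becomes the
    center of one IF-part fuzzy rule, its dual coefficient the rule's signed weight."""
    clf = SVC(C=C, kernel=fuzzy_product_kernel)
    clf.fit(X_train, y_train)
    rules = {
        "centers": X_train[clf.support_],   # IF-part rule centers (support vectors)
        "weights": clf.dual_coef_.ravel(),  # signed rule weights (alpha_i * y_i)
        "bias": clf.intercept_[0],
    }
    return clf, rules


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))            # 5-dimensional feature space
    y = (X[:, 0] * X[:, 1] > 0).astype(int)  # toy labels
    clf, rules = build_pdfc(X, y)
    print("number of fuzzy rules (support vectors):", len(rules["centers"]))
    print("training accuracy:", clf.score(X, y))
```

Note that the rule count is determined by the number of support vectors rather than by a grid over the input dimensions, which is the sense in which the approach sidesteps the curse of dimensionality.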
