Journal: Soft Computing - A Fusion of Foundations, Methodologies and Applications

HILK++: an interpretability-guided fuzzy modeling methodology for learning readable and comprehensible fuzzy rule-based classifiers


Abstract

This work presents a methodology for building interpretable fuzzy systems for classification problems. We consider interpretability from two points of view: (1) readability of the system description and (2) comprehensibility of the explanations of the system behavior. The fuzzy modeling methodology named Highly Interpretable Linguistic Knowledge (HILK) is upgraded. First, a feature selection procedure based on crisp decision trees is carried out. Second, several strong fuzzy partitions are automatically generated from experimental data for each selected input; all partitions for an input are compared and the one that best fits the data distribution is selected. Third, a set of linguistic rules is defined by combining the previously generated linguistic variables. Then, a linguistic simplification procedure guided by a novel interpretability index is applied to obtain a more compact and general rule set with minimal loss of accuracy. Finally, partition tuning based on two efficient search strategies increases the system accuracy while preserving high interpretability. Results obtained on several benchmark classification problems are encouraging: they show the ability of the new methodology to generate highly interpretable fuzzy rule-based classifiers while yielding accuracy comparable to that achieved by other methods such as neural networks and C4.5. The best configuration of HILK depends on the specific problem under consideration, but it is important to remark that HILK is flexible enough (thanks to the combination of several algorithms in each modeling stage) to be easily adaptable to a wide range of problems.
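The first two stages described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: scikit-learn's `DecisionTreeClassifier` stands in for the crisp decision tree used for feature selection, and the strong-partition builder below is a hypothetical simplification that uses uniformly spaced triangular membership functions (the defining property of a strong fuzzy partition — memberships summing to 1 at every point of the input range — still holds).

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

# Stage 1: feature selection via a crisp decision tree.
# Keep only the inputs the fitted tree actually splits on.
X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
selected = np.flatnonzero(tree.feature_importances_ > 0)

# Stage 2: a uniform strong fuzzy partition over one selected input.
# Triangular membership functions on evenly spaced centers; for any x
# inside the data range, the memberships sum to exactly 1.
def strong_partition(values, n_terms=3):
    lo, hi = values.min(), values.max()
    centers = np.linspace(lo, hi, n_terms)
    step = centers[1] - centers[0]
    def membership(x):
        # Shape (len(x), n_terms): one column per linguistic term.
        return np.clip(1.0 - np.abs(x[:, None] - centers[None, :]) / step,
                       0.0, 1.0)
    return centers, membership

centers, mu = strong_partition(X[:, selected[0]])
memberships = mu(X[:, selected[0]])
print(np.allclose(memberships.sum(axis=1), 1.0))  # → True
```

In the actual methodology, several candidate partitions per input are generated and compared against the data distribution; here a single uniform partition with three terms is shown only to make the strong-partition property concrete.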
