IEEE Transactions on Fuzzy Systems

FINGRAMS: Visual Representations of Fuzzy Rule-Based Inference for Expert Analysis of Comprehensibility

Abstract

Since Zadeh’s proposal and Mamdani’s seminal ideas, interpretability has been acknowledged as one of the most appreciated and valuable characteristics of fuzzy system identification methodologies. It represents the ability of fuzzy systems to formalize the behavior of a real system in a human-understandable way, by means of a set of linguistic variables and rules with a high semantic expressivity close to natural language. Interpretability analysis involves two main points of view: readability of the knowledge base description (regarding the complexity of fuzzy partitions and rules) and comprehensibility of the fuzzy system (regarding the implicit and explicit semantics embedded in fuzzy partitions and rules, as well as the fuzzy reasoning method). Readability has been thoroughly treated by many authors, who have proposed several criteria and metrics. Unfortunately, comprehensibility has usually been neglected because it involves cognitive aspects related to human reasoning, which are very hard to formalize and to deal with. This paper proposes a new paradigm for fuzzy system comprehensibility analysis based on fuzzy systems’ inference maps, so-called fuzzy inference-grams (fingrams), by analogy with the scientograms used for visualizing the structure of science. Fingrams graphically show the interaction among rules at the inference level in terms of co-fired rules, i.e., rules fired at the same time by a given input. The analysis of fingrams offers many possibilities: measuring the comprehensibility of fuzzy systems, detecting redundancies and/or inconsistencies among fuzzy rules, identifying the most significant rules, etc. Some of these capabilities are explored in this study for the case of fuzzy models and classifiers.
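For illustration only (this is not the paper's exact construction or metrics), the co-firing relation that underlies a fingram can be sketched in a few lines of Python: each rule becomes a graph node, and two rules are linked whenever at least one input sample fires both of them. The toy rule base, membership functions, and function names below are all hypothetical.

```python
import itertools
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical toy rule base over two inputs in [0, 1]; each rule is a
# list of (feature_index, (a, b, c)) antecedents joined by the min t-norm.
rules = [
    [(0, (-0.5, 0.0, 0.5)), (1, (-0.5, 0.0, 0.5))],  # R1: x0 low AND x1 low
    [(0, (0.0, 0.5, 1.0)),  (1, (0.0, 0.5, 1.0))],   # R2: x0 med AND x1 med
    [(0, (0.5, 1.0, 1.5))],                          # R3: x0 high
]

def firing_strength(rule, x):
    """Degree to which input x fires the rule (min over antecedents)."""
    return min(trimf(x[f], *mf) for f, mf in rule)

def cofiring_graph(rules, X, threshold=0.0):
    """Weight each rule pair by the number of samples that fire both rules."""
    fired = np.array([[firing_strength(r, x) > threshold for r in rules]
                      for x in X])
    return {(i, j): int(np.sum(fired[:, i] & fired[:, j]))
            for i, j in itertools.combinations(range(len(rules)), 2)
            if np.any(fired[:, i] & fired[:, j])}

X = np.random.default_rng(0).random((200, 2))  # synthetic input samples
print(cofiring_graph(rules, X))  # edge weights = co-firing sample counts
```

In an actual fingram, the resulting weighted graph would then be laid out and pruned with network-visualization techniques, analogous to the scientograms the abstract mentions.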
