IEEE Transactions on Fuzzy Systems

A New Gradient Descent Approach for Local Learning of Fuzzy Neural Models

Abstract

The majority of learning methods reported to date for Takagi–Sugeno–Kang (TSK) fuzzy neural models have focused mainly on improving their accuracy. However, one of the key design requirements in building an interpretable fuzzy model is that each rule consequent obtained must match the local behavior of the system well when all the rules are aggregated to produce the overall system output. This is one of the characteristics that distinguish fuzzy models from black-box models such as neural networks. Therefore, finding a desirable set of fuzzy partitions and, hence, identifying the corresponding consequent models that can be directly explained in terms of system behavior is a critical step in fuzzy neural modeling. In this paper, a new learning approach is proposed that considers both the nonlinear parameters in the rule premises and the linear parameters in the rule consequents. Unlike the conventional two-stage optimization procedure widely practiced in the field, in which the two sets of parameters are optimized separately, the consequent parameters are expressed as a set dependent on the premise parameters, thereby enabling the introduction of a new integrated gradient descent learning approach. Accordingly, a new Jacobian matrix is derived and computed efficiently, yielding a more accurate approximation of the cost function when the second-order Levenberg–Marquardt optimization method is used. Several other interpretability issues regarding the fuzzy neural model are also discussed and integrated into this new learning approach. Numerical examples are presented to illustrate the resulting structure of the fuzzy neural models and the effectiveness of the proposed algorithm, and the results are compared with those from several well-known methods.
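To make the idea concrete, below is a minimal numerical sketch (Python with NumPy) of the kind of integrated scheme the abstract describes: the linear consequent parameters of a simple single-input TSK model are re-estimated by locally weighted least squares from the current premise parameters, so the outer Levenberg–Marquardt loop optimizes only the premise (nonlinear) parameters. All function names and the toy data are illustrative assumptions, and the Jacobian is approximated here by finite differences, whereas the paper derives it analytically; this is a sketch of the concept, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of an "integrated" TSK learning loop:
# the consequent parameters are treated as a dependent set, recomputed from the
# premise parameters at every step, so the optimizer sees premises only.
import numpy as np

def firing_strengths(x, centers, widths):
    """Normalised Gaussian rule activations for a scalar input x."""
    w = np.exp(-0.5 * ((x - centers) / widths) ** 2)
    return w / w.sum()

def fit_consequents(X, y, centers, widths):
    """Locally weighted least squares: each rule's linear consequent y_i = a_i*x + b_i
    is fitted against the data weighted by that rule's activation, which keeps the
    consequents interpretable as local linear models."""
    theta = np.zeros((len(centers), 2))          # (a_i, b_i) per rule
    for i in range(len(centers)):
        w = np.array([firing_strengths(x, centers, widths)[i] for x in X])
        Phi = np.column_stack([X, np.ones_like(X)])
        W = np.diag(w)
        theta[i] = np.linalg.solve(Phi.T @ W @ Phi + 1e-8 * np.eye(2),
                                   Phi.T @ W @ y)
    return theta

def model_output(x, centers, widths, theta):
    g = firing_strengths(x, centers, widths)
    return g @ (theta[:, 0] * x + theta[:, 1])

def residuals(p, X, y, n_rules):
    """Residual vector as a function of the premise parameters only:
    the consequents are recomputed inside, making them a dependent set."""
    centers, widths = p[:n_rules], np.abs(p[n_rules:]) + 1e-6
    theta = fit_consequents(X, y, centers, widths)
    return np.array([model_output(x, centers, widths, theta) for x in X]) - y

def levenberg_marquardt(p0, X, y, n_rules, iters=50, lam=1e-2):
    """Plain LM loop with a finite-difference Jacobian of the residuals with
    respect to the premise parameters (the paper's contribution is an
    analytical form of this Jacobian)."""
    p = p0.copy()
    for _ in range(iters):
        r = residuals(p, X, y, n_rules)
        J = np.zeros((len(r), len(p)))
        eps = 1e-6
        for j in range(len(p)):
            dp = np.zeros_like(p); dp[j] = eps
            J[:, j] = (residuals(p + dp, X, y, n_rules) - r) / eps
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), -J.T @ r)
        p_new = p + step
        if np.sum(residuals(p_new, X, y, n_rules) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.7            # accept step, relax damping
        else:
            lam *= 2.0                           # reject step, increase damping
    return p

# Toy usage: approximate y = sin(x) with 3 TSK rules on scalar data.
X = np.linspace(-3, 3, 60)
y = np.sin(X)
n_rules = 3
p0 = np.concatenate([np.linspace(-2, 2, n_rules), np.ones(n_rules)])
p = levenberg_marquardt(p0, X, y, n_rules)
```

Because the consequents are recomputed from the data at every step as weighted local fits, each rule's linear model remains tied to the system's local behavior, which is the interpretability property the abstract emphasizes; a plain two-stage scheme that freezes the premises while fitting the consequents (and vice versa) does not couple the two sets of parameters in this way.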
