Neurocomputing

A gradient aggregate asymptotical smoothing algorithm for training max-min fuzzy neural networks



Abstract

A gradient aggregate asymptotical smoothing algorithm is proposed for training fuzzy neural networks (FNNs), developing the smoothing algorithm (SA) described in Li et al. (2017). By introducing an asymptotical-approximation framework into the algorithm, the error function of max-min FNNs can be approximated to any desired accuracy by an aggregate smoothing function with a variable precision parameter. The algorithm minimizes the resulting sequence of approximate functions by steepest descent, thereby solving the nondifferentiable max-min optimization problem posed by max-min FNNs. The proposed update rule for the precision parameter reconciles the conflict between high-accuracy approximation and numerical ill-conditioning. Under Armijo line search, the algorithm is globally convergent. Simulation results on three artificial examples and a real-world fault-diagnosis problem show that, compared with SA, the proposed algorithm handles numerical oscillations efficiently and achieves better performance. (C) 2019 Elsevier B.V. All rights reserved.
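
The abstract does not state the smoothing function explicitly; a standard choice matching its description is the aggregate (log-sum-exp) smoothing of max and min, whose approximation error shrinks as the precision parameter p grows. The sketch below is a hypothetical illustration for a single max-min neuron, y = max_j min(w_j, x_j): it uses a finite-difference gradient in place of the paper's analytic one, and all names, constants, and the precision-update schedule are assumptions, not the authors' method.

```python
import numpy as np

def smooth_max(z, p):
    """Aggregate smoothing of max(z): (1/p) * log(sum(exp(p*z))).
    The error is bounded by log(len(z))/p, so it vanishes as p grows."""
    zmax = np.max(z)  # shift for numerical stability at large p
    return zmax + np.log(np.sum(np.exp(p * (z - zmax)))) / p

def smooth_min(z, p):
    """Smooth approximation of min(z) via -max(-z)."""
    return -smooth_max(-z, p)

def smoothed_output(w, x, p):
    """Smoothed max-min composition: max_j min(w_j, x_j)."""
    inner = np.array([smooth_min(np.array([wj, xj]), p)
                      for wj, xj in zip(w, x)])
    return smooth_max(inner, p)

def error(w, x, t, p):
    """Squared error of the smoothed output against target t."""
    return 0.5 * (smoothed_output(w, x, p) - t) ** 2

def grad(w, x, t, p, eps=1e-6):
    """Central finite-difference gradient (stand-in for the analytic one)."""
    g = np.zeros_like(w)
    for j in range(len(w)):
        e = np.zeros_like(w)
        e[j] = eps
        g[j] = (error(w + e, x, t, p) - error(w - e, x, t, p)) / (2 * eps)
    return g

def armijo_step(w, d, g, x, t, p, beta=0.5, sigma=1e-4, max_backtracks=30):
    """Backtracking Armijo line search along descent direction d = -g."""
    alpha, e0, slope = 1.0, error(w, x, t, p), g @ d  # slope < 0 for descent
    for _ in range(max_backtracks):
        if error(w + alpha * d, x, t, p) <= e0 + sigma * alpha * slope:
            break
        alpha *= beta
    return alpha

# Outer loop: minimize a sequence of smoothed problems with growing p,
# so the approximation tightens without starting at an ill-conditioned p.
w = np.array([0.2, 0.9, 0.5])   # fuzzy weights
x = np.array([0.7, 0.4, 0.6])   # input
t = 0.5                         # target output
p = 10.0                        # initial precision parameter
for outer in range(5):
    for _ in range(50):         # inner steepest-descent iterations
        g = grad(w, x, t, p)
        if np.linalg.norm(g) < 1e-6:
            break
        d = -g
        w += armijo_step(w, d, g, x, t, p) * d
        w = np.clip(w, 0.0, 1.0)  # keep fuzzy weights in [0, 1]
    p *= 2.0                    # assumed schedule: tighten the approximation
print(w, smoothed_output(w, x, p))
```

Increasing p only between outer iterations mirrors the trade-off the abstract describes: a large fixed p would approximate the max-min error function closely but make the smoothed problem ill-conditioned from the start, while a gradually tightened p keeps each inner minimization well behaved.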


