Fuzzy sets and systems

Convergence analysis of the batch gradient-based neuro-fuzzy learning algorithm with smoothing L-1/2 regularization for the first-order Takagi-Sugeno system



Abstract

It has been proven that Takagi-Sugeno systems are universal approximators, and they are widely applied to classification and regression problems. The main challenges for these models are convergence analysis and computational complexity, which stem from the large number of connections and the need to prune unnecessary parameters. Neuro-fuzzy learning therefore involves two tasks: generating comparably sparse networks and training the parameters. Regularization methods have attracted increasing attention for network pruning; in particular, the L-q (0 < q < 1) regularizer, which followed L-1 regularization, can obtain better solutions to sparsity problems. The L-1/2 regularizer has a strong sparsity-inducing capacity and is representative of the L-q (0 < q < 1) regularizers. However, the nonsmoothness of the L-1/2 regularizer may cause oscillations during the learning process. In this study, we propose a gradient-based neuro-fuzzy learning algorithm with smoothing L-1/2 regularization for the first-order Takagi-Sugeno fuzzy inference system. The proposed approach has three advantages: (i) it improves on the original L-1/2 regularizer by eliminating the oscillation of the gradient of the cost function during training; (ii) it prunes inactive connections more effectively, removing more redundant connections than the original L-1/2 regularizer, while learning the structure and parameters simultaneously; and (iii) it admits a theoretical convergence analysis, which is our explicit focus. We also provide a series of simulations demonstrating that smoothing L-1/2 regularization can often obtain more compact representations than the current L-1/2 regularization. (C) 2016 Elsevier B.V. All rights reserved.
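The abstract does not reproduce the paper's smoothing function, but the mechanism it describes can be sketched. The exact L-1/2 penalty lam * sum_i |w_i|^(1/2) has a gradient that blows up like |w|^(-1/2) as a weight approaches zero, which is the source of the oscillations; a common remedy in the smoothing-L-1/2 literature is to replace |w| by a polynomial approximation near zero. The minimal NumPy sketch below illustrates this idea under our own assumptions: the piecewise-polynomial smooth_abs, the smoothing radius a, and the toy least-squares loss are illustrative choices, not the paper's definitions.

```python
import numpy as np

def smooth_abs(w, a=0.1):
    """Piecewise-polynomial smoothing of |w| on (-a, a); matches |w|
    in value and first derivative at w = +/- a (illustrative choice)."""
    inner = -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8
    return np.where(np.abs(w) >= a, np.abs(w), inner)

def smooth_abs_grad(w, a=0.1):
    """Derivative of smooth_abs: sign(w) for |w| >= a, smooth cubic inside."""
    inner = -w**3 / (2 * a**3) + 3 * w / (2 * a)
    return np.where(np.abs(w) >= a, np.sign(w), inner)

def smoothed_l_half(w, lam, a=0.1):
    """Smoothed L-1/2 penalty: lam * sum_i smooth_abs(w_i)**(1/2).
    Since smooth_abs(0) = 3a/8 > 0, the square root is differentiable
    everywhere and the gradient below stays bounded near zero."""
    return lam * np.sum(np.sqrt(smooth_abs(w, a)))

def smoothed_l_half_grad(w, lam, a=0.1):
    """Gradient of the smoothed penalty; the exact L-1/2 gradient
    diverges at w = 0, which is what drives the oscillations."""
    return lam * smooth_abs_grad(w, a) / (2.0 * np.sqrt(smooth_abs(w, a)))

# Batch gradient descent on a regularized cost E(w) = loss(w) + penalty(w),
# shown with a toy least-squares data term (X, y are placeholders).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 5)), rng.normal(size=50)
w, eta, lam = np.zeros(5), 0.05, 0.1
for _ in range(200):
    grad_loss = X.T @ (X @ w - y) / len(y)   # gradient of the data term
    w -= eta * (grad_loss + smoothed_l_half_grad(w, lam))
print(w)  # weakly supported weights are driven toward zero (pruning)
```

In the paper's setting the data term would be the batch error of the first-order Takagi-Sugeno network rather than this toy loss, but the update has the same shape: one full-batch gradient step on the error plus the bounded gradient of the smoothed penalty, which is what makes the convergence analysis tractable.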
