International Symposium on Neural Networks

On the Universal Approximation Theorem of Fuzzy Neural Networks with Random Membership Function Parameters



Abstract

Lowe proposed that the kernel parameters of a radial basis function (RBF) neural network may first be fixed, and the weights of the output layer can then be determined by pseudo-inverse. Jang, Sun, and Mizutani (p. 342) pointed out that this type of two-step training method can also be used for fuzzy neural networks (FNNs). Through extensive computer simulations, we demonstrated that an FNN with randomly fixed membership function parameters (FNN-RM) trains faster and generalizes better than the classical FNN. To provide a theoretical basis for the FNN-RM, we present in this paper an intuitive proof of the universal approximation ability of the FNN-RM, based on the orthogonal set theory proposed by Kaminski and Strumillo for RBF neural networks.
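The two-step scheme the abstract describes (randomly fix the hidden-layer parameters, then solve the output weights by pseudo-inverse) can be illustrated with a minimal sketch. This is not the authors' implementation; it uses Gaussian basis functions on a toy regression problem, with all names and parameter ranges chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: approximate f(x) = sin(x) on [0, 2*pi].
X = rng.uniform(0.0, 2 * np.pi, size=(200, 1))
y = np.sin(X).ravel()

# Step 1: randomly fix the Gaussian membership/basis function
# parameters (centers c_j and widths s_j are drawn once, never trained).
n_hidden = 30
centers = rng.uniform(0.0, 2 * np.pi, size=(n_hidden, 1))
widths = rng.uniform(0.5, 1.5, size=n_hidden)

def hidden_matrix(X):
    # H[i, j] = exp(-||x_i - c_j||^2 / (2 * s_j^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * widths ** 2))

# Step 2: solve the output-layer weights in closed form by pseudo-inverse.
H = hidden_matrix(X)
w = np.linalg.pinv(H) @ y

# Evaluate the fit on fresh points.
X_test = np.linspace(0.0, 2 * np.pi, 100).reshape(-1, 1)
pred = hidden_matrix(X_test) @ w
rmse = np.sqrt(np.mean((pred - np.sin(X_test).ravel()) ** 2))
print(rmse)
```

Because the hidden-layer parameters are fixed, training reduces to one linear least-squares solve, which is why this family of methods trains much faster than gradient descent over all parameters.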
