Journal: IEEE Transactions on Neural Networks

Convergence and Objective Functions of Some Fault/Noise-Injection-Based Online Learning Algorithms for RBF Networks


Abstract

In the last two decades, many online fault/noise-injection algorithms have been developed to attain a fault-tolerant neural network. However, little theoretical work on their convergence and objective functions has been reported. This paper studies six common fault/noise-injection-based online learning algorithms for radial basis function (RBF) networks, namely 1) injecting additive input noise, 2) injecting additive/multiplicative weight noise, 3) injecting multiplicative node noise, 4) injecting multiweight fault (random disconnection of weights), 5) injecting multinode fault during training, and 6) weight decay with injecting multinode fault. Based on the Gladyshev theorem, we show that the convergence of these six online algorithms is almost sure. Moreover, the true objective functions they minimize are derived. For injecting additive input noise during training, the objective function is identical to that of the Tikhonov regularizer approach. For injecting additive/multiplicative weight noise during training, the objective function is the simple mean square training error. Thus, injecting additive/multiplicative weight noise during training cannot improve the fault tolerance of an RBF network. Similar to injecting additive input noise, the objective functions of the other fault/noise-injection-based online algorithms contain a mean square error term and a specialized regularization term.
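The first two algorithms in the abstract can be sketched as one-sample LMS updates for an RBF network with noise injected at the input or at the weights. This is a minimal illustrative sketch, not the paper's exact formulation; the function names, Gaussian basis, learning rate, and noise levels are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_features(x, centers, width):
    # Gaussian RBF hidden-layer outputs for a scalar input x.
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def online_step_input_noise(w, x, y, centers, width, lr, sigma_x):
    # Algorithm 1: perturb the input with additive noise before the
    # LMS update; per the abstract this behaves like Tikhonov regularization.
    x_noisy = x + sigma_x * rng.standard_normal()
    phi = rbf_features(x_noisy, centers, width)
    err = y - phi @ w
    return w + lr * err * phi

def online_step_weight_noise(w, x, y, centers, width, lr, sigma_w):
    # Algorithm 2: evaluate the error with multiplicative weight noise;
    # per the abstract this still minimizes the plain mean square error,
    # so it adds no fault tolerance.
    w_noisy = w * (1.0 + sigma_w * rng.standard_normal(w.shape))
    phi = rbf_features(x, centers, width)
    err = y - phi @ w_noisy
    return w + lr * err * phi

# Toy usage: fit sin(x) online with input-noise injection.
centers = np.linspace(-3.0, 3.0, 10)
w = np.zeros(10)
for _ in range(3000):
    x = rng.uniform(-3.0, 3.0)
    w = online_step_input_noise(w, x, np.sin(x), centers, 1.0, 0.1, 0.05)
```

The other four algorithms follow the same pattern, replacing the noise step with multiplicative node noise on `phi` or with random masking (disconnection) of `w` during the update.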
