A new feedforward neural network hidden layer neuron pruning algorithm

Abstract

This paper presents a new approach to determining the structure (i.e. the number of hidden units) of a feedforward neural network (FNN). The approach rests on the principle that any FNN can be represented by a Volterra series, that is, as a nonlinear input-output model. The proposed algorithm proceeds in three steps: first, we expand the nonlinear activation function of the hidden-layer neurons in a Taylor series; second, we express the neural network output as a NARX (nonlinear autoregressive with exogenous input) model; finally, by applying the nonlinear order-selection algorithm proposed by Kortmann and Unbehauen (1988), we select the most relevant signals of the resulting NARX model. Starting from the output layer, this pruning procedure is performed on each node in each layer. Using this algorithm together with standard backpropagation (SBP) over various initial conditions, we perform Monte Carlo experiments that yield a drastic reduction in the number of nonsignificant hidden-layer neurons.
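The three steps above can be illustrated with a minimal numerical sketch. This is not the authors' implementation: it assumes a single-hidden-layer network with tanh activations, uses a third-order Taylor expansion of tanh around zero to make each hidden unit's contribution polynomial in the inputs, and substitutes a simple variance-based relevance ranking for the Kortmann-Unbehauen order-selection criterion. All sizes, weights, and the pruning threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-hidden-layer FNN: y = W2 . tanh(W1 @ x + b1)
n_in, n_hidden = 3, 8
W1 = rng.normal(size=(n_hidden, n_in))
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=n_hidden)
# Make two hidden units nearly irrelevant (tiny output weights).
W2[[2, 5]] *= 1e-3

# Step 1: third-order Taylor expansion of tanh about 0: tanh(z) ~ z - z^3/3.
# Step 2: each hidden unit j then contributes W2[j]*(z_j - z_j^3/3) to the
# output, i.e. polynomial (NARX-like) terms in the inputs whose coefficients
# depend on W1[j], b1[j], W2[j].
X = rng.normal(size=(500, n_in))   # probe inputs
Z = X @ W1.T + b1                  # pre-activations, shape (500, n_hidden)
contrib = W2 * (Z - Z**3 / 3)      # per-unit Taylor-approximated contribution

# Step 3 (stand-in for the order-selection criterion): rank hidden units by
# the variance of their contribution and prune those far below the leader.
relevance = contrib.var(axis=0)
order = np.argsort(relevance)[::-1]
keep = relevance > 1e-4 * relevance.max()
print("relevance ranking:", order)
print("units kept:", np.flatnonzero(keep))
```

In this toy setting the two units whose output weights were shrunk fall orders of magnitude below the threshold and are flagged for pruning, mirroring how the real algorithm discards hidden neurons whose NARX terms the order-selection step deems nonsignificant.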

