Mathematical Problems in Engineering: Theory, Methods and Applications

Pruning Multilayered ELM Using Cholesky Factorization and Givens Rotation Transformation



Abstract

The extreme learning machine (ELM) was originally proposed for training single-hidden-layer feedforward neural networks, overcoming challenges faced by the backpropagation (BP) learning algorithm and its variants. Recent studies show that ELM can be extended to multilayered feedforward neural networks, in which a hidden node may itself be a subnetwork of nodes or a combination of other hidden nodes. Although the multilayered ELM (MELM) algorithm shows stronger nonlinear expression ability and stability than single-hidden-layer ELM in both theoretical and experimental results, deepening the network structure also aggravates the parameter optimization problem: model selection takes more time and the computational complexity increases. This paper uses a Cholesky factorization strategy and the Givens rotation transformation to select the hidden nodes of MELM and thereby obtain a number of nodes better suited to the network. The initial network starts with a large number of hidden nodes, which are then pruned using the idea of ridge regression, finally yielding a complete neural network. The resulting ELM algorithm therefore eliminates the need to set the number of nodes manually and is fully automatic. By reusing information from the previous iteration's connection weight matrix, re-computation of the weight matrix during network simplification can be avoided. As in matrix factorization methods, the Cholesky factor is updated by Givens rotation transformations, achieving a fast downdate of the current connection weight matrix and thus ensuring the numerical stability and high efficiency of the pruning process.
Empirical studies on several commonly used classification benchmarks and on real datasets collected from the coal industry show that, compared with the traditional ELM algorithm, the pruning multilayered ELM algorithm proposed in this paper can find the optimal number of hidden nodes automatically and has better generalization performance.
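The core numerical step described in the abstract can be sketched in NumPy: the ridge-regularized output weights solve (HᵀH + λI)β = HᵀT via the Cholesky factor A = RᵀR, and deleting a pruned hidden node's row and column of A is done by downdating R with Givens rotations rather than refactorizing from scratch. This is a minimal illustration of the general technique, not the authors' code; the function names and test data are hypothetical.

```python
import numpy as np

def givens(a, b):
    # Rotation [[c, s], [-s, c]] maps the pair (a, b) to (r, 0).
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0.0 else (a / r, b / r)

def cholesky_downdate_delete(R, k):
    """Given the upper-triangular Cholesky factor R of A = R.T @ R,
    return the factor of A with row and column k removed, using
    Givens rotations instead of refactorizing from scratch."""
    n = R.shape[0]
    R = np.delete(R, k, axis=1)              # creates a subdiagonal from column k on
    for j in range(k, n - 1):
        c, s = givens(R[j, j], R[j + 1, j])
        G = np.array([[c, s], [-s, c]])
        R[j:j + 2, j:] = G @ R[j:j + 2, j:]  # zero the subdiagonal entry
    return R[:n - 1, :]                      # last row is now zero; drop it

def output_weights(R, H, T):
    # Ridge-regression ELM solution: (H^T H + lam*I) beta = H^T T,
    # computed by two triangular solves with the Cholesky factor.
    return np.linalg.solve(R, np.linalg.solve(R.T, H.T @ T))

rng = np.random.default_rng(0)
H = rng.standard_normal((60, 8))             # hidden-layer output matrix (toy data)
T = rng.standard_normal((60, 2))             # training targets (toy data)
lam = 0.1
A = H.T @ H + lam * np.eye(8)
R = np.linalg.cholesky(A).T                  # upper-triangular factor, A = R.T @ R

beta = output_weights(R, H, T)               # current output weights

k = 3                                        # hidden node selected for pruning
R2 = cholesky_downdate_delete(R, k)
A2 = np.delete(np.delete(A, k, 0), k, 1)
print(np.allclose(R2.T @ R2, A2))            # prints True: factor matches pruned matrix
```

Because each deletion costs only O(n²) rotations instead of the O(n³) of a fresh factorization, repeatedly pruning nodes and re-solving for β stays cheap, which is the efficiency argument the abstract makes.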
