Optimization of Neural Network Training with ELM Based on the Iterative Hybridization of Differential Evolution with Local Search and Restarts

International Conference on Machine Learning, Optimization, and Data Science
Abstract

An Extreme Learning Machine (ELM) trains a single-layer feedforward neural network (SLFN) in less time than the back-propagation algorithm. An ELM assigns random values to the input weights and hidden-layer biases, and then computes the output weights analytically. These random values, however, can significantly degrade SLFN performance. The present work adapts three large-scale continuous optimization algorithms (IHDELS, DECC-G, and MOS) and compares their performance against each other and against the state-of-the-art method, M-ELM, a memetic algorithm based on differential evolution. The comparison shows that IHDELS with a holdout (training/testing) validation model obtains the best results, followed by DECC-G and MOS; all three algorithms outperform M-ELM. The experimentation was carried out on 38 classification problems widely used in the literature, and the results are supported by Friedman and Wilcoxon nonparametric statistical tests.
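The basic ELM training recipe mentioned in the abstract (random hidden-layer parameters, analytic output weights via the Moore-Penrose pseudoinverse) can be sketched as follows. This is a minimal illustration of the standard ELM procedure, not the authors' implementation; function names and the sigmoid activation are illustrative choices.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=None):
    """Basic ELM training for an SLFN: random input weights and biases,
    output weights computed analytically with the pseudoinverse."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))  # random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                   # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ T                             # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass of the trained SLFN."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

The optimization algorithms compared in the paper replace the purely random choice of `W` and `b` with an evolutionary search over those values, which is exactly the source of the performance gap the abstract describes.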
