International Conference on Artificial Intelligence, Automation and Control Technologies

Two-Stage Backward Elimination Method for Neural Networks Model Reduction

Abstract

Single-hidden-layer neural networks (NNs) have been widely used for complex system identification. However, the number of hidden neurons is often determined by trial and error and tends to be large, which commonly leads to over-fitting and makes the training process time-consuming. In this paper, we propose a two-stage backward elimination (TSBE) method to obtain a parsimonious network with fewer hidden neurons that retains good performance while saving training time. In the first stage, a neural network with a predetermined number of hidden neurons is trained on part of the training data using the stochastic gradient descent (SGD) algorithm, and the least absolute shrinkage and selection operator (Lasso) is applied to drop redundant neurons, yielding a simplified neural model. In the second stage, the remaining training data are used to update the parameters of the simplified model. A simulation example validates the approach and shows that it produces a more compact model with higher accuracy than a recently proposed pruning-based method.
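The abstract only sketches the procedure, so a minimal NumPy/scikit-learn illustration of the two-stage idea may help. This is a sketch under stated assumptions, not the authors' implementation: the synthetic data, the oversized hidden layer (H = 30), the learning rate, the epoch counts, and the Lasso penalty alpha are all illustrative choices, and pruning here is done by removing neurons whose Lasso-estimated output weights are exactly zero.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Illustrative sketch of the two-stage backward elimination (TSBE) idea.
# All hyperparameters and the synthetic data are assumptions for the demo.

rng = np.random.default_rng(0)

# Synthetic identification data: y = f(x) + noise.
X = rng.uniform(-1.0, 1.0, size=(400, 2))
y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(400)

# Split: first half for stage 1, second half for stage-2 fine-tuning.
X1, y1, X2, y2 = X[:200], y[:200], X[200:], y[200:]

H = 30                                   # deliberately oversized hidden layer
W = 0.5 * rng.standard_normal((2, H))    # input-to-hidden weights
b = np.zeros(H)                          # hidden biases
v = 0.1 * rng.standard_normal(H)         # hidden-to-output weights
c = 0.0                                  # output bias

def hidden(Xb):
    """Hidden-layer activations for a batch of inputs."""
    return np.tanh(Xb @ W + b)

def train_sgd(Xd, yd, W, b, v, c, epochs=200, lr=0.05):
    """Per-sample SGD on squared error for the single-hidden-layer net."""
    for _ in range(epochs):
        for i in rng.permutation(len(Xd)):
            h = np.tanh(Xd[i] @ W + b)
            err = float(h @ v + c - yd[i])
            g_h = err * v * (1.0 - h ** 2)      # backprop through tanh
            W -= lr * np.outer(Xd[i], g_h)
            b -= lr * g_h
            v -= lr * err * h
            c -= lr * err
    return W, b, v, c

# Stage 1a: SGD training of the full network on the first part of the data.
W, b, v, c = train_sgd(X1, y1, W, b, v, c)

# Stage 1b: Lasso re-estimates the output weights from the hidden
# activations with an L1 penalty; neurons whose coefficient shrinks to
# exactly zero are treated as redundant and removed.
lasso = Lasso(alpha=1e-3).fit(hidden(X1), y1)
keep = np.flatnonzero(lasso.coef_)
W, b = W[:, keep], b[keep]
v, c = lasso.coef_[keep], float(lasso.intercept_)
print(f"kept {keep.size} of {H} hidden neurons")

# Stage 2: fine-tune the simplified model on the remaining data.
W, b, v, c = train_sgd(X2, y2, W, b, v, c)

mse = float(np.mean((hidden(X2) @ v + c - y2) ** 2))
print(f"simplified model MSE on stage-2 data: {mse:.4f}")
```

Because the L1 penalty drives individual output weights exactly to zero, the neurons whose coefficients vanish can be removed without changing the Lasso fit; stage 2 then refines only the surviving weights, which is where the claimed training-time saving comes from.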
