Conference: Artificial neural nets and genetic algorithms

Principal components identify MLP hidden layer size for optimal generalisation performance


Abstract

One of the major concerns when implementing a supervised artificial neural network solution to a classification or prediction problem is the network's performance on unseen data. The phenomenon of the network overfitting the training data is well understood and reported in the literature. Most researchers recommend either a 'trial and error' approach to selecting the optimal number of weights for the network, which is time consuming, or starting with a large network and pruning it down to an optimal size. Current pruning techniques based on approximations of the Hessian matrix of the error surface are computationally intensive and prone to severe approximation errors if a suitably minimal training error has not been achieved.
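
This record does not reproduce the authors' procedure. Purely as an illustration of the general idea named in the title, the sketch below applies PCA (via an SVD of the centred activations) to a matrix of hidden-layer outputs and counts how many principal components are needed to capture a chosen fraction of the variance. The function name effective_hidden_size, the 95% variance threshold, and the synthetic rank-5 activation matrix are illustrative assumptions, not details taken from the paper.

# Illustrative sketch only: estimate how many hidden units an MLP effectively
# uses by applying PCA to its hidden-layer activation matrix and counting the
# components that explain most of the variance. Not the authors' exact method.
import numpy as np

def effective_hidden_size(H, variance_threshold=0.95):
    """H: (n_samples, n_hidden) matrix of hidden-layer activations.
    Returns the number of principal components needed to reach the
    given fraction of total variance (threshold chosen arbitrarily here)."""
    Hc = H - H.mean(axis=0)                    # centre each hidden unit
    s = np.linalg.svd(Hc, compute_uv=False)    # singular values = PCA spectrum
    explained = (s ** 2) / np.sum(s ** 2)      # variance ratio per component
    cumulative = np.cumsum(explained)
    return int(np.searchsorted(cumulative, variance_threshold) + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic activations from an oversized 20-unit hidden layer whose
    # outputs actually lie close to a 5-dimensional subspace.
    latent = rng.standard_normal((500, 5))
    mixing = rng.standard_normal((5, 20))
    H = latent @ mixing + 0.01 * rng.standard_normal((500, 20))
    print("suggested hidden layer size:", effective_hidden_size(H))

In this toy example the cumulative variance saturates after about five components, suggesting the 20-unit hidden layer could be reduced to roughly that size.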
