IEEE Workshop on Neural Networks for Signal Processing

Generalization performance of regularized neural network models



Abstract

Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization normally improves the generalization performance by restricting the model complexity. A formula for the optimal weight decay regularizer is derived. A regularized model may be characterized by an effective number of weights (parameters); however, it is demonstrated that no simple definition is possible. A novel estimator of the average generalization error (called FPER) is suggested and compared to the final prediction error (FPE) and generalized prediction error (GPE) estimators. In addition, comparative numerical studies demonstrate the qualities of the suggested estimator.
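The abstract's central ideas can be illustrated on a simple model. The following sketch uses ridge regression (weight decay on a linear model, not the paper's full neural network setting) to show how a weight-decay regularizer reduces the effective number of parameters, here taken as the trace of the hat matrix, and how that count feeds into an Akaike-style final prediction error (FPE) estimate. The data, the `alpha` grid, and the helper names are illustrative assumptions; the paper's FPER estimator itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: N samples, p parameters (illustrative only).
N, p = 100, 5
X = rng.normal(size=(N, p))
w_true = rng.normal(size=p)
y = X @ w_true + 0.1 * rng.normal(size=N)

def ridge_fit(X, y, alpha):
    """Weight-decay solution: minimizes ||y - X w||^2 + alpha * ||w||^2."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)

def effective_params(X, alpha):
    """Effective number of parameters: trace of the hat (smoother) matrix."""
    H = X @ np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T)
    return np.trace(H)

for alpha in (0.0, 1.0, 10.0):
    w = ridge_fit(X, y, alpha)
    mse = np.mean((y - X @ w) ** 2)
    p_eff = effective_params(X, alpha)
    # Akaike's FPE with the raw parameter count replaced by p_eff;
    # larger alpha -> smaller p_eff -> smaller complexity penalty.
    fpe = mse * (N + p_eff) / (N - p_eff)
    print(f"alpha={alpha:5.1f}  p_eff={p_eff:5.3f}  train MSE={mse:.4f}  FPE={fpe:.4f}")
```

With `alpha = 0` the hat-matrix trace equals the raw parameter count `p`, and FPE reduces to Akaike's classical estimator; increasing `alpha` shrinks the effective count, which is one simple way to characterize a regularized model's complexity.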
