Conference: Biological and Artificial Computation: From Neuroscience to Technology

A Non-convergent On-Line Training Algorithm for Neural Networks

Abstract

Stopped training is a method to avoid over-fitting of neural network models by preventing an iterative optimization method from reaching a local minimum of the objective function. It is motivated by the observation that over-fitting occurs gradually as training progresses. The stopping time is typically determined by monitoring the expected generalization performance of the model, as approximated by the error on a validation set. In this paper we propose to use an analytic estimate for this purpose. However, such estimates require knowledge of the analytic form of the objective function used for training the network, and they are only applicable when the weights correspond to a local minimum of this objective function. For this reason, we propose the use of an auxiliary, regularized objective function. The algorithm is "self-contained" and does not require splitting the data into a training set and a separate validation set.
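The abstract contrasts the proposed self-contained criterion with conventional validation-based stopped training. For reference, below is a minimal sketch of that conventional scheme in Python/NumPy on a toy linear model; all names, data, and hyperparameters are illustrative, not from the paper. On-line gradient steps are taken on a training split, and training stops once the error on a held-out validation split stops improving. The paper's algorithm instead replaces the validation error with an analytic generalization estimate evaluated at weights of an auxiliary regularized objective; since the abstract does not give the estimate's form, only the conventional baseline is sketched here.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: noisy linear target, split into training and validation sets.
    X = rng.normal(size=(200, 10))
    y = X @ rng.normal(size=10) + 0.5 * rng.normal(size=200)
    X_tr, y_tr = X[:150], y[:150]
    X_va, y_va = X[150:], y[150:]

    def val_mse(w):
        # Validation error approximates expected generalization performance.
        return np.mean((X_va @ w - y_va) ** 2)

    w = np.zeros(10)
    best_w, best_val = w.copy(), np.inf
    patience, since_best = 20, 0
    for step in range(5000):
        i = rng.integers(len(y_tr))                    # on-line (stochastic) step
        w -= 0.01 * (X_tr[i] @ w - y_tr[i]) * X_tr[i]  # gradient of squared error
        v = val_mse(w)
        if v < best_val:
            best_w, best_val, since_best = w.copy(), v, 0
        else:
            since_best += 1
            if since_best >= patience:                 # stop before convergence
                break
    print(f"stopped at step {step}; best validation MSE {best_val:.4f}")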
