IEEE Transactions on Neural Networks

Convergent on-line algorithms for supervised learning in neural networks

Abstract

We define online algorithms for neural network training based on the construction of multiple copies of the network, each trained on a different data block. It is shown that suitable training algorithms can be defined such that the disagreement between the different copies of the network is asymptotically reduced, and convergence toward stationary points of the global error function can be guaranteed. Relevant features of the proposed approach are that the learning rate need not be forced to zero and that real-time learning is permitted.
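The scheme described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual algorithm: for illustration only, it assumes linear models trained by plain gradient steps on noiseless data blocks, with a simple averaging step standing in for the mechanism that reduces disagreement between copies.

```python
import numpy as np

# Hypothetical sketch: K copies of a linear model are each updated on
# their own data block, then pulled toward the mean of all copies so
# that the disagreement between copies shrinks over time.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])  # target weights generating the data

K = 4        # number of model copies
eta = 0.05   # fixed learning rate (not forced to decay to zero)
copies = [rng.normal(size=2) for _ in range(K)]

for epoch in range(200):
    for k in range(K):
        # each copy sees a different data block
        X = rng.normal(size=(16, 2))
        y = X @ w_true
        grad = 2.0 * X.T @ (X @ copies[k] - y) / len(y)
        copies[k] = copies[k] - eta * grad
    # consensus step: move every copy halfway toward the current mean
    mean_w = np.mean(copies, axis=0)
    copies = [0.5 * (w + mean_w) for w in copies]

disagreement = max(np.linalg.norm(w - mean_w) for w in copies)
```

After enough epochs, `disagreement` is driven toward zero while the shared mean approaches the minimizer of the global error, mirroring the two properties the abstract claims: vanishing disagreement between copies and convergence toward stationary points, without annealing the learning rate.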
