
Improving Generalization Capabilities of Dynamic Neural Networks



Abstract

This work addresses the problem of improving the generalization capabilities of continuous recurrent neural networks. The learning task is transformed into an optimal control framework in which the weights and the initial network state are treated as unknown controls. A new learning algorithm based on a variational formulation of Pontryagin's maximum principle is proposed, and its convergence is discussed under reasonable assumptions. Numerical examples demonstrate a substantial improvement in the generalization capabilities of a dynamic network after the learning process.
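The abstract's central idea, treating the weights and the initial state as controls and deriving gradients from the maximum principle's adjoint (costate) equations, can be illustrated with a minimal sketch. The dynamics dx/dt = -x + W·tanh(x), the terminal cost, and the Euler discretization below are assumptions for illustration, not the authors' actual model or algorithm:

```python
import numpy as np

def forward(W, x0, T=1.0, steps=100):
    """Integrate the continuous RNN  dx/dt = -x + W @ tanh(x)  with explicit Euler."""
    h = T / steps
    xs = [x0]
    x = x0
    for _ in range(steps):
        x = x + h * (-x + W @ np.tanh(x))
        xs.append(x)
    return xs, h

def loss_and_grads(W, x0, target, T=1.0, steps=100):
    """Terminal cost J = 0.5*||x(T) - target||^2.
    Returns J plus its gradients w.r.t. the controls W and x0,
    computed by stepping the discrete adjoint equation backward."""
    xs, h = forward(W, x0, T, steps)
    xT = xs[-1]
    J = 0.5 * np.sum((xT - target) ** 2)
    p = xT - target                        # terminal condition p(T) = dJ/dx(T)
    gW = np.zeros_like(W)
    for k in range(steps - 1, -1, -1):
        phi = np.tanh(xs[k])
        gW += h * np.outer(p, phi)         # accumulate dH/dW along the trajectory
        s = 1.0 - phi ** 2                 # tanh'(x_k)
        p = p + h * (-p + s * (W.T @ p))   # backward adjoint step
    return J, gW, p                        # p is now dJ/dx0

# Sanity check: adjoint gradient vs. finite differences on the weights
rng = np.random.default_rng(0)
n = 3
W = 0.5 * rng.standard_normal((n, n))
x0 = rng.standard_normal(n)
target = rng.standard_normal(n)
J, gW, gx0 = loss_and_grads(W, x0, target)
eps = 1e-6
num = np.zeros_like(W)
for i in range(n):
    for j in range(n):
        Wp = W.copy()
        Wp[i, j] += eps
        num[i, j] = (loss_and_grads(Wp, x0, target)[0] - J) / eps
print(np.max(np.abs(num - gW)))  # small: adjoint gradient matches finite differences
```

With these gradients, both W and x0 can be updated by any descent method; making the initial state a control, as the abstract describes, simply means also applying the gradient dJ/dx0 returned above.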

Bibliographic Record

  • Source: Neural Computation, 2004, No. 6, pp. 1253-1282 (30 pages)
  • Author Affiliations:

    Institute of Medical Statistics, Computer Sciences and Documentation, Friedrich Schiller University, Jena, Germany;

    Institute of Medical Statistics, Computer Sciences and Documentation, Friedrich Schiller University, Jena, Germany;

    Department of Pediatric Orthopedics, Karl-Franzens-University, Graz, Austria;

    Institute of Medical Statistics, Computer Sciences and Documentation, Friedrich Schiller University, Jena, Germany;

  • Indexed in: Science Citation Index (SCI); Chemical Abstracts (CA)
  • Format: PDF
  • Language: English
  • Classification: Artificial intelligence theory
