Systems and Control Letters

A learning result for continuous-time recurrent neural networks



Abstract

The following learning problem is considered, for continuous-time recurrent neural networks having sigmoidal activation functions. Given a "black box" representing an unknown system, measurements of output derivatives are collected, for a set of randomly generated inputs, and a network is used to approximate the observed behavior. It is shown that the number of inputs needed for reliable generalization (the sample complexity of the learning problem) is upper bounded by an expression that grows polynomially with the dimension of the network and logarithmically with the number of output derivatives being matched. (C) 1998 Elsevier Science B.V. All rights reserved. [References: 21]
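The setup described in the abstract can be illustrated concretely. Below is a minimal, purely illustrative Python sketch (not the paper's construction), assuming the standard continuous-time recurrent network form dx/dt = sigma(Ax + Bu), y = Cx with a sigmoidal activation sigma: an unknown "black box" network is simulated, and for each randomly generated input the first few output derivatives are recorded, i.e. the measurements a learner would then try to match. The dimensions, the Euler integration, and the finite-difference derivative estimates are all assumptions made for this sketch.

```python
# Illustrative sketch of the data-collection step: simulate an unknown
# continuous-time recurrent network (the "black box") and record output
# derivatives for a set of randomly generated inputs. Assumed model form:
#   dx/dt = sigma(A x + B u),   y = C x.
import numpy as np

def sigma(z):
    """Sigmoidal (logistic) activation."""
    return 1.0 / (1.0 + np.exp(-z))

def simulate_outputs(A, B, C, u, x0, dt=1e-3, T=1.0):
    """Integrate dx/dt = sigma(A x + B u) with Euler steps; return sampled outputs y = C x."""
    x = x0.copy()
    ys = []
    for _ in range(int(T / dt)):
        x = x + dt * sigma(A @ x + B @ u)
        ys.append(C @ x)
    return np.array(ys), dt

def output_derivatives(ys, dt, k=3):
    """Crude finite-difference estimate of the first k output derivatives at the final time."""
    derivs = []
    y = ys
    for _ in range(k):
        y = np.diff(y, axis=0) / dt
        derivs.append(y[-1])
    return derivs

rng = np.random.default_rng(0)
n, m, p = 5, 2, 1          # state, input, output dimensions (illustrative choices)
A = rng.standard_normal((n, n))   # unknown "black box" parameters
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
x0 = np.zeros(n)

# Collect (input, output-derivative) measurements for randomly generated inputs;
# these are the samples a learning network would be asked to reproduce.
samples = []
for _ in range(10):
    u = rng.standard_normal(m)
    ys, dt = simulate_outputs(A, B, C, u, x0)
    samples.append((u, output_derivatives(ys, dt, k=3)))
print(f"collected {len(samples)} (input, derivative) samples")
```

The paper's result concerns how many such random inputs suffice for reliable generalization: the bound grows polynomially in the network dimension (n above) and only logarithmically in the number of output derivatives matched (k above).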


