US Government Science & Technology Reports

General Feed-Forward Algorithm for Gradient Descent in Connectionist Networks



Abstract

An extended feed-forward algorithm for recurrent connectionist networks is presented. The algorithm, which works locally in time, is derived both for discrete-time networks and for continuous networks. Several standard gradient descent algorithms for connectionist networks, in particular the backpropagation algorithm, are derived mathematically as special cases of the general algorithm. The learning algorithm presented in the paper is thus a superset of gradient descent learning algorithms for multilayer networks, recurrent networks, and time-delay networks, and it allows arbitrary combinations of their components. In addition, the paper presents feed-forward approximation procedures for initial activations and for external input values. The former is used to optimize the starting values of the so-called context nodes; the latter turned out to be very useful for finding spurious input attractors of a trained connectionist network. Finally, the authors compare the time, processor, and space complexities of the algorithm with those of backpropagation for an unfolded-in-time network, and present some simulation results. (Copyright (c) 1990 GMD.)
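To make the idea of a feed-forward (forward-in-time) gradient computation concrete, the following sketch propagates weight sensitivities forward through a tiny recurrent network and assembles the gradient locally in time, then checks it against a finite-difference estimate. This is an illustrative forward-mode scheme in the spirit of the abstract, not the report's general algorithm; the network shape, tanh activation, and quadratic loss on the final state are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 3, 2, 5                       # hidden units, inputs, time steps

W = rng.normal(scale=0.3, size=(n, n))  # recurrent weights (assumed small net)
U = rng.normal(scale=0.3, size=(n, m))  # input weights
xs = rng.normal(size=(T, m))            # external input sequence
target = np.zeros(n)                    # assumed target for the final state

h = np.zeros(n)
# Sensitivities carried forward in time: P[i, j, k] = d h_i / d W_jk
P = np.zeros((n, n, n))

for t in range(T):
    a = W @ h + U @ xs[t]               # pre-activation (h is still h_{t-1})
    d = 1.0 - np.tanh(a) ** 2           # tanh'(a)
    # Forward recursion: P'_ijk = d_i * (delta_ij * h_k + sum_l W_il P_ljk)
    new_P = np.einsum('i,il,ljk->ijk', d, W, P)
    for j in range(n):
        new_P[j, j, :] += d[j] * h      # direct dependence of unit j on row j of W
    P = new_P
    h = np.tanh(a)

# Gradient of a quadratic loss at the final step, no backward pass needed
err = h - target                         # dE/dh at t = T
grad_W = np.einsum('i,ijk->jk', err, P)

# Sanity check against a central finite-difference gradient
def loss(Wp):
    hp = np.zeros(n)
    for t in range(T):
        hp = np.tanh(Wp @ hp + U @ xs[t])
    return 0.5 * np.sum((hp - target) ** 2)

eps = 1e-6
num = np.zeros_like(W)
for j in range(n):
    for k in range(n):
        Wp = W.copy(); Wp[j, k] += eps
        Wm = W.copy(); Wm[j, k] -= eps
        num[j, k] = (loss(Wp) - loss(Wm)) / (2 * eps)

print(np.allclose(grad_W, num, atol=1e-5))
```

Because the sensitivities are updated alongside the activations, the gradient is available at every time step without unfolding the network in time; the price is the larger per-step state (one sensitivity entry per unit per weight), which is the kind of time/space trade-off the report compares against unfolded-in-time backpropagation.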


