Conference: World Multiconference on Systems, Cybernetics and Informatics

Training of Recurrent ANN with Time Decreased Activation by GA to the Forecast in Dynamic Problems



Abstract

In this paper, we present an evolution of the recurrent ANN (RANN) that enforces the persistence of activations within the neurons, creating activation contexts that generate correct outputs through time. With this new approach we want to store more information in the neurons' connections. To do so, the representation of a connection goes from a single value to a function that generates the neuron's output. The training process for this type of ANN has to calculate the gradient that defines this function. To train this RANN we developed a GA-based system that finds the best gradient set to solve each problem.
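
The abstract only outlines the approach. As a rough illustration of the idea (not the authors' implementation), the Python sketch below builds a tiny recurrent network whose connections carry an activation trace that decreases over time according to a per-connection slope, and uses a plain GA to search the weights and slopes that minimize one-step forecast error. The network size, the linear decay form, the readout, and the GA operators are all assumptions made for the example.

# Minimal sketch of a recurrent net with time-decreased activation,
# trained by a simple GA. Illustrative only; all design choices here
# (sizes, decay form, GA operators) are assumptions, not the paper's.

import math
import random

random.seed(0)

N_HID = 4                    # assumed number of hidden units
GENES = N_HID * 3            # per unit: input weight, recurrent weight, decay slope


def forecast_error(genome, series):
    """Run the sketch RANN over a series; return mean squared one-step error."""
    w_in = genome[0:N_HID]
    w_rec = genome[N_HID:2 * N_HID]
    slope = [abs(s) for s in genome[2 * N_HID:3 * N_HID]]   # decay slopes >= 0
    trace = [0.0] * N_HID                                    # persistent activations
    err, n = 0.0, 0
    for t in range(len(series) - 1):
        x = series[t]
        new_trace = []
        for i in range(N_HID):
            # activation persists but is decreased over time by its slope
            sign = 1.0 if trace[i] >= 0 else -1.0
            decayed = max(0.0, abs(trace[i]) - slope[i]) * sign
            new_trace.append(math.tanh(w_in[i] * x + w_rec[i] * decayed))
        trace = new_trace
        y = sum(trace) / N_HID                               # linear readout (assumed)
        err += (y - series[t + 1]) ** 2
        n += 1
    return err / n


def ga(series, pop_size=40, generations=60):
    """Plain generational GA: truncation selection, blend crossover, Gaussian mutation."""
    pop = [[random.uniform(-1, 1) for _ in range(GENES)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda g: forecast_error(g, series))
        survivors = scored[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            children.append([(x + y) / 2 + random.gauss(0, 0.1) for x, y in zip(a, b)])
        pop = survivors + children
    return min(pop, key=lambda g: forecast_error(g, series))


if __name__ == "__main__":
    data = [math.sin(0.3 * t) for t in range(120)]           # toy dynamic series
    best = ga(data)
    print("best MSE:", forecast_error(best, data))

Running the script evolves the decay slopes together with the weights on a toy sine series and prints the resulting forecast error; in the paper the GA plays this role of finding the gradient set for each problem.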
