Applied Mathematics Letters

A remark on the error-backpropagation learning algorithm for spiking neural networks


Abstract

In the error-backpropagation learning algorithm for spiking neural networks, one has to differentiate the firing time t^α as a functional of the state function x(t). This differentiation cannot be performed directly, since t^α admits no standard formulation as a functional of x(t). To overcome this difficulty, Bohte et al. (2002) [1] assume a linear relationship between the firing time t^α and the state x(t) around t = t^α. Under this assumption, the Fréchet derivative of the functional equals the derivative of an ordinary function, which can be computed directly and easily. Our contribution in this short note is to prove that this equality of derivatives is mathematically correct without recourse to the linearity assumption.
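For orientation, a minimal sketch of the relation at issue, with notation assumed here (it follows the usual SpikeProp setup of [1] and is not quoted from this note): the firing time t^α is defined implicitly by the first threshold crossing of the state function, and the linearity assumption reduces the functional derivative to the reciprocal of an ordinary time derivative.

% Assumed notation: x(t) is the state (membrane potential), \vartheta the
% firing threshold; the firing time t^\alpha is defined implicitly by the
% first threshold crossing  x(t^\alpha) = \vartheta.
\[
  \frac{\partial t^\alpha}{\partial x(t)}\bigg|_{t = t^\alpha}
  \;=\;
  -\,\frac{1}{\dfrac{\partial x(t)}{\partial t}\bigg|_{t = t^\alpha}}
\]
% Bohte et al. obtain this by assuming x(t) is linear near t^\alpha; the
% note proves the same identity without that assumption.

Intuitively this is the implicit-function relation for a fixed threshold crossing: a small perturbation δx of the state near t^α shifts the crossing time by δt ≈ -δx / x'(t^α), so the faster the state rises through the threshold, the less the firing time moves.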