IEEE Transactions on Neural Networks

Weight perturbation: an optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayer networks



Abstract

Previous work on analog VLSI implementations of multilayer perceptrons with on-chip learning has mainly targeted algorithms such as back-propagation. Although back-propagation is efficient, its implementation in analog VLSI requires excessive computational hardware. It is shown that gradient descent with direct approximation of the gradient, instead of back-propagation, is more economical for parallel analog implementations. This technique, called 'weight perturbation', is shown to be suitable for multilayer recurrent networks as well. A discrete-level analog implementation is presented, showing the training of an XOR network as an example.
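The gradient-approximation scheme the abstract describes can be sketched numerically (a minimal software illustration, not the paper's analog circuit; the network shape, step sizes, and loop structure here are assumptions for illustration): each weight is perturbed by a small amount, the resulting change in the output error directly approximates that weight's gradient, and the weight is then updated by ordinary gradient descent, with no back-propagated error signals required.

```python
import numpy as np

# Minimal numerical sketch of weight perturbation, assuming a tiny
# 2-3-1 sigmoid network trained on XOR (the paper's example task).
# Each weight's gradient is approximated by a forward difference:
#   dE/dw  ~  (E(w + delta) - E(w)) / delta
# so learning needs only forward passes, not back-propagation.

rng = np.random.default_rng(0)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # XOR inputs
Y = np.array([0., 1., 1., 0.])                          # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def network_error(params):
    """Sum-squared error of the 2-3-1 network over the XOR set."""
    W1, b1, W2, b2 = params
    h = sigmoid(X @ W1 + b1)     # hidden layer activations
    out = sigmoid(h @ W2 + b2)   # one scalar output per pattern
    return float(np.sum((out - Y) ** 2))

# Illustrative initialization (sizes and scales are assumptions).
params = [rng.normal(0.0, 1.0, (2, 3)), np.zeros(3),
          rng.normal(0.0, 1.0, 3), np.zeros(1)]

delta = 1e-3   # perturbation size
lr = 0.5       # learning rate

initial_err = network_error(params)
for epoch in range(3000):
    base = network_error(params)
    grads = []
    for p in params:
        g = np.zeros_like(p)
        it = np.nditer(p, flags=['multi_index'])
        for _ in it:
            idx = it.multi_index
            saved = p[idx]
            p[idx] = saved + delta   # perturb one weight
            g[idx] = (network_error(params) - base) / delta
            p[idx] = saved           # restore it
        grads.append(g)
    for p, g in zip(params, grads):
        p -= lr * g                  # plain gradient descent

print(f"error before: {initial_err:.3f}  after: {network_error(params):.3f}")
```

With a small perturbation the forward difference approaches the true gradient, and the per-weight cost is just one extra forward evaluation of the network, which is what makes the scheme attractive for parallel analog hardware compared with wiring up the full back-propagation signal path.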
