
Approaches Based on Markovian Architectural Bias in Recurrent Neural Networks



Abstract

Recent studies show that the state-space dynamics of a randomly initialized recurrent neural network (RNN) have interesting and potentially useful properties even without training. More precisely, when an RNN is initialized with small weights, the recurrent unit activities reflect the history of inputs presented to the network according to a Markovian scheme. This property of RNNs is called the Markovian architectural bias. Our work focuses on several techniques that make use of this architectural bias. The first technique is based on substituting the RNN output layer with a prediction model, making it possible to exploit the resulting state representation. The second approach, known as echo state networks (ESNs), is based on a large, untrained, randomly interconnected hidden layer that serves as a reservoir of interesting behavior. We have investigated both approaches and their combination and performed simulations to demonstrate their usefulness.
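The ESN idea described in the abstract can be illustrated with a minimal sketch (not the authors' code, and the task, reservoir size, and scaling constants below are illustrative assumptions): a large, untrained, randomly connected recurrent layer is driven by the input, its weights are kept small enough that the state is dominated by the recent input history (the Markovian architectural bias), and only a linear readout is fitted.

```python
# Minimal echo state network sketch in numpy; all hyperparameters are
# illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: one-step-ahead prediction of a noisy sine wave.
T = 2000
u = np.sin(np.arange(T) * 0.2) + 0.05 * rng.standard_normal(T)
inputs, targets = u[:-1], u[1:]          # predict the next input value

n_res = 200                              # reservoir size
W_in = rng.uniform(-0.5, 0.5, size=n_res)        # untrained input weights
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))  # untrained recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius to 0.9

# Drive the reservoir and collect its states; tanh keeps activities bounded.
states = np.zeros((len(inputs), n_res))
x = np.zeros(n_res)
for t, u_t in enumerate(inputs):
    x = np.tanh(W_in * u_t + W @ x)
    states[t] = x

# Fit only the linear readout by ridge regression, discarding a short washout.
washout, ridge = 100, 1e-6
S, y = states[washout:], targets[washout:]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)

pred = states @ W_out
print("one-step prediction MSE:", np.mean((pred[washout:] - y) ** 2))
```

Because the recurrent weights are scaled below unit spectral radius, recent inputs dominate the reservoir state, which is what makes the untrained random layer a usable state representation for the readout.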
