IEEE Computational Intelligence Magazine

Distributed Reservoir Computing with Sparse Readouts [Research Frontier]



Abstract

In a network of agents, a widespread problem is the need to estimate a common underlying function starting from locally distributed measurements. Real-world scenarios may not allow the presence of centralized fusion centers, requiring the development of distributed, message-passing implementations of the standard machine learning training algorithms. In this paper, we are concerned with the distributed training of a particular class of recurrent neural networks, namely echo state networks (ESNs). In the centralized case, ESNs have received considerable attention because they can be trained with standard linear regression routines. Based on this observation, in our previous work we introduced a decentralized algorithm, framed in the distributed optimization field, for training an ESN. In this paper, we focus on an additional sparsity property of the output layer of ESNs, which allows for very efficient implementations of the resulting networks. To evaluate the proposed algorithm, we test it on two well-known prediction benchmarks, namely the Mackey-Glass chaotic time series and the 10th-order nonlinear autoregressive moving average (NARMA) system.
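To make the training pipeline concrete, the following is a minimal, self-contained sketch of a centralized ESN with a sparsity-promoting readout, written in Python with NumPy and scikit-learn. The reservoir size, spectral radius, LASSO penalty, and the NARMA-10 generator below are illustrative assumptions rather than the authors' exact experimental setup; the distributed, message-passing variant discussed in the paper would replace the single LASSO fit with a consensus-based optimization carried out across the agents.

# Minimal illustrative sketch: centralized ESN with a sparse (L1-penalized) readout.
# Hyperparameters and the NARMA-10 generator are assumptions for demonstration only.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def narma10(T, rng):
    """Generate the standard 10th-order NARMA benchmark sequence."""
    u = rng.uniform(0.0, 0.5, size=T)
    y = np.zeros(T)
    for t in range(9, T - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * np.sum(y[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y

# Reservoir: fixed random recurrent weights, rescaled to a target spectral radius.
reservoir_size, spectral_radius, leak = 200, 0.9, 1.0
W_in = rng.uniform(-0.5, 0.5, size=(reservoir_size, 1))
W = rng.uniform(-0.5, 0.5, size=(reservoir_size, reservoir_size))
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Collect reservoir states for an input sequence u (leaky tanh units)."""
    x = np.zeros(reservoir_size)
    states = np.zeros((len(u), reservoir_size))
    for t, ut in enumerate(u):
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in[:, 0] * ut)
        states[t] = x
    return states

# Data: NARMA-10 sequence, discarding an initial washout period.
T, washout = 3000, 100
u, y = narma10(T, rng)
X = run_reservoir(u)[washout:]
target = y[washout:]

# Sparse readout: the only trained part of the ESN is the linear output layer,
# fitted here with an L1 (LASSO) penalty so that many readout weights are exactly zero.
readout = Lasso(alpha=1e-4, max_iter=10000).fit(X, target)
pred = readout.predict(X)
nrmse = np.sqrt(np.mean((pred - target) ** 2)) / np.std(target)
print(f"nonzero readout weights: {np.count_nonzero(readout.coef_)}/{reservoir_size}")
print(f"training NRMSE: {nrmse:.3f}")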
