Neurocomputing

A sparse code increases the speed and efficiency of neuro-dynamic programming for optimal control tasks with correlated inputs



Abstract

Sparse codes in neuroscience have been suggested to offer certain computational advantages over other neural representations of sensory data. To explore this viewpoint, a sparse code is used to represent natural images in an optimal control task solved with neuro-dynamic programming, and its computational properties are investigated. The central finding is that when the feature inputs to a linear network are correlated, an over-complete sparse code increases the memory capacity of the network efficiently, beyond what is possible for any complete code with the same-sized input, and also increases the speed at which the network weights are learned. A complete sparse code is found to maximise the memory capacity of a linear network by decorrelating its feature inputs, transforming the design matrix of the least-squares problem into one of full rank. It also improves the conditioning of the Hessian matrix of the least-squares problem, thereby increasing the rate of convergence to the optimal network weights. Other types of decorrelating codes would achieve this too. However, an over-complete sparse code is found to be approximately decorrelating: it extracts a larger number of approximately decorrelated features from the same-sized input, allowing it to increase memory capacity efficiently beyond what any complete code can achieve. A 2.25-times over-complete sparse code is shown to at least double memory capacity compared with a complete sparse code using the same input. This is used in sequential learning to store a potentially large number of optimal control tasks in the network, while catastrophic forgetting is avoided using a partitioned representation, yielding a cost-to-go function approximator that generalises over the states in each partition. Advantages of sparse codes over dense codes and local codes are also discussed. Crown Copyright (C) 2020 Published by Elsevier B.V. All rights reserved.
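The conditioning argument in the abstract can be made concrete with a small numerical sketch. The Python snippet below is illustrative only, not code from the paper: it builds a strongly correlated two-feature input, applies ZCA whitening as one instance of the "other types of decorrelating codes" the abstract mentions, and compares the condition number of the least-squares Hessian and the number of gradient-descent iterations needed to learn the weights. The helper `gd_steps` and all parameter values are assumptions chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D features: both coordinates are dominated by a shared latent z.
n = 500
z = rng.standard_normal((n, 1))
X = np.hstack([z + 0.1 * rng.standard_normal((n, 1)),
               z + 0.1 * rng.standard_normal((n, 1))])
w_true = np.array([1.5, -2.0])
y = X @ w_true + 0.01 * rng.standard_normal(n)

# ZCA whitening: one example of a decorrelating code. After the transform,
# the feature covariance (and hence the least-squares Hessian) is ~ identity.
C = X.T @ X / n
evals, evecs = np.linalg.eigh(C)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T
Xw = X @ W

def gd_steps(A, y, lr, tol=1e-8, max_iter=100_000):
    """Gradient descent on 0.5*||A w - y||^2 / n; return iterations to converge."""
    w = np.zeros(A.shape[1])
    for k in range(max_iter):
        g = A.T @ (A @ w - y) / len(y)
        w -= lr * g
        if np.linalg.norm(g) < tol:
            return k
    return max_iter

for name, A in [("correlated", X), ("decorrelated", Xw)]:
    H = A.T @ A / n                            # Hessian of the least-squares cost
    lr = 1.0 / np.linalg.eigvalsh(H).max()     # stable step size for this Hessian
    print(f"{name}: cond(H) = {np.linalg.cond(H):.1f}, "
          f"iterations = {gd_steps(A, y, lr)}")
```

With this construction the correlated Hessian has a condition number of roughly two hundred and gradient descent needs a proportionally large number of iterations, whereas the whitened Hessian is close to the identity and convergence takes only a handful of steps. This is the mechanism the abstract describes by which a decorrelating sparse code speeds up learning of the network weights; the paper's over-complete sparse code additionally extracts more approximately decorrelated features than the whitening shown here, which is how it raises memory capacity as well.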
