Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data

International Conference on Artificial Neural Networks


Abstract

Recurrent Neural Networks (RNNs) are popular models of brain function. The typical training strategy is to adjust their input-output behavior so that it matches that of the biological circuit of interest. Even though this strategy ensures that the biological and artificial networks perform the same computational task, it does not guarantee that their internal activity dynamics match. This means that the trained RNNs might end up performing the task using a different internal computational mechanism. In this work, we introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics. We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model of motor cortical and muscle activity dynamics. Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons sampled from the biological network. Furthermore, we show that training the RNNs with this method significantly improves their generalization performance. Overall, our results suggest that the proposed method is suitable for building powerful functional RNN models, which automatically capture important computational properties of the biological circuit of interest from sparse neural recordings.
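To make the described training strategy concrete, the following is a minimal, hypothetical sketch of one way such a combined objective could look: an RNN is optimized with an ordinary task-output loss plus a penalty that matches the activity of a small subset of its hidden units to "recorded" activity from a reference network. This is not the authors' implementation; the PyTorch setup, all sizes, the pairing of hidden units with recorded neurons, the `lambda_dyn` weight, and the placeholder data are illustrative assumptions.

```python
# Sketch of training an RNN on both task output and sparse internal dynamics.
# All data below are random placeholders; in practice the targets would come
# from the reference neural model or from neural recordings.
import torch
import torch.nn as nn

n_inputs, n_hidden, n_outputs = 10, 200, 2
n_recorded = 20                      # sparse subset of units with "recordings"
T, batch = 100, 32                   # time steps, trials

class RateRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(n_inputs, n_hidden, nonlinearity="tanh", batch_first=True)
        self.readout = nn.Linear(n_hidden, n_outputs)

    def forward(self, x):
        h, _ = self.rnn(x)           # (batch, T, n_hidden) hidden activity
        return self.readout(h), h

model = RateRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

inputs = torch.randn(batch, T, n_inputs)          # task inputs
target_out = torch.randn(batch, T, n_outputs)     # desired task output
target_rates = torch.randn(batch, T, n_recorded)  # recorded reference activity
recorded_idx = torch.randperm(n_hidden)[:n_recorded]  # units paired with recordings

lambda_dyn = 1.0                     # weight of the internal-dynamics term
for step in range(200):
    opt.zero_grad()
    out, h = model(inputs)
    loss_task = nn.functional.mse_loss(out, target_out)
    loss_dyn = nn.functional.mse_loss(h[:, :, recorded_idx], target_rates)
    loss = loss_task + lambda_dyn * loss_dyn
    loss.backward()
    opt.step()
```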
