Annual Conference on Neural Information Processing Systems

Complex Inference in Neural Circuits with Probabilistic Population Codes and Topic Models



Abstract

Recent experiments have demonstrated that humans and animals typically reason probabilistically about their environment. This ability requires a neural code that represents probability distributions and neural circuits that are capable of implementing the operations of probabilistic inference. The proposed probabilistic population coding (PPC) framework provides a statistically efficient neural representation of probability distributions that is both broadly consistent with physiological measurements and capable of implementing some of the basic operations of probabilistic inference in a biologically plausible way. However, these experiments and the corresponding neural models have largely focused on simple (tractable) probabilistic computations such as cue combination, coordinate transformations, and decision making. As a result, it remains unclear how to generalize this framework to more complex probabilistic computations. Here we address this shortcoming by showing that a very general approximate inference algorithm known as Variational Bayesian Expectation Maximization can be naturally implemented within the linear PPC framework. We apply this approach to a generic problem faced by any given layer of cortex, namely the identification of latent causes of complex mixtures of spikes. We identify a formal equivalence between this spike-pattern demixing problem and the topic models used for document classification, in particular Latent Dirichlet Allocation (LDA). We then construct a neural network implementation of variational inference and learning for LDA that utilizes a linear PPC. This network relies critically on two non-linear operations: divisive normalization and super-linear facilitation, both of which are ubiquitously observed in neural circuits. We also demonstrate how online learning can be achieved using a variation of Hebb's rule, and we describe an extension of this work which allows us to deal with time-varying and correlated latent causes.
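To make the LDA connection concrete, the sketch below shows the standard mean-field variational updates for LDA applied to a vector of spike counts treated as word counts. This is a minimal numerical illustration under our own assumptions, not the paper's neural implementation: the function name `lda_mean_field`, the variable names, and the use of NumPy/SciPy are ours. It does, however, exhibit the two operations the abstract highlights: a super-linear (exponential) step followed by a divisive normalization across latent causes.

```python
import numpy as np
from scipy.special import digamma

def lda_mean_field(word_counts, log_beta, alpha, n_iters=50):
    """Mean-field variational updates for a single LDA 'document'.

    Here the 'document' is a vector of spike counts treated as word counts.
    word_counts : (V,) counts of each observed 'word' (spike-pattern feature)
    log_beta    : (K, V) log topic-word probabilities (one row per latent cause)
    alpha       : (K,) Dirichlet prior over topic proportions
    """
    K, V = log_beta.shape
    gamma = alpha + word_counts.sum() / K          # initialize Dirichlet parameters
    for _ in range(n_iters):
        # E[log theta_k] under q(theta) = Dirichlet(gamma)
        elog_theta = digamma(gamma) - digamma(gamma.sum())
        # Unnormalized per-word topic responsibilities (super-linear: exponentiation)
        log_phi = elog_theta[:, None] + log_beta   # (K, V)
        phi = np.exp(log_phi - log_phi.max(axis=0))
        # Divisive normalization across latent causes
        phi /= phi.sum(axis=0, keepdims=True)
        # Update Dirichlet parameters from expected topic counts
        gamma = alpha + phi @ word_counts
    return gamma, phi
```

A call such as `lda_mean_field(counts, np.log(topics), np.full(K, 0.1))` returns approximate posterior Dirichlet parameters over the latent causes of the observed counts; the paper's contribution lies in carrying out updates of this kind with linear probabilistic population codes, which this sketch does not attempt to model.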
