Journal: Complexity

Significance of Joint-Spike Events Based on Trial-Shuffling by Efficient Combinatorial Methods



Abstract

The assembly hypothesis suggests that information processing in the cortex is mediated by groups of neurons expressing coordinated spiking activity. The unitary events analysis was therefore designed to detect conspicuous joint-spike events in multiple single-unit recordings and to evaluate their statistical significance. The null hypothesis of the associated test assumes independent Poisson processes and leads to a parametric significance estimate. To allow for arbitrary processes, we suggest here basing the significance estimation on trial shuffling and resampling. In this scheme the null hypothesis is implemented by combining spike trains from non-simultaneous trials and counting the joint-spike events. The coincidence distribution used for significance estimation is generated by repeated resampling. The number of all possible recombinations, however, grows dramatically with the number of trials and neurons, making exhaustive enumeration impractical for a user-interactive implementation of the analysis. We previously suggested a Monte-Carlo-based resampling procedure and demonstrated that it yields an appropriate estimate of the distribution and a reliable significance estimate. Here, in contrast, we present an exact solution. By rewriting the statistical problem in terms of certain macrostates, we can systematically sample the coincidence counts from all trial combinations. In addition, we restrict the generating process to those counts forming the relevant tail of the distribution. The computationally efficient implementation uses the concept of partitions.
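The trial-shuffling scheme summarized in the abstract can be sketched as follows. This is a minimal illustrative version, not the authors' implementation: it assumes binned 0/1 spike trains per trial, pairs only non-simultaneous trials via a derangement-style permutation, and estimates a one-sided tail probability by Monte Carlo resampling.

```python
import random

def coincidence_count(train_a, train_b):
    """Count joint-spike events between two binned 0/1 spike trains."""
    return sum(a & b for a, b in zip(train_a, train_b))

def trial_shuffle_pvalue(neuron1, neuron2, n_resamples=10000, seed=0):
    """Monte-Carlo trial shuffling: build the null coincidence distribution
    by pairing spike trains from non-simultaneous trials.

    neuron1, neuron2: lists of trials, each trial a list of 0/1 bins.
    Returns (empirical coincidence count, one-sided p-value).
    """
    rng = random.Random(seed)
    n_trials = len(neuron1)

    # Empirical count: joint-spike events summed over simultaneous trials.
    emp = sum(coincidence_count(a, b) for a, b in zip(neuron1, neuron2))

    null_counts = []
    for _ in range(n_resamples):
        # Shuffle neuron2's trial order until no trial keeps its original
        # index, so every pairing combines non-simultaneous trials.
        perm = list(range(n_trials))
        while True:
            rng.shuffle(perm)
            if all(i != p for i, p in enumerate(perm)):
                break
        null_counts.append(
            sum(coincidence_count(neuron1[i], neuron2[perm[i]])
                for i in range(n_trials))
        )

    # One-sided p-value: fraction of shuffled counts >= empirical count.
    p = sum(c >= emp for c in null_counts) / n_resamples
    return emp, p
```

The exact method the abstract announces replaces the Monte Carlo loop with a systematic enumeration of macrostates over all trial combinations, restricted to the relevant tail; the sketch above corresponds to the earlier resampling procedure it improves upon.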
