PubMed Central > Frontiers in Neuroinformatics

Reproducing Polychronization: A Guide to Maximizing the Reproducibility of Spiking Network Models



Abstract

Any modeler who has attempted to reproduce a spiking neural network model from its description in a paper has discovered what a painful endeavor this is. Even when all parameters appear to have been specified, which is rare, typically the initial attempt to reproduce the network does not yield results that are recognizably akin to those in the original publication. Causes include inaccurately reported or hidden parameters (e.g., wrong unit or the existence of an initialization distribution), differences in implementation of model dynamics, and ambiguities in the text description of the network experiment. The very fact that adequate reproduction often cannot be achieved until a series of such causes have been tracked down and resolved is in itself disconcerting, as it reveals unreported model dependencies on specific implementation choices that either were not clear to the original authors, or that they chose not to disclose. In either case, such dependencies diminish the credibility of the model's claims about the behavior of the target system. To demonstrate these issues, we provide a worked example of reproducing a seminal study for which, unusually, source code was provided at time of publication. Despite this seemingly optimal starting position, reproducing the results was time consuming and frustrating. Further examination of the correctly reproduced model reveals that it is highly sensitive to implementation choices such as the realization of background noise, the integration timestep, and the thresholding parameter of the analysis algorithm. From this process, we derive a guideline of best practices that would substantially reduce the investment in reproducing neural network studies, whilst simultaneously increasing their scientific quality. We propose that this guideline can be used by authors and reviewers to assess and improve the reproducibility of future network models.
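The abstract's point about sensitivity to the integration timestep can be illustrated with a minimal sketch. The snippet below (an assumption for illustration, not the authors' code) forward-Euler-integrates a single Izhikevich neuron, the neuron model underlying the polychronization study, at two different timesteps and shows that the resulting spike trains differ even though the model and input are identical:

```python
def izhikevich_spike_times(dt, t_max=200.0, a=0.02, b=0.2,
                           c=-65.0, d=8.0, I=10.0):
    """Forward-Euler integration of one Izhikevich neuron.

    Uses the standard dynamics dv/dt = 0.04 v^2 + 5 v + 140 - u + I,
    du/dt = a (b v - u), with spike threshold at v = 30 mV and reset
    v -> c, u -> u + d. Returns spike times in ms; dt is the timestep.
    """
    v = -65.0
    u = b * v
    spikes = []
    t = 0.0
    while t < t_max:
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:       # spike: record time and reset
            spikes.append(t)
            v = c
            u += d
        t += dt
    return spikes

coarse = izhikevich_spike_times(dt=1.0)
fine = izhikevich_spike_times(dt=0.1)
# Same neuron, same constant input: the spike trains diverge with dt.
print(len(coarse), len(fine))
```

With a coupled network, such per-neuron timing differences compound, which is why an unreported timestep (or integration scheme) can make a published network irreproducible in practice.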
