Frontiers in Computational Neuroscience

Tracking Replicability as a Method of Post-Publication Open Evaluation


Abstract

Recent reports have suggested that many published results are unreliable. To increase the reliability and accuracy of published papers, multiple changes have been proposed, such as changes in statistical methods. We support such reforms. However, we believe that the incentive structure of scientific publishing must change for such reforms to be successful. Under the current system, the quality of individual scientists is judged on the basis of their number of publications and citations, with journals similarly judged via numbers of citations. Neither of these measures takes into account the replicability of the published findings, as false or controversial results are often particularly widely cited. We propose tracking replications as a means of post-publication evaluation, both to help researchers identify reliable findings and to incentivize the publication of reliable results. Tracking replications requires a database linking published studies that replicate one another. As any such database is limited by the number of replication attempts published, we propose establishing an open-access journal dedicated to publishing replication attempts. Data quality of both the database and the affiliated journal would be ensured through a combination of crowd-sourcing and peer review. As reports in the database are aggregated, ultimately it will be possible to calculate replicability scores, which may be used alongside citation counts to evaluate the quality of work published in individual journals. In this paper, we lay out a detailed description of how this system could be implemented, including mechanisms for compiling the information, ensuring data quality, and incentivizing the research community to participate.
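The abstract describes a database that links published replication attempts to the original studies and aggregates those reports into a replicability score used alongside citation counts, but it does not commit to a particular data model or scoring formula. The sketch below is a minimal Python illustration under assumed details: the `Study` class, the placeholder DOI, and the simple success-ratio score are hypothetical choices for exposition, not the authors' specification.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Study:
    """A published study tracked in a hypothetical replication database."""
    doi: str                     # placeholder identifier, not a real DOI
    citations: int = 0
    # One entry per published replication attempt linked to this study:
    # True marks a successful replication, False a failed one.
    replication_attempts: list[bool] = field(default_factory=list)

    def replicability_score(self) -> Optional[float]:
        """Fraction of recorded replication attempts that succeeded.

        Returns None when no attempts exist, so an untested study stays
        distinguishable from one that repeatedly failed to replicate.
        """
        if not self.replication_attempts:
            return None
        return sum(self.replication_attempts) / len(self.replication_attempts)


if __name__ == "__main__":
    study = Study(doi="10.1234/example.doi", citations=120)
    # Reports contributed through crowd-sourcing and vetted by peer review
    study.replication_attempts.extend([True, True, False])
    print(study.replicability_score())  # 0.666..., reported alongside citations
```

In this toy version, an untested study reports no score rather than a perfect one, which mirrors the abstract's point that replicability information only becomes meaningful as replication reports accumulate in the database.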
