Frontiers in Plant Science

Opening Pandora's box: cause and impact of errors on plant pigment studies


Abstract

Too many errors in scientific publications, too many in plant sciences, too

Today, there is an astonishing volume of scientific information available to researchers, which can be easily accessed through powerful search tools. Yet the question now is whether all this vast amount of information is reliable. In this sense, a “bad science” controversy arose recently when many Open Access (OA) journals (more than half of them) published a false, error-ridden paper that had been submitted in order to test the publishing ethics of these journals (Bohannon, 2013). This fake article was published mainly by fraudulent journals, but it was also accepted by a number of OA journals of renowned publishers with peer-review systems. The failure to reject an article full of errors revealed that the system's gearbox is leaking somewhere. The carelessness of peer review in a number of OA journals has opened a Pandora's box and, what is more disconcerting, nobody can guarantee that the same problem does not also affect regular (non-OA) journals. Traditionally, it has been assumed that scientific journals should detect and correct all these failings through peer review before publication. Regrettably, as we show in this communication, the system is far from perfect (Pulverer, 2010; Székely et al., 2014). Whilst the detection of laboratory errors is an issue of great attention in medicine (Bonini et al., 2002; Carraro and Plebani, 2007; Hammerling, 2012), in experimental science it does not seem to be treated as a crucial task.

We became aware of this concern when we performed a literature compilation with the aim of providing a comprehensive evaluation of the responses of photosynthetic pigment composition to environmental conditions (Esteban et al., 2015). In this survey, we compiled data from 525 papers spanning 21 years (1991–2011). A critical analysis of the data showed that a considerable number of papers, 96 out of 525, contained out-of-range values, errors or inconsistencies in at least one of the reported parameters. As an initial screening tool for detecting these errors, we flagged values lying more than three standard deviations from the mean. Data outside this interval were subsequently examined (Osborne and Overbay, 2004) in order to identify whether they arose from the inherent variability of the data or from genuine mistakes (for a detailed description see Esteban et al., 2015). We then carried out an in-depth analysis of these questionable data, establishing a new framework for our study: the data were re-evaluated and classified on the basis of the type of error, I, II, or III. Error I (n = 46) includes those articles containing values out of range (outside ±3 standard deviations from the mean), likely due to pre-analytical and analytical flaws. Error II (n = 37) refers to the presence of wrong values (most of them more than 1000-fold higher than reference values), most likely caused by post-analytical errors. Error III (n = 13) includes those articles with mistaken units, probably introduced during the final phase of publishing and editing. Setting aside the unethical manipulation of data, errors may thus occur as a result of experimental (analytical or methodological), mathematical or editing mistakes.
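As a rough illustration of this screening step, the Python sketch below applies a mean ± 3 SD filter to a made-up compilation of one pigment parameter and gives flagged values a coarse Error I/II label. The numbers, the paper labels, and the use of a 1000-fold cut-off as an automatic decision rule are invented for demonstration; they do not reproduce the actual procedure of Esteban et al. (2015).

import numpy as np

# Hypothetical compilation of one pigment parameter gathered from many papers
# (arbitrary units); in the real survey the reference distribution came from 525 papers.
rng = np.random.default_rng(0)
compiled = rng.normal(loc=400.0, scale=40.0, size=500)

mean, sd = compiled.mean(), compiled.std(ddof=1)
lower, upper = mean - 3 * sd, mean + 3 * sd   # initial screening interval (mean +/- 3 SD)

# Invented values "reported" by three hypothetical papers, to be screened
reported = {"paper_A": 415.0, "paper_B": 610.0, "paper_C": 450000.0}

for paper, value in reported.items():
    if lower <= value <= upper:
        verdict = "within +/- 3 SD of the compiled mean: no flag"
    elif value / mean >= 1000:
        # roughly mimics Error II: wrong values, often ~1000-fold above reference
        verdict = "flagged, Error II candidate (post-analytical mistake?)"
    else:
        # roughly mimics Error I: out of range but of plausible magnitude
        verdict = "flagged, Error I candidate (pre-analytical/analytical flaw?)"
    print(f"{paper}: {value:g} -> {verdict}")

In practice, as the abstract stresses, flagged values are only candidates for inspection; whether they reflect genuine biological variability or a mistake has to be judged from the paper itself.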
For the case studied here (pigment determinations), we have identified several potential sources of error, which may occur at any stage of the research process: (i) methodological errors during the pre-analytical and analytical phases: inappropriate specimen/sample collection and preservation, labeling errors, wrong biomass or leaf-area measurements, incomplete extraction, malfunction of instruments, incorrect compound identification, pipetting errors, etc.; (ii) data analysis errors in the post-analytical phase: mishandling of mathematics in spreadsheets, mistakes in the preparation of graphics or tables, improper data entry, and failures in reporting; and (iii) publishing and editing errors: confusion in units, typing errors such as Latin instead of Greek letters, or errors in graph scales (see Table S1 for a complete list of errors, tips and solutions). The data published in regular papers do not allow an assessment of which of these sources caused a given mistake. However, in other fields, such as analytical medicine, in which traceability is easier thanks to the application of quality assurance protocols, most errors occur during the pre- and post-analytical phases (Hammerling, 2012).

How, where and when these errors appear

As an example of the inaccuracy of pigment measurements, we have performed an analysis of published data on pigment composition in the model plant Arabidopsis thaliana ecotype Columbia, including all the literature available in the major journals of this field during the period 1991–2011 (see Esteban et al., 2015 for details). It is assumed that plants cultivated under similar conditions in different laboratories should not differ greatly in th
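Unit confusions and magnitude slips of the kind listed under (iii) above can often be recognised because multiplying or dividing the suspect value by a round factor (10, 100, 1000) brings it back into a plausible interval. The sketch below illustrates that reasoning; the plausible range and the slip factors are assumptions chosen for demonstration, not published reference limits.

# Assumed plausible interval for one pigment parameter (arbitrary units) and the
# round factors by which units are most often confused; both are placeholders.
PLAUSIBLE_RANGE = (200.0, 600.0)
COMMON_SLIPS = {
    1e3: "factor 1000 (e.g. mg vs ug, or mmol vs umol)",
    1e2: "factor 100 (e.g. per g vs per 100 g)",
    1e1: "factor 10 (misplaced decimal point)",
}

def explain_out_of_range(value: float) -> str:
    """Check whether an implausible value is explained by a simple unit slip."""
    lo, hi = PLAUSIBLE_RANGE
    if lo <= value <= hi:
        return "within the assumed plausible range"
    for factor, meaning in COMMON_SLIPS.items():
        for candidate in (value / factor, value * factor):
            if lo <= candidate <= hi:
                return f"would be plausible as {candidate:g}: possible {meaning}"
    return "out of range and not explained by a simple unit slip: inspect the methods"

print(explain_out_of_range(415.0))     # plausible as reported
print(explain_out_of_range(450000.0))  # plausible after dividing by 1000
print(explain_out_of_range(3.9))       # plausible after multiplying by 100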