Levels of evidence at the AAOS meeting: can authors rate their own submissions, and do other raters agree?


Abstract

BACKGROUND: A hierarchy of levels of evidence is commonly used to categorize the methodology of scientific studies in order to assist in their critical analysis. Organizers of large scientific meetings are faced with the problem of whether and how to assign levels of evidence to the studies that are presented. The present study was performed to investigate two hypotheses: (1) that session moderators and others can consistently assign a level of evidence to papers presented at national meetings, and (2) that there is no difference between the level of evidence provided by the author of a paper and the level of evidence assigned by independent third parties (e.g., members of the Program Committee).

METHODS: A subset of papers accepted for presentation at the 2007 American Academy of Orthopaedic Surgeons (AAOS) Annual Meeting was used to evaluate differences in the levels of evidence assigned by the authors, by volunteer graders who had access to only the abstract, and by session moderators who had access to the full paper. The approved AAOS levels of evidence were used. Statistical tests of interrater correlation were performed to compare the various raters with each other, with significance appropriately adjusted for multiple comparisons.

RESULTS: Interrater agreement was better than chance for most comparisons between different graders; however, the level of agreement ranged from slight to moderate (kappa = 0.16 to 0.46), a finding confirmed by agreement coefficient statistics. In general, raters had difficulty agreeing on whether a study comprised Level-I or Level-II evidence, and authors graded the level of evidence of their own work more favorably than did others who graded the abstract.

CONCLUSIONS: When abstracts submitted to the AAOS Annual Meeting were rated, there was substantial inconsistency in the assignment of the level of evidence to a given study by different observers, and there was some evidence that authors may not rate their own work the same way independent reviewers do. This has important implications for the use of levels of evidence at scientific meetings.
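The agreement reported above is expressed as Cohen's kappa, a chance-corrected measure of interrater agreement. The Python sketch below is a minimal illustration of how that statistic is computed for a single pair of raters assigning levels of evidence; the rating data, the rater pairing, and the single unadjusted comparison are assumptions for illustration only and do not reproduce the study's actual analysis, which compared several rater groups and adjusted significance for multiple comparisons.

```python
from collections import Counter

# Hypothetical level-of-evidence ratings (I-IV) assigned to the same ten
# submissions by two raters, e.g. an author and an independent abstract grader.
# These values are illustrative and are not taken from the study.
author_ratings = ["I", "II", "I", "III", "II", "I", "IV", "II", "III", "I"]
grader_ratings = ["II", "II", "I", "III", "III", "II", "IV", "II", "III", "II"]

def cohen_kappa(r1, r2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(r1)
    categories = set(r1) | set(r2)

    # Observed agreement: proportion of items rated identically.
    p_observed = sum(a == b for a, b in zip(r1, r2)) / n

    # Expected agreement: probability that both raters pick the same category
    # by chance, based on each rater's marginal category frequencies.
    c1, c2 = Counter(r1), Counter(r2)
    p_expected = sum((c1[c] / n) * (c2[c] / n) for c in categories)

    return (p_observed - p_expected) / (1 - p_expected)

kappa = cohen_kappa(author_ratings, grader_ratings)
# Prints roughly 0.46 for the illustrative data above; values of 0.16 to 0.46
# are conventionally read as slight-to-moderate agreement.
print(f"kappa = {kappa:.2f}")
```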
