Journal of Endourology

Crowd-Sourced Assessment of Technical Skills: Differentiating Animate Surgical Skill Through the Wisdom of Crowds



Abstract

Background: Objective quantification of surgical skill is imperative as we enter a healthcare environment of quality improvement and performance-based reimbursement. The gold-standard assessment tools are infrequently used because they are time-intensive, cost-inefficient, and lack standardized practice. We hypothesized that valid performance scores of surgical skill can be obtained through crowdsourcing. Methods: Twelve surgeons of varying robotic surgical experience performed live porcine robot-assisted urinary bladder closures. Blinded video-recorded performances were scored by expert surgeon graders and by crowd workers on Amazon's Mechanical Turk using the Global Evaluative Assessment of Robotic Skills (GEARS) tool, which assesses five technical skill domains. Seven expert graders and 50 unique Mechanical Turk workers (each paid $0.75 per survey) evaluated each video. Global assessment scores were analyzed for correlation and agreement. Results: Six hundred Mechanical Turk workers completed the surveys in less than 5 hours, while the seven surgeon graders took 14 days. Video clips ranged from 2 to 11 minutes in duration. The correlation coefficient between the crowd workers' and the expert graders' scores was 0.95, and Cronbach's alpha was 0.93. Inter-rater reliability among the surgeon graders was 0.89. Conclusion: Crowdsourced surgical skills assessment yielded rapid, inexpensive agreement with the global performance scores given by expert surgeon graders. The crowdsourcing method may provide surgical educators and medical institutions with a boundless pool of procedural skills assessors to efficiently quantify technical skills for use in trainee advancement and hospital quality improvement.
