Journal of Endourology

C-SATS: Assessing Surgical Skills Among Urology Residency Applicants



Abstract

Background: We hypothesized that surgical skills assessment could aid in the selection process of medical student applicants to a surgical program. Recently, crowdsourcing has been shown to provide an accurate assessment of surgical skills at all levels of training. We compared expert and crowd assessment of surgical tasks performed by resident applicants during their interview day at the urology program at the University of California, Irvine.

Materials and Methods: Twenty-five resident interviewees performed four tasks: open square knot tying, laparoscopic peg transfer, robotic suturing, and skill task 8 on the LAP Mentor™ (Simbionix Ltd., Lod, Israel). Faculty experts and crowd workers (Crowd-Sourced Assessment of Technical Skills [C-SATS], Seattle, WA) assessed recorded performances using the Objective Structured Assessment of Technical Skills (OSATS), Global Evaluative Assessment of Robotic Skills (GEARS), and Global Operative Assessment of Laparoscopic Skills (GOALS) validated assessment tools.

Results: Overall, 3938 crowd assessments were obtained for the four tasks in less than 3.5 hours, whereas the average time to receive 150 expert assessments was 22 days. Inter-rater agreement between expert and crowd assessment scores was 0.62 for open knot tying, 0.92 for laparoscopic peg transfer, and 0.86 for robotic suturing. Agreement between applicant rank on LAP Mentor skill task 8 and crowd assessment was 0.32. The crowd match rank based solely on skills performance did not compare well with the final faculty match rank list (0.46); however, none of the bottom five crowd-rated applicants appeared in the top five expert-rated applicants, and none of the top five crowd-rated applicants appeared in the bottom five expert-rated applicants.

Conclusions: Crowd-sourced assessment of resident applicant surgical skills has good inter-rater agreement with expert physician raters but not with a computer-based objective motion metrics software assessment. Overall applicant rank was affected to some degree by the crowd performance rating.
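The abstract reports rank-level agreement between crowd and expert ratings without naming the statistic used. The sketch below is an illustration only: it computes a Spearman rank correlation between two sets of per-applicant scores. The choice of statistic and the score values are assumptions for demonstration, not data or methods from the study.

# A minimal sketch, not the study's analysis code: one plausible way to quantify
# agreement between crowd and expert ratings of the same recorded performances.
# The abstract does not state which agreement statistic was used; Spearman rank
# correlation and the scores below are illustrative assumptions only.
from scipy.stats import spearmanr

# Hypothetical per-applicant mean scores for a single task (e.g., robotic suturing).
expert_scores = [18.5, 22.0, 15.5, 20.0, 24.5, 17.0]
crowd_scores = [17.8, 21.5, 16.0, 19.2, 23.9, 18.1]

# Spearman's rho compares the rank orderings of the two score lists.
rho, p_value = spearmanr(expert_scores, crowd_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")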

