Assessing Performance Competence in Training Games

Abstract

In-process assessment of trainees in game-based simulators is a challenging activity. It typically demands human instructor time and cost, and does not scale to the one-tutor-per-learner vision of computer-based learning. Moreover, evaluation by a human instructor is often subjective, and comparisons between learners are not accurate. Therefore, in this paper, we propose an automated, formula-driven quantitative evaluation method for assessing performance competence in serious training games. Our proposed method has been empirically validated in a game-based driving simulator with 7 subjects across 13 sessions, achieving accuracy of up to 90.25% when compared to an existing qualitative method. We believe that by incorporating quantitative evaluation methods like these, future training games could be enriched with more meaningful feedback and adaptive game-play, so as to better monitor and support player motivation, engagement, and learning performance.
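The paper does not reproduce its scoring formula here, but a formula-driven competence score of the kind described can be sketched as a weighted aggregate of normalized per-session driving metrics. The metric names, weights, and values below are purely illustrative assumptions, not the authors' actual model:

```python
# Hypothetical sketch of a formula-driven competence score.
# Assumption: each driving-simulator session yields per-metric scores
# normalized to [0, 1] (e.g. lane keeping, speed control, collision
# avoidance), which are combined into one score by a weighted mean.

def competence_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted mean of normalized per-metric session scores (in [0, 1])."""
    if set(metrics) != set(weights):
        raise ValueError("metrics and weights must cover the same keys")
    total_weight = sum(weights.values())
    return sum(weights[m] * metrics[m] for m in metrics) / total_weight

# Illustrative session: values and weights are invented for this sketch.
session = {"lane_keeping": 0.92, "speed_control": 0.85, "collision_avoidance": 1.0}
weights = {"lane_keeping": 2.0, "speed_control": 1.0, "collision_avoidance": 3.0}

score = competence_score(session, weights)  # a single competence value in [0, 1]
```

Such a single quantitative score per session is what makes automated comparison across learners and sessions possible, in contrast to subjective instructor ratings.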

