First ACL Workshop on Ethics in Natural Language Processing

Building Better Open-Source Tools to Support Fairness in Automated Scoring

Abstract

Automated scoring of written and spoken responses is an NLP application that can significantly impact lives, especially when deployed as part of high-stakes tests such as the GRE® and the TOEFL®. Ethical considerations require that automated scoring algorithms treat all test-takers fairly. The educational measurement community has done significant research on fairness in assessments, and automated scoring systems must incorporate its recommendations. The best way to do that is to make automated, non-proprietary tools available to NLP researchers that directly incorporate these recommendations and generate the analyses needed to help identify and resolve biases in their scoring systems. In this paper, we attempt to provide such a solution.
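
To illustrate the kind of subgroup analysis the abstract refers to, here is a minimal, hypothetical Python sketch (not the tool described in the paper) of one check commonly recommended in the educational measurement literature: the standardized mean difference between machine and human scores within each test-taker subgroup. The column names ("group", "human_score", "machine_score") and the choice to normalize by the within-group human-score standard deviation are illustrative assumptions; other pooled-variance conventions exist.

# Hypothetical sketch of a subgroup fairness check for automated scoring.
# Not the paper's tool; column names and the normalization are assumptions.
import pandas as pd

def smd_by_group(df: pd.DataFrame) -> pd.Series:
    """Standardized mean difference (machine minus human) per subgroup.

    Values far from 0 flag subgroups that the automated scorer may be
    treating differently from the human raters.
    """
    def smd(sub: pd.DataFrame) -> float:
        diff = sub["machine_score"].mean() - sub["human_score"].mean()
        return diff / sub["human_score"].std(ddof=1)

    return df.groupby("group").apply(smd)

if __name__ == "__main__":
    # Toy illustrative data: group B is scored one point lower by the machine.
    scores = pd.DataFrame({
        "group": ["A", "A", "A", "B", "B", "B"],
        "human_score": [3, 4, 5, 3, 4, 5],
        "machine_score": [3, 4, 5, 2, 3, 4],
    })
    print(smd_by_group(scores))  # A: 0.0, B: -1.0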

