IEEE/ACM International Conference on Automated Software Engineering

Making Fair ML Software using Trustworthy Explanation



Abstract

Machine learning software is being used in many applications with large social impact (finance, hiring, admissions, criminal justice). But sometimes the behavior of this software is biased: it discriminates based on sensitive attributes such as sex or race. Prior work concentrated on finding and mitigating bias in ML models. A recent trend is using instance-based, model-agnostic explanation methods such as LIME [36] to uncover bias in model predictions. Our work concentrates on identifying the shortcomings of current bias measures and explanation methods. We show how our proposed method, based on K nearest neighbors, can overcome those shortcomings and find the underlying bias of black-box models. Our results are more trustworthy and helpful for practitioners. Finally, we describe our future framework, which combines explanation and planning to build fair software.
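The abstract does not spell out the K-nearest-neighbors method itself. As an illustration only, one common nearest-neighbor consistency check on a black-box model's predictions, in the spirit the abstract describes, could be sketched as follows. All data, feature roles, and the `knn_inconsistency` helper are invented for this sketch and are not the paper's actual method.

```python
# Sketch: score each instance by how often its k nearest neighbors
# (measured on NON-sensitive features only) receive a different
# prediction from a black-box model. A high score suggests the model's
# output depends on something beyond the non-sensitive features,
# possibly the sensitive attribute. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))               # column 0 plays the "sensitive attribute"
y = (X[:, 1] + X[:, 2] > 0).astype(int)     # synthetic labels
model = LogisticRegression().fit(X, y)      # stands in for any black-box model

def knn_inconsistency(model, X, nonsensitive_cols, k=5):
    """For each instance, the fraction of its k nearest neighbors
    (nearest in non-sensitive feature space) whose model prediction
    differs from the instance's own prediction."""
    Xn = X[:, nonsensitive_cols]
    nn = NearestNeighbors(n_neighbors=k + 1).fit(Xn)
    _, idx = nn.kneighbors(Xn)              # idx[:, 0] is the instance itself
    preds = model.predict(X)
    neighbor_preds = preds[idx[:, 1:]]      # drop self, keep k neighbors
    return (neighbor_preds != preds[:, None]).mean(axis=1)

scores = knn_inconsistency(model, X, nonsensitive_cols=[1, 2, 3], k=5)
print(f"mean neighbor disagreement: {scores.mean():.3f}")
```

Instances with high disagreement scores are candidates for closer inspection with an instance-based explainer such as LIME.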


