
Machine learning's limitations in avoiding automation of bias

Abstract

The use of predictive systems has become more widespread with the development of the related computational methods and the evolution of the sciences in which these methods are applied (Solon and Selbst, Calif L Rev 104:671-732, 2016; Pedreschi et al. 2007). The methods referred to include machine learning techniques, face and/or voice recognition, temperature mapping, and others within the artificial intelligence domain. These techniques are being applied to solve problems in socially and politically sensitive areas such as crime prevention and justice management, crowd management, and emotion analysis, to mention just a few. However, the application of these methods can nowadays produce dissimilar predictions that result in misclassification, for example in conviction risk assessment (Office of Probation and Pretrial Services 2011) or in the decision-making processes behind public policy design (Lange 2015). The goal of this paper is to identify current gaps in achieving fairness in predictive systems within artificial intelligence by analyzing the academic and scientific literature available up to 2020. To achieve this goal, we gathered the materials available in the Web of Science and Scopus from the last 5 years and analyzed the different proposed methods and their results in relation to bias as an emergent issue in the artificial intelligence field of study. Our tentative conclusions indicate that machine learning has some intrinsic limitations that lead to the automation of bias when designing predictive algorithms. Consequently, other methods should be explored, or we should redefine the way current machine learning approaches are used when building decision-making/decision-support systems for crucial institutions of our political systems, such as the judicial system, to mention just one.
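The misclassification disparities the abstract points to, such as those found in conviction risk assessment, are commonly made concrete by comparing error rates across demographic groups. The following minimal Python sketch is not taken from the paper: the records, group labels, and decision threshold are all synthetic assumptions, used only to illustrate how a gap in false positive rates between groups can be measured.

```python
# Minimal sketch (synthetic data, not from the paper): compare false
# positive rates of a toy "high risk" classifier across two groups.

from collections import defaultdict

# Hypothetical records: (group, risk_score, reoffended)
records = [
    ("A", 0.82, False), ("A", 0.61, False), ("A", 0.71, True),
    ("A", 0.35, False), ("B", 0.44, False), ("B", 0.52, True),
    ("B", 0.30, False), ("B", 0.28, False), ("B", 0.77, True),
]

THRESHOLD = 0.5  # assumed cut-off: score >= 0.5 means "high risk"

def false_positive_rates(rows, threshold):
    """Per-group rate of non-reoffenders who are labeled high risk."""
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # actual negatives per group
    for group, score, reoffended in rows:
        if not reoffended:
            neg[group] += 1
            if score >= threshold:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

rates = false_positive_rates(records, THRESHOLD)
print(rates)  # e.g. {'A': 0.67, 'B': 0.0} on the synthetic records above
# A large gap between groups is the kind of disparity that
# equalized-odds style fairness criteria are designed to flag.
```

False positive rates are used here only as one example criterion; the same group-wise comparison applies to other metrics (false negative rates, selection rates) discussed in the fairness literature the paper surveys.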
