The growing use of algorithms in social and economic life has raised a concern: that they may inadvertently discriminate against certain groups. For example, one recent study found that natural language processing algorithms can embody basic gender biases, such as associating the word nurse more closely with the word she than with the word he (Caliskan, Bryson, and Narayanan 2017). Because the data used to train these algorithms are themselves tinged with stereotypes and past discrimination, it is natural to worry that biases are being "baked in."

We consider this problem in the context of a specific but important case, one that is particularly amenable to economic analysis: using algorithmic predictions to guide decisions (Kleinberg et al. 2015). For example, predictions about a defendant's safety risk or flight risk are increasingly being proposed as a means to guide judges' decisions about whether to grant bail. Discriminatory predictions in these cases could have large consequences. One can easily imagine how this might happen: recidivism predictions may be polluted by past arrest data that are themselves racially biased. Indeed, a recent ProPublica investigation argued that the risk tool used in one Florida county was discriminatory (Angwin et al. 2016). This widely read article helped further elevate concerns about fairness within the policy and research communities alike, with subsequent work showing that the trade-offs are more subtle than was initially apparent.
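To make the word-association finding mentioned above concrete, the sketch below shows one simple way such an association can be measured: comparing cosine similarities between word vectors in a pretrained embedding. This is only an illustration, not the formal word-embedding association test used by Caliskan, Bryson, and Narayanan (2017); the choice of the gensim library and the particular GloVe vector set are assumptions made here for the example.

```python
# Illustrative sketch (not the cited study's exact method): probing a gender
# association in pretrained word embeddings via cosine similarity.
# Assumes the gensim library and its downloadable GloVe vectors are available.
import gensim.downloader as api

# Load 50-dimensional GloVe vectors trained on Wikipedia + Gigaword.
vectors = api.load("glove-wiki-gigaword-50")

# Compare how closely "nurse" sits to "she" versus "he" in the vector space.
for pronoun in ("she", "he"):
    sim = vectors.similarity("nurse", pronoun)
    print(f"similarity(nurse, {pronoun}) = {sim:.3f}")
```

If the similarity to "she" exceeds the similarity to "he," the embedding has absorbed the kind of gendered association described in the text, reflecting patterns in the corpus it was trained on.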