Say you apply for home insurance and get turned down. You ask why, and the company explains its reasoning: Your neighborhood is at high risk for flooding, or your credit is dodgy. Fair enough.

Now imagine you apply to a firm that uses a machine-learning system, instead of a human with an actuarial table, to predict insurance risk. After crunching your info (age, job, house location and value), the machine decides, nope, no policy for you. You ask the same question: "Why?" Now things get weird. Nobody can answer, because nobody understands how these systems, neural networks modeled on the human brain, produce their results. Computer scientists "train" each one by feeding it data, and it gradually learns. But once a neural net is working well, it's a black box. Ask its creator how it achieves a certain result and you'll likely get a shrug.

The opacity of machine learning isn't just an academic problem. More and more places use the technology for everything from image recognition to medical diagnoses. All that decision-making is, by definition, unknowable, and that makes people uneasy. My friend Zeynep Tufekci, a sociologist, warns about "Moore's law plus inscrutability." Microsoft CEO Satya Nadella says we need "algorithmic accountability." All that is behind the fight to make machine learning more comprehensible.