Knowledge Technology & Policy

Against Interpretability: a Critical Examination of the Interpretability Problem in Machine Learning



Abstract

The usefulness of machine learning algorithms has led to their widespread adoption prior to the development of a conceptual framework for making sense of them. One common response to this situation is to say that machine learning suffers from a "black box problem." That is, machine learning algorithms are "opaque" to human users, failing to be "interpretable" or "explicable" in terms that would render categorization procedures "understandable." The purpose of this paper is to challenge the widespread agreement about the existence and importance of a black box problem. The first section argues that "interpretability" and its cognates lack precise meanings when applied to algorithms. This makes the concepts difficult to use when trying to solve the problems that have motivated the call for interpretability (etc.). Furthermore, since there is no adequate account of the concepts themselves, it is not possible to assess whether particular technical features supply formal definitions of those concepts. The second section argues that there are ways of being a responsible user of these algorithms that do not require interpretability (etc.). In many cases in which a black box problem is cited, interpretability is a means to a further end such as justification or non-discrimination. Since addressing these problems need not involve something that looks like an "interpretation" (etc.) of an algorithm, the focus on interpretability artificially constrains the solution space by characterizing one possible solution as the problem itself. Where possible, discussion should be reformulated in terms of the ends of interpretability.
