Journal: Künstliche Intelligenz

One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency



Abstract

The need for transparency of predictive systems based on Machine Learning algorithms arises as a consequence of their ever-increasing proliferation in industry. Whenever black-box algorithmic predictions influence human affairs, the inner workings of these algorithms should be scrutinised and their decisions explained to the relevant stakeholders, including the system engineers, the system's operators and the individuals whose cases are being decided. While a variety of interpretability and explainability methods are available, none of them is a panacea that can satisfy all the diverse expectations and competing objectives that might be required by the parties involved. We address this challenge in this paper by discussing the promises of Interactive Machine Learning for improved transparency of black-box systems, using the example of contrastive explanations, a state-of-the-art approach to Interpretable Machine Learning. Specifically, we show how to personalise counterfactual explanations by interactively adjusting their conditional statements, and how to extract additional explanations by asking follow-up "What if?" questions. Our experience in building, deploying and presenting this type of system allowed us to list desired properties as well as potential limitations, which can be used to guide the development of interactive explainers. While customising the medium of interaction, i.e. the user interface comprising various communication channels, may give an impression of personalisation, we argue that adjusting the explanation itself and its content is more important. To this end, properties such as breadth, scope, context, purpose and target of the explanation have to be considered, in addition to explicitly informing the explainee about its limitations and caveats. Furthermore, we discuss the challenges of mirroring the explainee's mental model, which is the main building block of intelligible human-machine interactions.
We also deliberate on the risks of allowing the explainee to freely manipulate the explanations and thereby extract information about the underlying predictive model, which might be leveraged by malicious actors to steal or game the model. Finally, building an end-to-end interactive explainability system is a challenging engineering task; unless the main goal is its deployment, we recommend "Wizard of Oz" studies as a proxy for testing and evaluating standalone interactive explainability algorithms.
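The interaction pattern the abstract describes can be illustrated with a minimal sketch. The model, feature names and search procedure below are entirely hypothetical (not taken from the paper): the explainee restricts a counterfactual search to a feature they consider actionable (the "conditional statement" they adjust), and poses a follow-up "What if?" query by re-evaluating the model on a hypothetical input.

```python
def predict(applicant):
    """Toy black-box loan model (hypothetical): approve when a score clears a threshold."""
    score = 0.5 * applicant["income"] / 1000 + 2.0 * applicant["years_employed"]
    return "approved" if score >= 40 else "rejected"

def counterfactual(applicant, feature, step, max_steps=100):
    """Smallest increase of the user-chosen `feature` (in units of `step`)
    that flips the model's prediction, or None within the search budget."""
    original = predict(applicant)
    candidate = dict(applicant)
    for i in range(1, max_steps + 1):
        candidate[feature] = applicant[feature] + i * step
        if predict(candidate) != original:
            return candidate
    return None

applicant = {"income": 30000, "years_employed": 5}
print(predict(applicant))  # the contested decision: "rejected"

# The explainee personalises the explanation by choosing which conditional to vary:
print(counterfactual(applicant, "income", 1000))  # income of 60000 flips the outcome

# Follow-up "What if?" question: query the model on a hypothetical input.
print(predict({"income": 30000, "years_employed": 16}))  # "approved"
```

Note that the same loop also hints at the risk discussed below: repeated free-form queries of this kind progressively reveal the model's decision boundary, which is what a malicious actor would exploit to steal or game it.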
