International Conference on Artificial Intelligence in HCI; International Conference on Human-Computer Interaction

Transparency and Trust in Human-AI-Interaction: The Role of Model-Agnostic Explanations in Computer Vision-Based Decision Support


Abstract

Computer Vision, and hence Artificial Intelligence-based extraction of information from images, has received increasing attention in recent years, for instance in medical diagnostics. While the complexity of the underlying algorithms is one reason for their improved performance, it also leads to the 'black box' problem and consequently decreases trust in AI. In this regard, "Explainable Artificial Intelligence" (XAI) makes it possible to open that black box and to increase the transparency of AI. In this paper, we first discuss the theoretical impact of explainability on trust in AI and then showcase what the use of XAI in a health-related setting can look like. More specifically, we show how XAI can be applied to understand why Computer Vision, based on deep learning, did or did not detect a disease (malaria) in image data (thin blood smear slide images). Furthermore, we investigate how XAI can be used to compare the detection strategies of two deep learning models commonly used for Computer Vision: the Convolutional Neural Network and the Multi-Layer Perceptron. Our empirical results show that (i) the AI sometimes used questionable or irrelevant image features to detect malaria (even when the prediction was correct), and (ii) there may be significant discrepancies in how different deep learning models explain the same prediction. Our theoretical discussion highlights that XAI can support trust in Computer Vision systems, and in AI systems in general, especially through increased understandability and predictability.
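To illustrate what such a model-agnostic analysis can look like in practice, the sketch below applies the LIME image explainer to two hypothetical Keras models, a CNN and an MLP, assumed to be trained on thin blood smear patches. The model files, input scaling, and two-class softmax outputs are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch: model-agnostic explanations (LIME) for two image classifiers.
# All file names, input conventions, and hyperparameters are assumptions.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries
from tensorflow import keras

# Hypothetical pre-trained models: a CNN and an MLP trained on the same
# thin blood smear patches, both outputting two-class probabilities
# (parasitized vs. uninfected).
cnn = keras.models.load_model("malaria_cnn.h5")
mlp = keras.models.load_model("malaria_mlp.h5")

def predict_cnn(images: np.ndarray) -> np.ndarray:
    # LIME passes a batch of perturbed patches (n, H, W, 3) in [0, 1];
    # the CNN is assumed to accept that range directly.
    return cnn.predict(images)

def predict_mlp(images: np.ndarray) -> np.ndarray:
    # The MLP sees flattened pixel vectors, so reshape before predicting.
    return mlp.predict(images.reshape(len(images), -1))

def explain(image: np.ndarray, predict_fn) -> np.ndarray:
    # Highlight the superpixels that most support the predicted class
    # for a single (H, W, 3) patch with values in [0, 1].
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image, predict_fn, top_labels=1, hide_color=0, num_samples=1000)
    label = explanation.top_labels[0]
    img, mask = explanation.get_image_and_mask(
        label, positive_only=True, num_features=5, hide_rest=False)
    return mark_boundaries(img, mask)

# Usage (patch loading omitted): comparing the two overlays shows whether the
# CNN and the MLP rely on the same image regions, e.g. the parasite itself or
# irrelevant background, mirroring the paper's two empirical observations.
# overlay_cnn = explain(smear_patch, predict_cnn)
# overlay_mlp = explain(smear_patch, predict_mlp)
```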
