Human-Centered Explainable Artificial Intelligence for Anomaly Detection in Quality Inspection: A Collaborative Approach to Bridge the Gap between Humans and AI

Abstract

In the quality inspection industry, the use of Artificial Intelligence (AI) continues to advance, producing safer and faster autonomous systems that can perceive, learn, decide, and act independently. As the researcher observed while interacting with a local energy company over a one-year period, the performance of these AI systems is limited by the machines' current inability to explain their decisions and actions to human users. In energy companies especially, Explainable AI (XAI) is critical to achieving speed, reliability, and trustworthiness with human inspection workers. Placing humans alongside AI establishes a sense of trust that augments individual capabilities in the workplace. Achieving such a human-centered XAI system requires designing and developing more explainable AI models. Incorporating XAI systems centered on human workers brings a significant shift in how visual inspections are conducted in the inspection industry. Adding this explainability to intelligent AI inspection systems makes the decision-making process more sustainable and trustworthy by fostering a collaborative approach. Currently, there is a lack of trust between inspection workers and AI, creating uncertainty among workers about the use of existing AI models. To address this gap, the purpose of this qualitative research study was to explore and understand the need for human-centered XAI systems to detect anomalies in quality inspection in the energy industry.

Bibliographic details

  • Author

    Vemula, Srikanth.

  • Affiliation

    University of the Incarnate Word.

  • Degree grantor: University of the Incarnate Word
  • Subjects: Artificial intelligence; Computer science
  • Year: 2022
  • Total pages: 113
  • Format: PDF
  • Language: eng
  • Keywords

    Artificial intelligence; Computer science
