The Impact of Explanations on AI Competency Prediction in VQA

Abstract

Explainability is one of the key elements for building trust in AI systems. Despite numerous attempts to make AI explainable, quantifying the effect of explanations on human-AI collaborative tasks remains a challenge. Beyond predicting the overall behavior of an AI system, in many applications users need to understand an AI agent's competency in different aspects of the task domain. In this paper, we evaluate the impact of explanations on the user's mental model of AI agent competency within the task of visual question answering (VQA). We quantify users' understanding of competency as the correlation between the actual system performance and the users' rankings. We introduce an explainable VQA system that uses spatial and object features and is powered by the BERT language model. Each group of users sees only one kind of explanation when ranking the competencies of the VQA model. The proposed model is evaluated through between-subjects experiments to probe the explanations' impact on the user's perception of competency. The comparison between the two VQA models shows that BERT-based explanations and the use of object features improve the users' prediction of the model's competencies.
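
The abstract states that users' understanding of competency is quantified as the correlation between actual system performance and user rankings, without specifying the statistic. A minimal sketch of one plausible measurement, assuming Spearman's rank correlation and using purely hypothetical question categories and accuracy values, is shown below.

```python
# Minimal sketch (not from the paper): correlating a user's competency ranking
# with actual per-category VQA accuracy via Spearman's rank correlation.
# All category names and numbers are illustrative placeholders.
from scipy.stats import spearmanr

# Hypothetical model accuracy per question category.
actual_accuracy = {
    "counting": 0.42,
    "color": 0.78,
    "spatial_relations": 0.55,
    "object_recognition": 0.83,
}

# One user's competency ranking per category
# (1 = judged most competent, 4 = judged least competent).
user_ranking = {
    "counting": 4,
    "color": 2,
    "spatial_relations": 3,
    "object_recognition": 1,
}

categories = list(actual_accuracy)
# Higher accuracy should correspond to a numerically smaller (better) rank,
# so negate the ranks before correlating.
rho, p_value = spearmanr(
    [actual_accuracy[c] for c in categories],
    [-user_ranking[c] for c in categories],
)
print(f"Spearman correlation between user ranking and accuracy: {rho:.2f} (p={p_value:.2f})")
```

A higher correlation would indicate that the user's mental model of the system's per-category competency more closely tracks its actual performance.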
