
Evaluating CNN Interpretability on Sketch Classification


Abstract

While deep neural networks (DNNs) have been shown to outperform humans on many vision tasks, their opaque decision-making process inhibits widespread uptake, especially in high-risk scenarios. The BagNet architecture was designed to learn visual features that are easier to explain than the feature representations of other convolutional neural networks (CNNs). Previous experiments with BagNet focused on natural images providing rich texture and color information. In this paper, we investigate the performance and interpretability of BagNet on a dataset of human sketches, i.e., a dataset with limited color and no texture information. We also introduce a heatmap interpretability score (HI score) to quantify model interpretability and present a user study to examine BagNet interpretability from the user's perspective. Our results show that BagNet is by far the most interpretable CNN architecture in our experimental setup based on the HI score.
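The abstract's claim about BagNet's explainability rests on its design of aggregating class evidence from small local patches, so the per-patch evidence can be displayed as a heatmap. The following is a minimal illustrative sketch of that aggregation idea, not the paper's code; the function name and the assumption that patch logits are spatially averaged into image-level logits are ours.

```python
import numpy as np

def bagnet_aggregate(patch_logits):
    """Toy BagNet-style aggregation (illustrative assumption).

    patch_logits: array of shape (H, W, C) holding class logits
    produced independently for each local image patch.
    Returns image-level logits and the evidence heatmap for the
    predicted class.
    """
    # Spatially average local evidence into an image-level prediction.
    image_logits = patch_logits.mean(axis=(0, 1))
    # The per-patch logits of the winning class form an interpretable
    # heatmap: each entry shows how much that patch supported the class.
    heatmap = patch_logits[..., image_logits.argmax()]
    return image_logits, heatmap

rng = np.random.default_rng(0)
logits, heat = bagnet_aggregate(rng.normal(size=(7, 7, 10)))
print(logits.shape, heat.shape)  # (10,) (7, 7)
```

Because the heatmap is computed directly from the model's own local logits rather than from a post-hoc attribution method, it is faithful by construction, which is what makes a heatmap-based interpretability score such as the HI score natural for this architecture.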

