IEEE Conference on Computer Vision and Pattern Recognition

Network Dissection: Quantifying Interpretability of Deep Visual Representations



Abstract

We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a data set of concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are labeled across a broad range of visual concepts including objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that interpretability is an axis-independent property of the representation space, then we apply the method to compare the latent representations of various networks when trained to solve different classification problems. We further analyze the effect of training iterations, compare networks trained with different initializations, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.
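The scoring step described in the abstract is commonly realized as an intersection-over-union (IoU) measure between a unit's binarized activation map and pixel-level concept annotations, with the binarization threshold chosen from a top quantile of the unit's activations over the data set. The sketch below illustrates that idea; the function name, array shapes, and exact quantile are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def unit_concept_iou(activations, concept_masks, quantile=0.995):
    """Score one unit's alignment with each concept via IoU (sketch).

    activations: (N, H, W) activation maps of a single unit over N
                 images, upsampled to the annotation resolution.
    concept_masks: dict mapping concept name -> (N, H, W) boolean
                   ground-truth segmentation masks.
    The threshold is picked so that only the top (1 - quantile)
    fraction of activations over the whole data set fire (an
    illustrative choice, not necessarily the paper's exact setup).
    """
    threshold = np.quantile(activations.ravel(), quantile)
    unit_mask = activations > threshold  # binarized activation map
    scores = {}
    for concept, mask in concept_masks.items():
        inter = np.logical_and(unit_mask, mask).sum()
        union = np.logical_or(unit_mask, mask).sum()
        scores[concept] = float(inter) / union if union else 0.0
    return scores
```

A unit would then be labeled with the concept achieving the highest IoU, provided the score clears some minimum threshold; repeating this per unit and per layer yields the interpretability profile of the whole network.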


