IEEE Conference on Computer Vision and Pattern Recognition Workshops

Explaining the Unexplained: A CLass-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks

Abstract

In this work, we propose CLass-Enhanced Attentive Response (CLEAR): an approach to visualize and understand the decisions made by deep neural networks (DNNs) given a specific input. CLEAR facilitates the visualization of attentive regions and levels of interest of DNNs during the decision-making process. It also enables the visualization of the most dominant classes associated with these attentive regions of interest. As such, CLEAR can mitigate some of the shortcomings of heatmap-based methods associated with decision ambiguity, and allows for better insights into the decision-making process of DNNs. Quantitative and qualitative experiments across three different datasets demonstrate the efficacy of CLEAR for gaining a better understanding of the inner workings of DNNs during the decision-making process.
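The abstract describes CLEAR at a conceptual level: per-class attentive response maps are combined so that each attentive region is annotated with both its level of interest and its most dominant class. The sketch below is only an illustrative approximation of that combination step, not the authors' implementation: it assumes a PyTorch setup, uses simple input-gradient saliency as a stand-in for the paper's per-class attentive response maps, and both the helper name clear_style_maps and the choice of a torchvision ResNet-18 are hypothetical.

```python
import torch
import torchvision.models as models

# Hypothetical setup (not from the paper): a pretrained torchvision classifier
# stands in for the DNN under study, and gradient-based saliency stands in for
# the paper's individual per-class attentive response maps.
model = models.resnet18(weights="IMAGENET1K_V1").eval()

def clear_style_maps(model, x, top_k=3):
    """Combine per-class response maps for an input x of shape (1, C, H, W).

    Returns:
        dominant_class: (H, W) tensor of class indices, the class whose
            response is strongest at each spatial location.
        attentive_level: (H, W) tensor with the strength of that response.
    """
    x = x.detach().clone().requires_grad_(True)
    with torch.no_grad():
        classes = model(x).topk(top_k, dim=1).indices[0]  # top-k predicted classes

    per_class = []
    for c in classes:
        if x.grad is not None:
            x.grad = None                 # clear gradients from the previous class
        score = model(x)[0, c]            # score of this candidate class
        score.backward()                  # per-pixel sensitivity as a response map
        per_class.append(x.grad[0].abs().sum(dim=0))      # (H, W) map

    stack = torch.stack(per_class)                         # (top_k, H, W)
    dominant_class = classes[stack.argmax(dim=0)]          # argmax over classes per pixel
    attentive_level = stack.max(dim=0).values              # max response per pixel
    return dominant_class, attentive_level
```

In the spirit of the abstract, a rendering step could then map the dominant-class index at each location to a color and the attentive-level value to that color's intensity, yielding a single class-enhanced visualization per input.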
