International Joint Conference on Neural Networks

Towards Best Practice in Explaining Neural Network Decisions with LRP



Abstract

Within the last decade, neural network based predictors have demonstrated impressive — and at times superhuman — capabilities. This performance is often paid for with an intransparent prediction process and thus has sparked numerous contributions in the novel field of explainable artificial intelligence (XAI). In this paper, we focus on a popular and widely used method of XAI, the Layer-wise Relevance Propagation (LRP). Since its initial proposition LRP has evolved as a method, and a best practice for applying the method has tacitly emerged, based however on humanly observed evidence alone. In this paper we investigate — and for the first time quantify — the effect of this current best practice on feedforward neural networks in a visual object detection setting. The results verify that the layer-dependent approach to LRP applied in recent literature better represents the model’s reasoning, and at the same time increases the object localization and class discriminativity of LRP.
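For reference on the method discussed in the abstract, below is a minimal NumPy sketch of a single LRP propagation step, here the LRP-ε rule for a dense layer. The function name, array shapes, and the eps default are illustrative assumptions rather than the authors' implementation; the layer-dependent best practice examined in the paper combines several such rules (e.g., LRP-ε for upper dense layers and rules such as LRP-αβ or a flat rule for lower convolutional and input layers) instead of applying one rule uniformly.

```python
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    """Hypothetical sketch: redistribute the output relevance R_out of one
    dense layer back onto its inputs using the LRP-epsilon rule.

    a     : (n_in,)        input activations of the layer
    W     : (n_in, n_out)  weight matrix
    b     : (n_out,)       bias vector
    R_out : (n_out,)       relevance assigned to the layer outputs
    """
    z = a @ W + b                              # pre-activations z_j
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # epsilon stabilizer on the denominator
    s = R_out / z                              # per-output relevance messages
    return a * (W @ s)                         # R_i = a_i * sum_j w_ij * s_j
```

In a composite ("layer-dependent") configuration, the backward pass would dispatch on layer type or depth and apply a different rule to each layer group; the abstract's conclusion refers to this composite strategy.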
