Explaining Deep Neural Network using Layer-wise Relevance Propagation and Integrated Gradients

IEEE World Symposium on Applied Machine Intelligence and Informatics


Abstract

Machine learning has become an integral part of today's technology. The field of artificial intelligence is the subject of research by a wide scientific community. In particular, through improved methodology, the availability of big data, and increased computing power, today's machine learning algorithms can achieve excellent performance that sometimes even exceeds the human level. However, due to their nested nonlinear structure, these models are generally considered to be “black boxes” that do not provide any information about what exactly leads them to a specific output. This has raised the need to interpret these algorithms and understand how they work, since they are applied even in areas where errors can cause critical damage. This article describes the Integrated Gradients [1] and Layer-wise Relevance Propagation [2] methods and presents individual experiments with them. In the experiments we used well-known datasets such as MNIST [3], the MNIST-Fashion dataset [4], and Imagenette and Imagewoof, which are subsets of ImageNet [5].
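The abstract only names the two attribution methods, so the following is a minimal sketch of Integrated Gradients as defined in [1], assuming a PyTorch classifier `model` that maps a batched input to class scores; the function name, the zero baseline, and the step count are illustrative choices, not taken from the paper.

```python
# Minimal sketch of Integrated Gradients, assuming a PyTorch classifier `model`
# whose output has shape (batch, num_classes). Baseline and step count are assumptions.
import torch

def integrated_gradients(model, x, target_class, baseline=None, steps=50):
    """Approximate IG_i(x) = (x_i - x'_i) * integral_0^1 dF/dx_i(x' + a(x - x')) da
    with a Riemann sum over `steps` points on the straight-line path from the
    baseline x' to the input x."""
    if baseline is None:
        baseline = torch.zeros_like(x)              # all-zero (black) reference input
    alphas = torch.linspace(1.0 / steps, 1.0, steps)
    total_grad = torch.zeros_like(x)
    model.eval()
    for a in alphas:
        point = (baseline + a * (x - baseline)).requires_grad_(True)
        score = model(point)[0, target_class]       # scalar score of the explained class
        total_grad += torch.autograd.grad(score, point)[0]
    return (x - baseline) * (total_grad / steps)    # per-dimension attribution
```

On MNIST-sized inputs the returned attribution tensor can simply be reshaped to the image grid and visualized as a heatmap.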
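Likewise, a compact sketch of Layer-wise Relevance Propagation with the epsilon rule from [2], written here for a plain fully-connected ReLU network given as weight matrices and biases; the epsilon value, the handling of biases, and all variable names are simplifying assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of LRP with the epsilon rule for a dense ReLU network.
# `weights[k]` has shape (d_k, d_{k+1}); `activations` holds the input and every
# layer's post-ReLU output from a forward pass; shapes and eps are assumptions.
import numpy as np

def lrp_epsilon(weights, biases, activations, relevance_out, eps=1e-6):
    """Propagate output relevance back to the input:
    R_j = sum_k a_j * w_jk / (z_k + eps * sign(z_k)) * R_k, with z_k = sum_j a_j w_jk + b_k."""
    R = relevance_out                                  # e.g. one-hot * predicted class score
    for W, b, a in zip(reversed(weights), reversed(biases), reversed(activations[:-1])):
        z = a @ W + b
        z = z + eps * np.where(z >= 0, 1.0, -1.0)      # stabilizer keeps divisions finite
        s = R / z                                      # relevance per unit of evidence
        R = a * (s @ W.T)                              # redistribute to the layer below
    return R                                           # input-level relevance scores
```

As with Integrated Gradients, the input-level relevance vector is typically reshaped to the image and inspected as a relevance heatmap.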
