IEEE International Conference on Machine Learning and Applications

Enhancing Decision Tree Based Interpretation of Deep Neural Networks through L1-Orthogonal Regularization



Abstract

One obstacle that has so far prevented the adoption of machine learning models, particularly in critical areas, is their lack of explainability. In this work, a practicable approach to gaining explainability of deep artificial neural networks (NN) using an interpretable surrogate model based on decision trees is presented. Simply fitting a decision tree to a trained NN usually leads to unsatisfactory results in terms of accuracy and fidelity. Using L1-orthogonal regularization during training, however, preserves the accuracy of the NN while allowing it to be closely approximated by small decision trees. Tests with different data sets confirm that L1-orthogonal regularization yields models of lower complexity and, at the same time, higher fidelity compared to other regularizers.
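The two quantities the abstract relies on can be sketched in a few lines. Below is a minimal, hypothetical illustration (not the paper's actual implementation): one plausible per-layer form of an L1-orthogonality penalty, namely the elementwise L1 norm of W Wᵀ − I, which vanishes when the rows of the weight matrix W are orthonormal, and a fidelity score measuring how often a decision-tree surrogate reproduces the NN's predictions. The function names, the penalty coefficient `beta`, and the exact choice of rows vs. columns are assumptions for illustration.

```python
import numpy as np

def l1_orthogonal_penalty(W, beta=1e-3):
    # Hypothetical sketch of a per-layer regularization term: the elementwise
    # L1 norm of (W @ W.T - I), added to the training loss with weight beta.
    # It is zero exactly when the rows of W are orthonormal; the paper's
    # precise formulation may differ (e.g. columns instead of rows).
    gram = W @ W.T
    return beta * np.abs(gram - np.eye(W.shape[0])).sum()

def surrogate_fidelity(nn_labels, tree_labels):
    # Fidelity of the decision-tree surrogate: the fraction of inputs on
    # which the tree reproduces the NN's predicted class labels.
    nn_labels = np.asarray(nn_labels)
    tree_labels = np.asarray(tree_labels)
    return float(np.mean(nn_labels == tree_labels))

# A weight matrix with orthonormal rows incurs (near-)zero penalty,
# while an arbitrary dense matrix does not.
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(8, 8)))
assert l1_orthogonal_penalty(Q[:4]) < 1e-9
assert l1_orthogonal_penalty(np.ones((4, 8))) > 0.0
```

In practice the penalty would be summed over all layers and added to the task loss during training; after training, a small decision tree is fit to the NN's outputs and evaluated by the fidelity score above.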
