Towards Complementary Explanations Using Deep Neural Networks

Abstract

Interpretability is a fundamental property for the acceptance of machine learning models in highly regulated domains. Recently, deep neural networks have gained the attention of the scientific community due to their high accuracy on a wide range of classification problems. However, they are still seen as black-box models in which it is hard to understand the reasons behind the labels they generate. This paper proposes a deep model with monotonic constraints that generates complementary explanations for its decisions, both in style and in depth. Furthermore, an objective framework for evaluating these explanations is presented. Our method is tested on two biomedical datasets and shows an improvement over traditional models in the quality of the generated explanations.
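
The abstract states only that the deep model carries monotonic constraints, without detailing how they are imposed. One standard way to enforce monotonicity (not necessarily the authors' construction) is to reparameterize each layer's weights to be non-negative and compose the layers with monotone activations. The sketch below illustrates this under that assumption, in PyTorch; the names MonotonicLinear and MonotonicNet and all hyperparameters are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicLinear(nn.Module):
    """Linear layer whose effective weights are kept positive, so its
    output is non-decreasing in every input feature (illustrative)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.raw_weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # softplus maps the unconstrained parameters to strictly positive weights
        return F.linear(x, F.softplus(self.raw_weight), self.bias)

class MonotonicNet(nn.Module):
    """Stack of positive-weight layers with a monotone activation; the
    composition is monotone non-decreasing in each input."""
    def __init__(self, in_features, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            MonotonicLinear(in_features, hidden),
            nn.Tanh(),  # tanh is monotone increasing, so monotonicity is preserved
            MonotonicLinear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

model = MonotonicNet(in_features=4)
x = torch.rand(8, 4)
print(model(x).shape)  # torch.Size([8, 1])
```

Because every effective weight is positive and tanh is increasing, the network output is guaranteed to be non-decreasing in each input feature, which is the property a monotonicity constraint is meant to provide.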
