Pacific Rim International Conference on Artificial Intelligence

Explaining Deep Learning Models with Constrained Adversarial Examples


Abstract

Machine learning algorithms generally suffer from a problem of explainability. Given a classification result from a model, it is typically hard to determine what caused the decision and to give an informative explanation. We explore a new method of generating counterfactual explanations, which, instead of explaining why a particular classification was made, explains how a different outcome can be achieved. This gives the recipients of the explanation a better way to understand the outcome and provides an actionable suggestion. We show that the introduced method of Constrained Adversarial Examples (CADEX) can be used in real-world applications and yields explanations that incorporate business or domain constraints, such as handling categorical attributes and range constraints.
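The abstract does not spell out the mechanics, but the general recipe behind such counterfactual searches can be sketched as a gradient-based perturbation of the input toward a desired class, with a projection step that keeps every feature inside its allowed range. The PyTorch sketch below is illustrative only: `model`, `lower`, `upper`, the optimizer settings, and the stopping rule are assumptions rather than the authors' CADEX implementation, and the paper's additional handling of one-hot categorical attributes is omitted.

```python
# Hedged sketch: gradient-based search for a counterfactual under per-feature
# range constraints, in the spirit of constrained adversarial examples.
# Not the CADEX reference implementation; names and settings are placeholders.
import torch


def counterfactual(model, x, target_class, lower, upper, steps=200, lr=0.05):
    """Perturb feature vector x until `model` predicts `target_class`,
    while clamping every feature into [lower, upper]."""
    x_cf = x.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x_cf], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    target = torch.tensor([target_class])

    for _ in range(steps):
        logits = model(x_cf.unsqueeze(0))
        if logits.argmax(dim=1).item() == target_class:
            break                                # counterfactual reached
        loss = loss_fn(logits, target)           # pull prediction toward target
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            x_cf.clamp_(min=lower, max=upper)    # enforce range constraints

    return x_cf.detach()
```

The returned vector, compared feature by feature with the original input, is what serves as the counterfactual explanation: the changes it prescribes are, by construction, kept within the stated domain constraints and are therefore actionable.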

