Back to the Feature: A Neural-Symbolic Perspective on Explainable AI

Abstract

We discuss a perspective aimed at making black box models more explainable, within the explainable AI (XAI) strand of research. We argue that the traditional end-to-end learning approach used to train Deep Learning (DL) models does not fit the tenets and aims of XAI. Going back to the idea of hand-crafted feature engineering, we suggest a hybrid DL approach to XAI: instead of employing end-to-end learning, we suggest using DL for the automatic detection of meaningful, hand-crafted, high-level symbolic features, which are then used by a standard, more interpretable learning model. We exemplify this hybrid learning model in a proof of concept based on the recently proposed Kandinsky Patterns benchmark, focusing on the symbolic learning part of the pipeline by using both Logic Tensor Networks and interpretable rule ensembles. After showing that the proposed methodology is able to deliver highly accurate and explainable models, we discuss potential implementation issues and future directions that can be explored.
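
To make the two-stage idea concrete, below is a minimal sketch of such a hybrid pipeline. It is not the authors' implementation: the object detector is a hypothetical stub standing in for the DL perception module, and scikit-learn's DecisionTreeClassifier substitutes for the Logic Tensor Network / rule-ensemble stage described in the abstract.

# Hybrid pipeline sketch (illustrative only): a perception module maps raw
# images to symbolic object descriptions, and an interpretable learner is
# trained on those symbols instead of raw pixels.

from dataclasses import dataclass
from typing import List
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

@dataclass
class DetectedObject:
    shape: str   # e.g. "circle", "square", "triangle" (Kandinsky-style figures)
    color: str   # e.g. "red", "blue", "yellow"

def detect_objects(image) -> List[DetectedObject]:
    """Hypothetical perception step: in the paper this role is played by a DL
    detector that extracts high-level symbolic features. Stubbed here."""
    raise NotImplementedError

def symbolize(objects: List[DetectedObject]) -> dict:
    """One possible encoding: turn detected objects into interpretable count features."""
    feats: dict = {}
    for obj in objects:
        feats[f"count_{obj.shape}"] = feats.get(f"count_{obj.shape}", 0) + 1
        feats[f"count_{obj.color}"] = feats.get(f"count_{obj.color}", 0) + 1
    return feats

def train_symbolic_classifier(symbol_dicts: List[dict], labels: List[int]):
    """Fit an interpretable model on symbolic features only (no end-to-end learning)."""
    vec = DictVectorizer(sparse=False)
    X = vec.fit_transform(symbol_dicts)
    clf = DecisionTreeClassifier(max_depth=3)  # shallow tree -> human-readable rules
    clf.fit(X, labels)
    return vec, clf

# Toy usage on hand-written symbolic descriptions (no images needed for this stage):
toy_symbols = [
    {"count_circle": 2, "count_red": 1, "count_blue": 1},
    {"count_square": 3, "count_yellow": 3},
]
toy_labels = [1, 0]
vec, clf = train_symbolic_classifier(toy_symbols, toy_labels)

A shallow tree fitted on such count features can be read off directly as if-then rules over the detected symbols, which is the kind of explainability the hybrid approach targets; the paper's proof of concept pursues the same goal with Logic Tensor Networks and rule ensembles.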
