Procedia Computer Science

Human Knowledge in Constructing AI Systems — Neural Logic Networks Approach towards an Explainable AI

Abstract

To build easy-to-use AI and ML systems, it is crucial to gain users' trust. Trust comes from understanding the reasoning behind an AI system's conclusions and results. Recent research efforts on Explainable AI (XAI) reflect the importance of explainability in responding to the criticism of "black box" AI. Neural Logic Networks (NLN) is a line of research that embeds logic reasoning (binary or fuzzy) into connectionist models while taking humans' domain knowledge into consideration. The reasoning carried out on such network structures admits interpretations beyond binary logic. This article intends to discuss the potential contribution of the NLN approach to making reasoning more explainable.
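As an illustration of what "interpretation beyond binary logic" can mean in practice, the sketch below shows a neural-logic-style neuron with three-valued (true/false/unknown) outputs. The ordered-pair value encoding, the activation rule, and the AND/OR weight pairs are assumptions drawn from the general NLN literature rather than details stated in this article, so treat it as a minimal illustrative example.

# Minimal sketch of a neural-logic-style neuron (illustrative, not the paper's code).
# Assumption: a node's value is an ordered pair (a, b) encoding true (1, 0),
# false (0, 1), or unknown (0, 0); each edge carries a weight pair (alpha, beta).

TRUE, FALSE, UNKNOWN = (1, 0), (0, 1), (0, 0)

def neural_logic_neuron(inputs, weights):
    """Aggregate ordered-pair inputs with ordered-pair weights.

    net = sum(a_i * alpha_i - b_i * beta_i); the neuron outputs true when
    net >= 1, false when net <= -1, and stays unknown in between.
    """
    net = sum(a * alpha - b * beta
              for (a, b), (alpha, beta) in zip(inputs, weights))
    if net >= 1:
        return TRUE
    if net <= -1:
        return FALSE
    return UNKNOWN

# Weight pairs chosen so the logical rule is readable directly from the network:
# (1/2, 2) on every edge realises AND, (2, 1/2) realises OR (Kleene-style).
AND_WEIGHTS = [(0.5, 2), (0.5, 2)]
OR_WEIGHTS = [(2, 0.5), (2, 0.5)]

print(neural_logic_neuron([TRUE, UNKNOWN], AND_WEIGHTS))  # (0, 0): unknown
print(neural_logic_neuron([TRUE, UNKNOWN], OR_WEIGHTS))   # (1, 0): true

Because the weight pairs themselves encode the rule, the network's behaviour can be read off directly from its structure, which is the kind of built-in explainability the abstract points to.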
