International Conference on Knowledge-Based Intelligent Information and Engineering Systems

Human Knowledge in Constructing AI Systems - Neural Logic Networks Approach towards an Explainable AI


Abstract

To build easy-to-use AI and ML systems, gaining users' trust is crucial. Trust comes from understanding the reasoning behind an AI system's conclusions and results. Recent research efforts on Explainable AI (XAI) reflect the importance of explainability in responding to criticism of "black box" AI. Neural Logic Networks (NLN) is a line of research that embeds logical reasoning (binary or fuzzy) into connectionist models while taking human domain knowledge into consideration. The reasoning carried out on such network structures allows interpretation beyond binary logic. This article discusses the potential contribution of the NLN approach to making reasoning more explainable.
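To illustrate how human knowledge can be written directly into such a network, the sketch below implements a single three-valued neural logic neuron. It assumes the ordered-pair formulation commonly used in NLN work (each signal is a pair (a, b), with (1, 0) = true, (0, 1) = false, (0, 0) = unknown); the specific weights and thresholds are illustrative choices, not taken from the paper.

# A minimal sketch of a three-valued neural logic neuron (assumed
# ordered-pair NLN formulation; weights/thresholds are illustrative).
from typing import List, Tuple

Pair = Tuple[float, float]  # (a, b): (1,0)=true, (0,1)=false, (0,0)=unknown

def nln_neuron(inputs: List[Pair], weights: List[Pair]) -> Pair:
    # Net input: positive evidence minus negative evidence.
    net = sum(a * alpha - b * beta
              for (a, b), (alpha, beta) in zip(inputs, weights))
    if net >= 1:
        return (1.0, 0.0)   # true
    if net <= -1:
        return (0.0, 1.0)   # false
    return (0.0, 0.0)       # unknown

# Human domain knowledge can be encoded straight into the weights rather
# than learned as an opaque mapping. With weights (0.5, 2.0) on each
# input, the neuron behaves as three-valued conjunction (Kleene AND):
AND = [(0.5, 2.0), (0.5, 2.0)]
TRUE, FALSE, UNKNOWN = (1.0, 0.0), (0.0, 1.0), (0.0, 0.0)
print(nln_neuron([TRUE, TRUE], AND))     # (1.0, 0.0) -> true
print(nln_neuron([TRUE, FALSE], AND))    # (0.0, 1.0) -> false
print(nln_neuron([TRUE, UNKNOWN], AND))  # (0.0, 0.0) -> unknown

Because each weight pair corresponds to a readable logic rule, the reasoning path through such a network can be traced and explained, which is the property the abstract highlights as a route toward explainable AI.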