Explainable AI: Using Shapley Value to Explain Complex Anomaly Detection ML-Based Systems

International Conference on Machine Learning and Artificial Intelligence

Abstract

Generally, Artificial Intelligence (AI) algorithms are unable to account for the logic of each decision they take in the course of arriving at a solution. This "black box" problem limits the usefulness of AI in military, medical, and financial security applications, among others, where the cost of a mistake is high and the decision-maker must be able to monitor and understand each step of the process. In our research, we focus on the application of Explainable AI to log anomaly detection systems of different kinds. In particular, we use the Shapley value approach from cooperative game theory to explain the outcome, or solution, of two anomaly detection algorithms: Decision tree and DeepLog. Both algorithms come from Loglizer, a machine learning-based log analysis toolkit for automated anomaly detection. The novelty of our research is that, by using the Shapley value and special coding techniques, we managed to evaluate, or explain, the contribution of both a single log event and a grouped sequence of log events for the purposes of anomaly detection. We explain how each event and each sequence of events influences the solution, or result, of an anomaly detection system.

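To make the attribution idea concrete, below is a minimal, self-contained sketch of exact Shapley value computation over log-event features. It is not the authors' implementation: the anomaly scorer, the event types (E1, E2, E3), and the zero-count masking baseline are hypothetical stand-ins chosen for illustration. Each event-type feature is treated as a player in a cooperative game whose value v(S) is the model's anomaly score when every feature outside the coalition S is masked to the baseline; the Shapley value of feature i is then phi_i = sum over S ⊆ N\{i} of |S|! (n − |S| − 1)! / n! · (v(S ∪ {i}) − v(S)).

```python
# A minimal sketch of exact Shapley value attribution for log-event
# features. This is NOT the paper's code: the scoring function, event
# types, and masking baseline are hypothetical stand-ins.
from itertools import combinations
from math import factorial

def shapley_values(score, x, baseline):
    """Exact Shapley values of the features of x.

    score    -- maps a feature vector to an anomaly score (the "game")
    x        -- feature vector to explain, e.g. per-session event counts
    baseline -- reference values used for features outside a coalition
    """
    n = len(x)

    def v(coalition):
        # Value of a coalition: anomaly score with all features outside
        # the coalition masked to their baseline ("event absent") values.
        masked = [x[k] if k in coalition else baseline[k] for k in range(n)]
        return score(masked)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                S = set(S)
                # Shapley weight |S|!(n-|S|-1)!/n! times the marginal
                # contribution of feature i when joining coalition S.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (v(S | {i}) - v(S))
    return phi

if __name__ == "__main__":
    # Hypothetical toy detector: a session is anomalous when event E2
    # occurs more than five times.
    def toy_score(counts):
        return 1.0 if counts[1] > 5 else counts[1] / 10.0

    x = [2, 8, 1]          # observed counts of events E1, E2, E3
    baseline = [0, 0, 0]   # "event absent" reference
    print(shapley_values(toy_score, x, baseline))  # -> [0.0, 1.0, 0.0]
```

By the efficiency property, the attributions sum to v(N) − v(∅), so the full anomaly score is distributed over the events; in this toy run all of it lands on the E2 count, matching the detector's rule. Exact enumeration is exponential in the number of features, so practical systems rely on sampling-based approximations. How single events and grouped sequences of events are encoded as players is part of the paper's "special coding techniques" and is not reproduced here.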