
Methods for explainability of deep-learning models


Abstract

Embodiments are disclosed for health assessment and diagnosis implemented in an artificial intelligence (AI) system. In an embodiment, a method comprises: feeding a first set of input features to the AI model; obtaining a first set of raw output predictions from the model; determining a first set of impact scores for the input features fed into the model; training a neural network with the first set of impact scores as input to the network and pre-determined sentences describing the model's behavior as output; feeding a second set of input features to the AI model; obtaining a second set of raw output predictions from the model; determining a second set of impact scores based on the second set of output predictions; feeding the second set of impact scores to the neural network; and generating a sentence describing the AI model's behavior on the second set of input features.
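The claimed pipeline can be sketched in a few dozen lines. Everything below is illustrative, not the patented implementation: the "AI model" is a toy linear scorer, the impact scores are computed by leave-one-out zeroing (the abstract does not specify an attribution method), and the "neural network" is a minimal softmax classifier mapping impact-score vectors to a fixed set of template sentences. All names (`ai_model`, `impact_scores`, `explain`, `SENTENCES`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the AI model: a fixed linear scorer over 3 features.
W_MODEL = np.array([2.0, -1.0, 0.5])

def ai_model(x):
    """Raw output prediction of the (toy) AI model."""
    return float(W_MODEL @ x)

def impact_scores(x):
    """One simple notion of per-feature impact: the change in the raw
    prediction when that feature is zeroed out (leave-one-out perturbation)."""
    base = ai_model(x)
    out = np.empty_like(x)
    for i in range(len(x)):
        xz = x.copy()
        xz[i] = 0.0
        out[i] = base - ai_model(xz)
    return out

# Pre-determined sentences describing the model's behavior, one per feature.
SENTENCES = [
    "The model's prediction is driven mainly by feature 0.",
    "The model's prediction is driven mainly by feature 1.",
    "The model's prediction is driven mainly by feature 2.",
]

# First pass: feed inputs to the model, compute impact scores, and pair each
# score vector with the sentence for its highest-impact feature.
X_train = rng.uniform(-1.0, 1.0, size=(500, 3))
S_train = np.array([impact_scores(x) for x in X_train])
y_train = S_train.argmax(axis=1)  # sentence index = feature with largest impact

def train_explainer(S, y, n_classes=3, lr=0.5, epochs=500):
    """Tiny softmax 'neural network': impact scores in, sentence index out."""
    n, d = S.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = S @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        grad = (p - onehot) / n  # cross-entropy gradient w.r.t. logits
        W -= lr * S.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

W_EXP, B_EXP = train_explainer(S_train, y_train)

def explain(x):
    """Second pass: score a new input and generate a describing sentence."""
    s = impact_scores(np.asarray(x, dtype=float))
    return SENTENCES[int(np.argmax(s @ W_EXP + B_EXP))]
```

On a new input dominated by feature 0 (e.g. `explain([1.0, 0.05, 0.05])`), the explainer returns the first template sentence. A real system would replace the leave-one-out scores with a proper attribution method and the softmax layer with a sequence model producing free-form sentences.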
