Interpretable Deep Learning based Risk Evaluation Approach

Abstract

Modeling has two goals: interpretation, which extracts information about how the response variables are associated with the input variables, and prediction, which forecasts what the responses will be. The dilemma is that interpretable algorithms such as linear regression or logistic regression are often not accurate at prediction, while more complex algorithms predict better but are not easy to interpret. Risk can take the form of cyber security risk, credit risk, investment risk, operational risk, etc. In this paper, we propose an interpretable method for evaluating risk using deep learning.
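The abstract does not give implementation details of the proposed method. As a rough illustration of the interpretation-versus-prediction trade-off it describes, the sketch below (Python/scikit-learn) contrasts a logistic regression, whose coefficients directly describe how each input shifts the log-odds of a risk event, with a small neural network that is typically more flexible but harder to interpret. The synthetic dataset, model sizes, and hyperparameters are all illustrative assumptions, not taken from the paper.

```python
# A minimal sketch (not the paper's method) of the interpretation vs.
# prediction dilemma on synthetic binary risk labels. All data and model
# choices below are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Synthetic "risk" data: 1000 cases, 8 input variables, binary outcome.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Interpretable baseline: each coefficient states how one input variable
# shifts the log-odds of the risk event.
logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("logistic coefficients:", np.round(logit.coef_[0], 3))
print("logistic AUC:", roc_auc_score(y_test, logit.predict_proba(X_test)[:, 1]))

# More flexible model: often predicts better, but its weights do not map
# directly onto feature effects, which is the gap the paper targets.
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                    random_state=0).fit(X_train, y_train)
print("MLP AUC:", roc_auc_score(y_test, mlp.predict_proba(X_test)[:, 1]))
```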