International Conference for Phoenixes on Emerging Current Trends in Engineering and Management

Demystifying Black Box Models with Neural Networks for Accuracy and Interpretability of Supervised Learning


Abstract

Intensive data modelling on large datasets, once limited to supercomputers and workstations, can now be performed on desktop computers with scripting languages such as R and Python. Analytics, a field whose popularity rests on this access to high computational capability, enables people to try out different mathematical algorithms and derive highly precise values simply by calling pre-written libraries. In the case of black-box models such as Neural Networks and Support Vector Machines, however, this precision comes at the cost of interpretability. The importance of interpretability is felt most keenly when building classification models, where understanding how a Neural Network solves a problem is as important as the precision of its outputs. The Path Break Down Approach proposed in this paper helps demystify the functioning of a Neural Network model in solving a classification and prediction problem based on the San Francisco crime dataset.
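The abstract does not specify how the Path Break Down Approach works internally. As a purely illustrative sketch of the general idea of path-level decomposition (not the paper's method), the toy example below uses a tiny network with linear activations, where the output decomposes exactly into the sum of input-to-hidden-to-output path products; all names and the network shape are hypothetical.

```python
import numpy as np

# Toy 2-3-1 network with linear activations, so the output
# decomposes exactly into input -> hidden -> output path products.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))   # input -> hidden weights
W2 = rng.normal(size=(3, 1))   # hidden -> output weights
x = np.array([0.5, -1.2])      # a single input example

# Forward pass: y = x @ W1 @ W2
output = (x @ W1 @ W2).item()

# Contribution of each path x[i] -> h[j] -> y
paths = {(i, j): x[i] * W1[i, j] * W2[j, 0]
         for i in range(2) for j in range(3)}

# With linear activations, the path contributions sum
# back to the network output exactly.
assert np.isclose(sum(paths.values()), output)

# Rank paths by absolute contribution to see which
# input-hidden routes drive the prediction.
for (i, j), c in sorted(paths.items(), key=lambda kv: -abs(kv[1])):
    print(f"path x[{i}] -> h[{j}] -> y contributes {c:+.4f}")
```

With nonlinear activations the decomposition is no longer exact, which is precisely why dedicated interpretability approaches such as the one proposed in the paper are needed.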
