
A Comparison Study on Rule Extraction from Neural Network Ensembles, Boosted Shallow Trees, and SVMs


Abstract

One way to make the knowledge stored in an artificial neural network more intelligible is to extract symbolic rules. However, producing rules from Multilayer Perceptrons (MLPs) is an NP-hard problem. Many techniques have been introduced to generate rules from single neural networks, but very few were proposed for ensembles. Moreover, experiments were rarely assessed by 10-fold cross-validation trials. In this work, based on the Discretized Interpretable Multilayer Perceptron (DIMLP), experiments were performed on 10 repetitions of stratified 10-fold cross-validation trials over 25 binary classification problems. The DIMLP architecture allowed us to produce rules from DIMLP ensembles, boosted shallow trees (BSTs), and Support Vector Machines (SVMs). The complexity of rulesets was measured by the average number of generated rules and the average number of antecedents per rule. Of the 25 classification problems, the most complex rulesets were generated from BSTs trained by “gentle boosting” and “real boosting.” Moreover, we clearly observed that the less complex the rules were, the better their fidelity was. In fact, for almost all 25 datasets, the rules generated from decision stumps trained by “modest boosting” were the simplest and had the highest fidelity. Finally, in terms of average predictive accuracy and average ruleset complexity, our results proved competitive with those reported in the literature.
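
The evaluation protocol described above (10 repetitions of stratified 10-fold cross-validation on binary classification problems) can be sketched as follows. This is only an illustrative setup using scikit-learn, with AdaBoost over decision stumps as a stand-in ensemble on one public binary dataset; it does not reproduce the paper's DIMLP ensembles or its modest/gentle/real boosting variants.

```python
# Minimal sketch of a 10x10-fold stratified cross-validation protocol.
# AdaBoost over decision stumps is used only as a stand-in ensemble.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # one binary classification problem

# AdaBoost's default base learner is a depth-1 decision tree (a decision stump).
clf = AdaBoostClassifier(n_estimators=100, random_state=0)

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)  # 100 accuracy values (10 repetitions x 10 folds)
print(f"mean accuracy over 10x10-fold CV: {scores.mean():.3f} +/- {scores.std():.3f}")
```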
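The two quantities the abstract relies on, ruleset complexity (number of rules and average antecedents per rule) and fidelity (agreement between the extracted rules and the underlying model's predictions), can be computed as in the sketch below. The `Rule` representation and the default-class fallback for samples covered by no rule are illustrative assumptions, not the paper's exact conventions.

```python
# Sketch of ruleset complexity and fidelity metrics, under assumed conventions.
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class Rule:
    """A symbolic rule: a conjunction of antecedents and a predicted class.
    Each antecedent is a predicate over one input vector, e.g. lambda x: x[3] <= 0.5
    (hypothetical representation of threshold tests on single attributes)."""
    antecedents: List[Callable[[Sequence[float]], bool]]
    predicted_class: int

    def covers(self, x: Sequence[float]) -> bool:
        return all(a(x) for a in self.antecedents)

def ruleset_complexity(rules: List[Rule]) -> dict:
    """Complexity as used in the abstract: number of rules and
    average number of antecedents per rule."""
    n_rules = len(rules)
    avg_antecedents = sum(len(r.antecedents) for r in rules) / n_rules if n_rules else 0.0
    return {"n_rules": n_rules, "avg_antecedents": avg_antecedents}

def fidelity(rules: List[Rule], X, model_predictions, default_class: int = 0) -> float:
    """Fraction of samples on which the ruleset's prediction agrees with the
    underlying model's prediction (uncovered samples fall back to a default
    class here; tie-breaking conventions vary by extraction method)."""
    agree = 0
    for x, y_model in zip(X, model_predictions):
        fired = [r.predicted_class for r in rules if r.covers(x)]
        y_rules = fired[0] if fired else default_class
        agree += int(y_rules == y_model)
    return agree / len(X)

# Toy example: two rules over a 2-feature input.
rules = [
    Rule([lambda x: x[0] <= 0.5, lambda x: x[1] > 1.0], predicted_class=1),
    Rule([lambda x: x[0] > 0.5], predicted_class=0),
]
X = [[0.2, 1.5], [0.8, 0.3], [0.4, 0.2]]
model_preds = [1, 0, 0]
print(ruleset_complexity(rules))        # {'n_rules': 2, 'avg_antecedents': 1.5}
print(fidelity(rules, X, model_preds))  # 1.0
```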
