In: Data Mining Workshops (ICDMW), 2008 IEEE International Conference on

Comparing Accuracies of Rule Evaluation Models to Determine Human Criteria on Evaluated Rule Sets



Abstract

In data mining post-processing, rule selection using objective rule evaluation indices is a useful method for finding valuable knowledge in mined patterns. However, the relationship between index values and experts' criteria has not been clarified. In this study, we compared the accuracies of classification learning algorithms on datasets with randomized class distributions and on real human evaluations. To determine this relationship, we used rule evaluation models, which are learned from a dataset consisting of objective rule evaluation index values and an evaluation label for each rule. The results show that, on a balanced randomized class distribution, the accuracies of classification learning algorithms differ with and without the criteria of human experts. Based on these results, we consider a way to distinguish randomly evaluated rules using the accuracies of multiple learning algorithms.
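The comparison described in the abstract — learning a rule evaluation model from objective index values paired with labels, and contrasting its accuracy on expert-style labels against randomized labels — can be sketched as follows. This is a minimal illustration on synthetic data with a simple decision-stump learner; the data, feature meanings, and the stump learner are all assumptions for illustration, not the paper's actual datasets or algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

def stump_accuracy(X, y):
    """Train/test split accuracy of the best single-index threshold stump.

    Stands in for the paper's rule evaluation models: a classifier mapping
    objective rule evaluation index values to an evaluation label.
    """
    half = len(y) // 2
    Xtr, ytr = X[:half], y[:half]
    Xte, yte = X[half:], y[half:]
    best = None
    for j in range(X.shape[1]):            # try each objective index
        thr = np.median(Xtr[:, j])
        for sign in (1, -1):               # try both threshold directions
            pred = (sign * (Xtr[:, j] - thr) > 0).astype(int)
            acc = (pred == ytr).mean()
            if best is None or acc > best[0]:
                best = (acc, j, thr, sign)
    _, j, thr, sign = best
    pred = (sign * (Xte[:, j] - thr) > 0).astype(int)
    return (pred == yte).mean()

# Hypothetical objective index values for 400 mined rules, 5 indices each.
X = rng.normal(size=(400, 5))

# Expert-like labels: driven by one index, i.e. a systematic human criterion.
y_expert = (X[:, 0] > 0).astype(int)

# Randomized labels with a balanced class distribution, no criterion at all.
y_random = rng.integers(0, 2, size=400)

acc_expert = stump_accuracy(X, y_expert)
acc_random = stump_accuracy(X, y_random)
```

Under a systematic labeling criterion the learned model's accuracy rises well above chance, while on randomized labels it stays near 50% — the gap the abstract exploits to flag randomly evaluated rule sets.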

