International conference on artificial intelligence

Relational Reinforcement Rule Induction and the Effect of Pruning



Abstract

The Covering Algorithm (CA) is a Machine Learning approach that produces a powerful knowledge repository represented as simple if-then rules. Although the field is well established for discrete data, it has deficiencies when dealing with numeric data. This paper introduces a new algorithm called RULES-CONT, which handles continuous attributes using Relational Reinforcement Learning (RRL). It is a non-discretization algorithm that deals directly with all kinds of data and transfers past experience to improve and generalize its learning. Different pruning levels are also tested with RULES-CONT, in order to study the possibility of using pruning with RRL to reduce runtime and address the speed problem. The main contribution is to propose a novel solution for rule induction and to analyse the most effective pruning level to integrate with RRL. Several experiments are presented comparing RULES-CONT with other algorithms; they are validated using 10-fold cross-validation and the Friedman test to measure the significance of the differences between the algorithms and to decide on the most suitable pruning level with RRL.
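The covering (separate-and-conquer) approach the abstract refers to learns one if-then rule at a time and removes the examples that rule covers before learning the next. A minimal sketch of such a learner over a numeric attribute follows; it is illustrative only, with an invented toy dataset, threshold search, and stopping rule, and is not the paper's RULES-CONT:

```python
# Minimal separate-and-conquer (covering) rule induction sketch.
# Illustrative only: a generic covering algorithm, NOT RULES-CONT;
# the dataset and the threshold-based condition search are made up.

def candidate_conditions(examples, attr_index):
    """Yield (attr_index, threshold, direction) tests over one numeric attribute."""
    values = sorted({x[attr_index] for x, _ in examples})
    for v in values:
        yield (attr_index, v, "<=")
        yield (attr_index, v, ">")

def covers(cond, x):
    i, t, d = cond
    return x[i] <= t if d == "<=" else x[i] > t

def learn_rules(examples, target):
    """Greedily learn if-then rules that cover the target class."""
    rules, remaining = [], list(examples)
    while any(y == target for _, y in remaining):
        best, best_acc = None, 0.0
        n_attrs = len(remaining[0][0])
        for i in range(n_attrs):
            for cond in candidate_conditions(remaining, i):
                covered = [(x, y) for x, y in remaining if covers(cond, x)]
                if not covered:
                    continue
                acc = sum(y == target for _, y in covered) / len(covered)
                if acc > best_acc:
                    best, best_acc = cond, acc
        if best is None or best_acc < 1.0:  # no pure rule found: stop
            break
        rules.append(best)  # keep the rule, discard the examples it covers
        remaining = [(x, y) for x, y in remaining if not covers(best, x)]
    return rules

# Toy numeric data: the "hot" class is separable at temperature > 22.
data = [((18.0,), "cool"), ((22.0,), "cool"), ((27.0,), "hot"), ((31.0,), "hot")]
rules = learn_rules(data, "hot")
# rules → [(0, 22.0, '>')], i.e. "if attribute 0 > 22.0 then hot"
```

Handling the continuous attribute directly through threshold tests, rather than pre-discretizing it into bins, mirrors the non-discretization property the abstract claims for RULES-CONT.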
