Expert Systems with Applications

Inducing non-orthogonal and non-linear decision boundaries in decision trees via interactive basis functions

Abstract

Decision Trees (DTs) are a machine learning technique widely used for regression and classification purposes. Conventionally, the decision boundaries of Decision Trees are orthogonal to the features under consideration. A well-known limitation of this is that the algorithm may fail to find optimal partitions, or in some cases any partitions at all, depending on the underlying distribution of the data. To remedy this limitation, several modifications have been proposed that allow for oblique decision boundaries. The objective of this paper is to propose a new strategy for generating flexible decision boundaries by means of interactive basis functions (IBFs). We show how oblique decision boundaries can be obtained as a particular case of IBFs, and in addition how non-linear decision boundaries can be induced. One attractive aspect of the strategy proposed in this paper is that training Decision Trees with IBFs does not require custom software, since the functions can be precalculated for use in any existing implementation of the algorithm. Since the underlying mechanisms remain unchanged, there is no substantial computational overhead compared to conventional trees. Furthermore, this also means that IBFs can be used in any extensions of the Decision Tree algorithm, such as evolutionary trees, boosting, and bagging. We conduct a benchmarking exercise to understand under which conditions the use of IBFs can improve model performance. In addition, we present three empirical applications that illustrate the approach in classification and regression. As part of discussing the empirical applications, we introduce a device called decision charts to facilitate the interpretation of DTs with IBFs. Finally, we conclude the paper by outlining some directions for future research. (C) 2018 Elsevier Ltd. All rights reserved.
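The abstract's central claim, that basis functions can be precalculated and fed to any existing tree implementation, can be illustrated with a minimal sketch. The specific basis function below (a pairwise product of two features) and the hand-rolled decision stump are illustrative assumptions for this page, not necessarily the paper's construction:

```python
# Illustrative sketch (assumed basis function, not the paper's exact one):
# precompute an interaction term x1*x2 as an extra column, then let a
# one-level decision stump, the building block of any conventional tree,
# make an ordinary axis-parallel split on it. A split on the precomputed
# column corresponds to a non-linear (hyperbolic) boundary in the
# original (x1, x2) feature space.
import random

random.seed(0)
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
# Class depends on the sign of the product: not separable by a single
# orthogonal split on x1 or x2 alone.
labels = [1 if x1 * x2 > 0 else 0 for x1, x2 in data]

# Precalculate the interaction basis function and append it as column 2.
augmented = [(x1, x2, x1 * x2) for x1, x2 in data]

def stump_accuracy(column, threshold):
    """Accuracy of the axis-parallel split `row[column] > threshold`."""
    preds = [1 if row[column] > threshold else 0 for row in augmented]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

print(stump_accuracy(0, 0.0))  # raw feature: near-chance accuracy
print(stump_accuracy(2, 0.0))  # interaction column: perfect split
```

Because the augmentation happens before training, the same trick plugs into any off-the-shelf tree library (and, as the abstract notes, into ensembles such as boosting and bagging) with no change to the splitting mechanism itself.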
