IEEE International Parallel and Distributed Processing Symposium Workshops

Cache-Aware Approximate Computing for Decision Tree Learning



Abstract

The memory performance of data mining applications has become crucial due to increasing dataset sizes and multi-level cache hierarchies. Decision tree learning is one of the most important algorithms in this field, and numerous researchers have worked on improving the accuracy of the learned tree models as well as the overall performance of the learning process. Most modern applications that employ decision tree learning favor building multiple models for higher accuracy, sacrificing performance in the process. In this work, we exploit the flexibility that decision tree learning based applications have in trading accuracy for performance, and propose a framework that improves performance with negligible accuracy losses. The framework employs a data access skipping module (DASM), which skips costly cache accesses according to the aggressiveness of a user-specified strategy, together with a heuristic that predicts the values of the skipped accesses to keep accuracy losses at a minimum. Our experimental evaluation shows that the proposed framework offers significant performance improvements (up to 25%) with comparatively small accuracy losses (up to 8%) over the original case. We demonstrate that the framework scales under various accuracy requirements by exploring accuracy changes over time and under different cache replacement policies. In addition, we explore NoC/SNUCA systems for similar memory performance improvement opportunities.
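The abstract describes the DASM mechanism only at a high level. The minimal sketch below illustrates the general idea under stated assumptions: a per-access skip probability stands in for the user-specified "aggressiveness" knob, and a simple last-value heuristic stands in for the paper's prediction heuristic. The function name scan_feature and its signature are hypothetical, not the paper's actual DASM interface.

```python
import random

def scan_feature(values, aggressiveness=0.25, predictor=None):
    """Hypothetical sketch of access skipping, not the paper's actual API.

    Scans a feature column while skipping a user-controlled fraction of
    the data accesses, substituting a predicted value for each skip.

    aggressiveness: fraction of accesses to skip (0.0 = exact scan).
    predictor: callable(last_seen) -> predicted value; defaults to
        reusing the last value actually read (an assumed last-value
        heuristic, used here purely for illustration).
    """
    if predictor is None:
        predictor = lambda last: last  # last-value prediction heuristic
    result = []
    last = 0.0
    for v in values:
        if random.random() < aggressiveness:
            result.append(predictor(last))  # skip the costly access
        else:
            last = v                        # real access; remember it
            result.append(v)
    return result

# Usage: higher aggressiveness trades accuracy for fewer memory accesses.
column = [float(i % 7) for i in range(20)]
approx = scan_feature(column, aggressiveness=0.3)
```

In this reading, split evaluation would consume the approximated column instead of the exact one, so raising the aggressiveness reduces memory traffic at the cost of noisier split statistics, matching the performance/accuracy tradeoff the abstract reports.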
