
Benchmarking Exploratory OLAP

Abstract

Supporting interactive database exploration (IDE) is a problem that has attracted considerable attention recently. Exploratory OLAP (On-Line Analytical Processing) is an important use case in which tools support the navigation and analysis of the most interesting data, using the best possible perspectives. While many approaches have been proposed (such as query recommendation, reuse, steering, personalization, or unexpected data recommendation), a recurring problem is how to assess the effectiveness of an exploratory OLAP approach. In this paper we propose a benchmark framework for this purpose, which relies on an extensible set of user-centric metrics that relate to the main dimensions of exploratory analysis. Specifically, we describe how to model and simulate user activity, how to formalize our metrics, and how to build exploratory tasks to properly evaluate an IDE system under test (SUT). To the best of our knowledge, this is the first proposal of such a benchmark. Our experiments are two-fold: first, we evaluate the benchmark protocol and metrics on synthetic SUTs whose behavior is well known; second, we evaluate and compare two different recent SUTs from the IDE literature with our benchmark. Finally, potential extensions towards an industry-strength benchmark are listed in the conclusion.
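
To make the evaluation protocol concrete, the following is a minimal, hypothetical sketch (not the paper's actual metrics or task definitions) of how a benchmark driver might replay a simulated user session against a SUT and compute a simple user-centric metric over the resulting exploration log. The Query/SessionLog structures, the "interesting regions" ground truth, and the precision metric are illustrative assumptions only.

    # Hypothetical sketch: simulated user session driving a SUT, plus one metric.
    import random
    from dataclasses import dataclass, field
    from typing import Callable, List, Set


    @dataclass
    class Query:
        """Stand-in for an OLAP query; a real benchmark would carry MDX or SQL."""
        cube_region: str


    @dataclass
    class SessionLog:
        """Ordered record of the queries a (simulated) user executed."""
        queries: List[Query] = field(default_factory=list)


    def simulate_session(recommend_next: Callable[[List[Query]], Query],
                         steps: int = 10) -> SessionLog:
        """Replay a simulated user: at each step the SUT proposes the next query."""
        log = SessionLog()
        for _ in range(steps):
            log.queries.append(recommend_next(log.queries))
        return log


    def precision(log: SessionLog, interesting: Set[str]) -> float:
        """Share of executed queries that reached 'interesting' cube regions."""
        if not log.queries:
            return 0.0
        hits = sum(q.cube_region in interesting for q in log.queries)
        return hits / len(log.queries)


    if __name__ == "__main__":
        regions = [f"r{i}" for i in range(20)]
        interesting = {"r3", "r7", "r12"}       # ground truth for the exploratory task
        random_sut = lambda history: Query(random.choice(regions))  # synthetic SUT
        log = simulate_session(random_sut, steps=10)
        print(f"precision of random SUT: {precision(log, interesting):.2f}")

In this spirit, a synthetic SUT with known behavior (here, a random recommender) provides a baseline against which the metric values produced by real IDE systems can be interpreted.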
