Knowledge and Information Systems

The (black) art of runtime evaluation: Are we comparing algorithms or implementations?



Abstract

Any paper proposing a new algorithm should come with an evaluation of efficiency and scalability (particularly when we are designing methods for "big data"). However, there are several (more or less serious) pitfalls in such evaluations. We would like to point the attention of the community to these pitfalls. We substantiate our points with extensive experiments, using clustering and outlier detection methods with and without index acceleration. We discuss what we can learn from evaluations, whether experiments are properly designed, and what kind of conclusions we should avoid. We close with some general recommendations but maintain that the design of fair and conclusive experiments will always remain a challenge for researchers and an integral part of the scientific endeavor.
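To make the pitfall concrete, below is a minimal benchmark sketch (assuming scikit-learn is available; it is not the authors' experimental setup, and the dataset size, eps, and min_samples values are arbitrary illustrative choices). It times the same algorithm, DBSCAN, with a brute-force neighborhood search versus an index-accelerated backend; the measured runtimes can differ substantially even though the algorithm is identical, which is exactly why such numbers compare implementations rather than algorithms.

# Illustrative sketch, not from the paper: same algorithm (DBSCAN),
# two implementations of its neighborhood queries.
import time

from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

# Synthetic data; any fixed dataset works for the comparison.
X, _ = make_blobs(n_samples=10_000, centers=5, n_features=2, random_state=0)

for backend in ("brute", "kd_tree"):  # brute force vs. index acceleration
    model = DBSCAN(eps=0.3, min_samples=10, algorithm=backend)
    start = time.perf_counter()
    model.fit(X)
    elapsed = time.perf_counter() - start
    print(f"DBSCAN with algorithm={backend!r}: {elapsed:.2f} s")

A fair comparison would additionally control for programming language, data layout, and index parameters; the abstract's point is that such confounds are easy to overlook when reporting runtimes.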
