Journal: Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery

Model selection and error estimation without the agonizing pain


Abstract

How can we select the best performing data-driven model? How can we rigorously estimate its generalization error? Statistical learning theory (SLT) answers these questions by deriving nonasymptotic bounds on the generalization error of a model or, in other words, by delivering an upper bound on the true error of the learned model based only on quantities computed from the available data. However, for a long time, SLT was regarded merely as an abstract theoretical framework, useful for inspiring new learning approaches but with limited applicability to practical problems. The purpose of this review is to give an intelligible overview of the problems of model selection (MS) and error estimation (EE), focusing on the ideas behind the different SLT-based approaches and simplifying most of the technical aspects in order to make them more accessible and usable in practice. We start from the seminal works of the 1980s, proceed to the most recent results, then discuss open problems, and finally outline future directions of this field of research.

This article is categorized under: Technologies > Statistical Fundamentals; Algorithmic Development > Statistics
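The "upper bound on the true error based only on quantities computed from the available data" that the abstract describes can be illustrated in its simplest form. For a single fixed model evaluated on n held-out samples with 0/1 loss, Hoeffding's inequality yields a nonasymptotic bound that holds with probability at least 1 − δ. This is a minimal sketch under those assumptions (the function name is illustrative; the review itself covers far sharper and more general techniques, e.g. bounds accounting for model selection over a hypothesis class):

```python
import math

def hoeffding_error_bound(empirical_error: float, n: int, delta: float = 0.05) -> float:
    """Upper bound on the true error of a *fixed* model, valid with
    probability at least 1 - delta, via Hoeffding's inequality:
        true_error <= empirical_error + sqrt(log(1/delta) / (2 n)).
    Assumes the n held-out samples are i.i.d. and the loss is in [0, 1].
    """
    return empirical_error + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

# Example: 10% empirical error on 1,000 held-out samples, 95% confidence.
bound = hoeffding_error_bound(0.10, 1000, delta=0.05)
```

Note that the bound depends only on observed data (the empirical error and the sample size) plus the chosen confidence level, which is exactly the property the abstract emphasizes; it says nothing asymptotic about n tending to infinity.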
