Bias Plus Variance Decomposition for Zero-One Loss Functions



Abstract

We present a bias-variance decomposition of expected misclassification rate, the most commonly used loss function in supervised classification learning. The bias-variance decomposition for quadratic loss functions is well known and serves as an important tool for analyzing learning algorithms, yet no decomposition was offered for the more commonly used zero-one (misclassification) loss functions until the recent work of Kong & Dietterich (1995) and Breiman (1996). Their decomposition suffers from some major shortcomings though (e.g., potentially negative variance), which our decomposition avoids. We show that, in practice, the naive frequency-based estimation of the decomposition terms is by itself biased and show how to correct for this bias. We illustrate the decomposition on various algorithms and datasets from the UCI repository.
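The decomposition the abstract describes can be sketched numerically. The snippet below is an illustrative implementation (not the authors' code; the function name `kohavi_wolpert_decomposition` is hypothetical) that estimates the bias and variance terms at a single test point from the predictions of classifiers trained on many resampled training sets, assuming a noise-free (deterministic) target so the irreducible-noise term is zero:

```python
import numpy as np

def kohavi_wolpert_decomposition(pred_labels, true_label, n_classes):
    """Estimate bias^2 and variance of a classifier at one test point x.

    pred_labels: labels predicted for x by classifiers trained on many
                 independently drawn training sets.
    true_label:  the (assumed deterministic) target label at x.

    With a noise-free target, expected zero-one loss at x decomposes as
    bias^2 + variance, and both terms are non-negative by construction.
    """
    pred_labels = np.asarray(pred_labels)
    # Frequency-based estimate of P(prediction = y | x) over training sets.
    p_hat = np.bincount(pred_labels, minlength=n_classes) / len(pred_labels)
    # Target distribution, degenerate at the true label (noise-free case).
    p_true = np.zeros(n_classes)
    p_true[true_label] = 1.0
    bias_sq = 0.5 * np.sum((p_true - p_hat) ** 2)
    variance = 0.5 * (1.0 - np.sum(p_hat ** 2))
    return bias_sq, variance
```

For example, if half of the trained classifiers predict the true class and half predict another class, the expected misclassification rate of 0.5 splits evenly into bias² = 0.25 and variance = 0.25; a classifier that always predicts the true label has both terms equal to zero.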

Bibliographic details

  • Source: Machine Learning | 1996 | pp. 275-283 | 9 pages
  • Venue: Bari (IT)
  • Authors

    Ron Kohavi; David H. Wolpert;

  • Affiliations

    Data Mining and Visualization, Silicon Graphics, Inc., 2011 N. Shoreline Blvd, Mountain View, CA 94043-1389;

    The Santa Fe Institute, 1399 Hyde Park Rd., Santa Fe, NM 87501;

  • Format: PDF
  • Language: English
  • Classification: Computer applications
