Learning with a slowly changing distribution

Abstract

In this paper, we consider the problem of learning a subset of a domain from randomly chosen examples when the probability distribution of the examples changes slowly but continually throughout the learning process. We give upper and lower bounds on the best achievable probability of misclassification after a given number of examples. If d is the VC-dimension of the target function class, t is the number of examples, and Υ is the amount by which the distribution is allowed to change (measured by the largest change in the probability of a subset of the domain), the upper bound decreases as d/t initially, and settles to O(d^{2/3} Υ^{1/2}) for large t. These bounds give necessary and sufficient conditions on Υ, the rate of change of the distribution of examples, to ensure that some learning algorithm can produce an acceptably small probability of misclassification. We also consider the case of learning a near-optimal subset of the domain when the examples and their labels are generated by a joint probability distribution on the example and label spaces. We give an upper bound on Υ that ensures learning is possible from a finite number of examples.
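The setting described above can be illustrated with a small simulation. The sketch below is not from the paper: it uses a hypothetical 1-D threshold concept, a Gaussian example distribution whose mean drifts by a small amount GAMMA per step (a stand-in for the paper's bound Υ on the per-step change of the distribution), and a learner that fits a consistent threshold to a sliding window of recent examples. It only illustrates the setup; the paper's bounds concern the best achievable error, not this particular algorithm.

```python
import random

random.seed(0)

THETA = 0.5    # fixed target concept: c(x) = 1 iff x >= THETA
GAMMA = 0.0001  # per-step drift of the example distribution's mean (stand-in for the paper's rate)
WINDOW = 200   # learner trains only on the most recent WINDOW examples

def label(x):
    return 1 if x >= THETA else 0

def fit_threshold(sample):
    """1-D ERM: any threshold between the largest negative example and the
    smallest positive example is consistent with the sample; take the midpoint."""
    negs = [x for x, y in sample if y == 0]
    poss = [x for x, y in sample if y == 1]
    lo = max(negs) if negs else 0.0
    hi = min(poss) if poss else 1.0
    return (lo + hi) / 2

# Draw a stream of labelled examples while the distribution drifts slowly.
mean = 0.3
history = []
for t in range(5000):
    x = random.gauss(mean, 0.2)
    history.append((x, label(x)))
    history = history[-WINDOW:]
    mean += GAMMA  # the distribution changes a little at every step

# Estimate the misclassification probability under the *current* distribution.
theta_hat = fit_threshold(history)
n_test = 20000
errors = sum(
    label(x) != (1 if x >= theta_hat else 0)
    for x in (random.gauss(mean, 0.2) for _ in range(n_test))
)
err = errors / n_test
print(f"estimated misclassification probability: {err:.4f}")
```

Because the drift per step is small relative to the window length, the recent window is still approximately a sample from the current distribution, so the fitted threshold stays close to THETA and the error remains small; making GAMMA large relative to 1/WINDOW breaks this, which mirrors the abstract's point that learning is possible only when Υ is sufficiently small.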
