Learning Multiple Models via Regularized Weighting

Abstract

We consider the general problem of Multiple Model Learning (MML) from data, from the statistical and algorithmic perspectives; this problem includes clustering, multiple regression, and subspace clustering as special cases. A common approach to solving new MML problems is to generalize Lloyd's algorithm for clustering (or Expectation-Maximization for soft clustering). However, this approach is unfortunately sensitive to outliers and large noise: a single exceptional point may take over one of the models. We propose a different general formulation that seeks for each model a distribution over data points; the weights are regularized to be sufficiently spread out. This enhances robustness by making assumptions on class balance. We further provide generalization bounds and explain how the new iterations may be computed efficiently. We demonstrate the robustness benefits of our approach with experimental results, and prove for the important case of clustering that our approach has a non-trivial breakdown point, i.e., it is guaranteed to be robust to a fixed percentage of adversarial unbounded outliers.
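To make the weighting idea concrete, below is a minimal sketch of one plausible instantiation for the clustering case, not the authors' exact formulation: each center is assigned its own distribution over the data points, chosen to minimize the weighted loss plus a squared-distance penalty pulling the weights toward uniform, and the paper's coupling constraint between the models' distributions is omitted. All names here (`rw_kmeans`, `project_simplex`, the regularization parameter `alpha`) are hypothetical, introduced only for illustration.

```python
# Illustrative sketch of regularized-weighting clustering (not the paper's
# exact algorithm): each center keeps a distribution over points, and the
# weights are regularized toward uniform so no single outlier can dominate.
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the simplex {w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

def rw_kmeans(X, k, alpha=0.05, iters=50, init=None, seed=0):
    """Alternate between weight updates (a simplex projection in closed
    form) and weighted-mean center updates."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    rng = np.random.default_rng(seed)
    centers = (np.array(init, dtype=float) if init is not None
               else X[rng.choice(n, size=k, replace=False)])
    uniform = np.full(n, 1.0 / n)
    W = np.empty((k, n))
    for _ in range(iters):
        for j in range(k):
            loss = ((X - centers[j]) ** 2).sum(axis=1)  # squared distances
            # argmin_w <w, loss> + alpha * ||w - uniform||^2 over the simplex
            # is the projection of (uniform - loss / (2 * alpha)).
            W[j] = project_simplex(uniform - loss / (2.0 * alpha))
            centers[j] = W[j] @ X                       # weighted-mean update
    return centers, W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    blobs = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                       rng.normal(5.0, 0.1, (50, 2))])
    X = np.vstack([blobs, [[1e6, 1e6]]])                # one unbounded outlier
    centers, W = rw_kmeans(X, k=2, init=X[[0, 50]])
    print(centers)    # remains near the two blobs at (0, 0) and (5, 5)
    print(W[:, -1])   # the outlier's weight in each model: exactly 0.0
```

In this toy version the penalty keeps each weight vector close to uniform, so a point whose loss is astronomically large is driven to zero weight and cannot drag a center away; this is, intuitively, the mechanism behind the breakdown-point guarantee mentioned in the abstract.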