Stability of Unstable Learning Algorithms

Abstract

We introduce a formalism called graphical learning algorithms and use it to produce bounds on error deviance for unstable learning algorithms. This formalism suggests a flexible class of extensions of existing algorithms whose risk decomposes into algorithmic model risk plus estimation error, in a way that enables bounds on the estimation error and analysis of the algorithmic model risk. For example, we obtain error-deviance bounds for support vector machines (SVMs) with a variable offset parameter, and estimation-error bounds for SVM variants in which the offset parameter is selected to minimize empirical risk. In addition, we prove convergence to the Bayes error for SVM variants that use a universal kernel and choose the regularization parameter to minimize empirical error. We provide experimental results suggesting that these variants may offer advantages over standard SVMs in both computation and generalization performance.
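As an illustrative sketch only (the report's precise construction is not reproduced here): the selection rule described in the abstract can be mimicked with an RBF-kernel SVM, the Gaussian RBF being a universal kernel, by fitting one model per candidate regularization parameter and keeping the one with the lowest empirical (training) risk. The scikit-learn API, the synthetic dataset, and the candidate grid below are assumptions for demonstration, not details from the report.

```python
# Illustrative sketch, not the report's exact algorithm: choose the SVM
# regularization parameter C to minimize empirical (training) risk, using
# the Gaussian RBF kernel (a universal kernel).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

best = (None, np.inf, None)  # (C, empirical risk, fitted model)
for C in np.logspace(-2, 3, 12):  # hypothetical candidate grid for C
    model = SVC(kernel="rbf", C=C).fit(X_train, y_train)
    emp_risk = np.mean(model.predict(X_train) != y_train)  # empirical 0-1 risk
    if emp_risk < best[1]:
        best = (C, emp_risk, model)

C_star, risk_star, model_star = best
print(f"selected C = {C_star:.3g} with empirical risk {risk_star:.3f}")
print(f"held-out error = {np.mean(model_star.predict(X_test) != y_test):.3f}")
```

Note that minimizing empirical risk alone tends to favor large C; the point of the report's analysis is precisely that estimation error can still be bounded for such variants, which is what makes this selection rule defensible.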
