Gaussian kernels with flexible variances provide a rich family of Mercer kernels for learning algorithms. We show that the union of the unit balls of reproducing kernel Hilbert spaces generated by Gaussian kernels with flexible variances is a uniform Glivenko-Cantelli (uGC) class. This result confirms a conjecture concerning the learnability of Gaussian kernels and verifies the uniform convergence of many learning algorithms involving Gaussians with changing variances. Rademacher averages and empirical covering numbers are used to estimate the sample errors of multi-kernel regularization schemes associated with general loss functions. It is then shown that the regularization error associated with the least square loss and Gaussian kernels can be greatly improved when flexible variances are allowed. Finally, for regularization schemes generated by Gaussian kernels with flexible variances, we present explicit learning rates for regression with least square loss and classification with hinge loss.
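For concreteness, the kernel family and function class discussed above can be written as follows; the notation here follows standard conventions and is an assumption, not the paper's own symbols:

```latex
% Gaussian (RBF) kernel with width parameter \sigma > 0 on X \subset \mathbb{R}^n:
K_\sigma(x, y) = \exp\!\left(-\frac{\|x - y\|^2}{\sigma^2}\right), \qquad x, y \in X.

% Union of the unit balls of the reproducing kernel Hilbert spaces
% \mathcal{H}_\sigma induced by K_\sigma, taken over all variances:
\mathcal{F} = \bigcup_{\sigma > 0}
  \bigl\{ f \in \mathcal{H}_\sigma : \|f\|_{\mathcal{H}_\sigma} \le 1 \bigr\}.

% The uGC property asserted for \mathcal{F}: empirical means converge to
% expectations uniformly over \mathcal{F} and over all distributions \mu,
\lim_{m \to \infty} \sup_{\mu}\;
  \Pr\Bigl[\, \sup_{f \in \mathcal{F}}
  \Bigl| \frac{1}{m}\sum_{i=1}^{m} f(x_i) - \mathbb{E}_{\mu} f \Bigr|
  > \varepsilon \,\Bigr] = 0 \quad \text{for every } \varepsilon > 0.
```

The uGC property is what makes empirical risk minimization and regularization over this multi-kernel class statistically meaningful: sample averages of any function in the class track their expectations uniformly, regardless of the underlying distribution.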