JMLR: Workshop and Conference Proceedings

To Understand Deep Learning We Need to Understand Kernel Learning



Abstract

Generalization performance of classifiers in deep learning has recently become a subject of intense study. Deep models, which are typically heavily over-parametrized, tend to fit the training data exactly. Despite this "overfitting", they perform well on test data, a phenomenon not yet fully understood. The first point of our paper is that strong performance of overfitted classifiers is not a unique feature of deep learning. Using six real-world and two synthetic datasets, we establish experimentally that kernel machines trained to have zero classification error or near-zero regression error (interpolation) perform very well on test data. We proceed to give a lower bound on the norm of zero-loss solutions for smooth kernels, showing that it increases nearly exponentially with data size; none of the existing bounds produce non-trivial results for interpolating solutions. We also show experimentally that (non-smooth) Laplacian kernels easily fit random labels, a finding that parallels results recently reported for ReLU neural networks. In contrast, fitting noisy data requires many more epochs for smooth Gaussian kernels. The similar test performance of overfitted Laplacian and Gaussian classifiers suggests that generalization is tied to the properties of the kernel function rather than to the optimization process. Some key phenomena of deep learning are thus manifested similarly in kernel methods in the modern "overfitted" regime. The combination of the experimental and theoretical results presented in this paper indicates a need for new theoretical ideas to understand the properties of classical kernel methods. We argue that progress on understanding deep learning will be difficult until the more tractable "shallow" kernel methods are better understood.
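As an illustration of the interpolation setting described in the abstract, the sketch below fits Laplacian and Gaussian kernel machines that interpolate noisy training labels exactly (by solving K a = y directly) and then reports test accuracy together with the RKHS norm of the zero-loss solution. This is a minimal, hedged example on a synthetic dataset using scikit-learn's kernel utilities; the dataset, bandwidths, and sample sizes are assumptions chosen for illustration and are not taken from the paper.

```python
# A minimal illustrative sketch (not the authors' code): fit kernel machines
# that interpolate the training labels exactly by solving K a = y, then check
# how the interpolating solutions behave on held-out data. The dataset,
# bandwidths, and sizes here are arbitrary assumptions for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics.pairwise import laplacian_kernel, rbf_kernel

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           flip_y=0.1, random_state=0)  # ~10% label noise
y = 2.0 * y - 1.0                                       # labels in {-1, +1}
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

for name, kernel in [("Laplacian", laplacian_kernel), ("Gaussian", rbf_kernel)]:
    K = kernel(X_tr, X_tr)                  # default bandwidth gamma = 1/d
    # Exact interpolation: zero training error (the jitter is only numerical).
    alpha = np.linalg.solve(K + 1e-8 * np.eye(len(K)), y_tr)
    train_acc = np.mean(np.sign(K @ alpha) == y_tr)
    test_acc = np.mean(np.sign(kernel(X_te, X_tr) @ alpha) == y_te)
    rkhs_norm = np.sqrt(alpha @ K @ alpha)  # norm of the zero-loss solution
    print(f"{name}: train acc {train_acc:.3f}, test acc {test_acc:.3f}, "
          f"solution norm {rkhs_norm:.1f}")
```

Even though roughly 10% of the training labels are flipped and fit exactly (training accuracy is 1.0 by construction), both interpolating classifiers should remain well above chance on the held-out split, mirroring the abstract's central observation; the printed solution norm is the quantity whose growth with data size the paper bounds from below.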
