
Conditional Neural Processes


Abstract

Deep neural networks excel at function approximation, yet they are typically trained from scratch for each new function. On the other hand, Bayesian methods, such as Gaussian Processes (GPs), exploit prior knowledge to quickly infer the shape of a new function at test time. Yet, GPs are computationally expensive, and it can be hard to design appropriate priors. In this paper we propose a family of neural models, Conditional Neural Processes (CNPs), that combine the benefits of both. CNPs are inspired by the flexibility of stochastic processes such as GPs, but are structured as neural networks and trained via gradient descent. CNPs make accurate predictions after observing only a handful of training data points, yet scale to complex functions and large datasets. We demonstrate the performance and versatility of the approach on a range of canonical machine learning tasks, including regression, classification and image completion.
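To make the abstract's description concrete — a neural model conditioned on observed data points and trained by gradient descent — here is a minimal sketch of a CNP for 1-D regression. This is an illustrative reconstruction, not the authors' implementation: the layer sizes, the softplus variance bound, and the names (`CNP`, `nll_loss`) are all assumptions; only the overall encode-aggregate-decode structure and the log-likelihood training objective follow the paper's description.

```python
import torch
import torch.nn as nn

class CNP(nn.Module):
    """Minimal Conditional Neural Process sketch (1-D regression).

    Encoder h embeds each (x, y) context pair, the embeddings are
    averaged into a single representation r, and decoder g maps
    (r, x_target) to a Gaussian predictive mean and std. All sizes
    here are illustrative assumptions, not values from the paper.
    """

    def __init__(self, x_dim=1, y_dim=1, r_dim=128, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, r_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(r_dim + x_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * y_dim),  # predictive mean and raw scale
        )

    def forward(self, x_ctx, y_ctx, x_tgt):
        # Embed each context pair, then aggregate with a
        # permutation-invariant mean over the context set.
        r_i = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1))  # (B, N, r_dim)
        r = r_i.mean(dim=1, keepdim=True)                      # (B, 1, r_dim)
        # Condition every target input on the shared representation r.
        r = r.expand(-1, x_tgt.shape[1], -1)
        out = self.decoder(torch.cat([r, x_tgt], dim=-1))
        mu, log_sigma = out.chunk(2, dim=-1)
        # Bounded std for numerical stability (an assumption, common in practice).
        sigma = 0.1 + 0.9 * torch.nn.functional.softplus(log_sigma)
        return mu, sigma

def nll_loss(mu, sigma, y_tgt):
    # Train by maximizing target log-likelihood under the predicted
    # Gaussians, i.e. minimizing the negative log-likelihood.
    return -torch.distributions.Normal(mu, sigma).log_prob(y_tgt).mean()

# Usage: a batch of 16 tasks, 10 context points, 50 targets.
model = CNP()
x_c, y_c = torch.randn(16, 10, 1), torch.randn(16, 10, 1)
x_t, y_t = torch.randn(16, 50, 1), torch.randn(16, 50, 1)
mu, sigma = model(x_c, y_c, x_t)
loss = nll_loss(mu, sigma, y_t)
```

The mean aggregation makes the representation invariant to the ordering of the context set, and prediction costs scale linearly in the number of context and target points, which is what lets this family scale where exact GP inference (cubic in the number of observations) does not.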
