Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets

JMLR: Workshop and Conference Proceedings

Abstract

Bayesian optimization has become a successful tool for hyperparameter optimization of machine learning algorithms, such as support vector machines or deep neural networks. Despite its success, for large datasets, training and validating a single configuration often takes hours, days, or even weeks, which limits the achievable performance. To accelerate hyperparameter optimization, we propose a generative model for the validation error as a function of training set size, which is learned during the optimization process and allows exploration of preliminary configurations on small subsets, by extrapolating to the full dataset. We construct a Bayesian optimization procedure, dubbed FABOLAS, which models loss and training time as a function of dataset size and automatically trades off high information gain about the global optimum against computational cost. Experiments optimizing support vector machines and deep neural networks show that FABOLAS often finds high-quality solutions 10 to 100 times faster than other state-of-the-art Bayesian optimization methods or the recently proposed bandit strategy Hyperband.
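
The core mechanism the abstract describes is easy to illustrate. Below is a minimal sketch, not the authors' implementation: it evaluates hyperparameter configurations only on small random subsets of the training data, fits a Gaussian process surrogate over the joint space of hyperparameter and subset fraction, and extrapolates the predicted validation error to the full dataset (fraction = 1). FABOLAS itself uses an entropy-search acquisition function and a second model of training cost, both omitted here; the SVM task, the subset fractions, and all variable names are illustrative assumptions.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Illustrative task: tune the SVM regularization parameter C on the digits data.
X, y = load_digits(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

def validation_error(log_C, fraction):
    # Train on a random subset of the given fraction; cheap when the fraction is small.
    n = max(50, int(fraction * len(X_tr)))
    idx = rng.choice(len(X_tr), size=n, replace=False)
    clf = SVC(C=10.0 ** log_C).fit(X_tr[idx], y_tr[idx])
    return 1.0 - clf.score(X_val, y_val)

# Cheap observations: random configurations evaluated on small subsets only.
obs = []
for _ in range(30):
    log_C = rng.uniform(-3.0, 3.0)
    frac = float(rng.choice([0.05, 0.1, 0.2]))
    obs.append((log_C, frac, validation_error(log_C, frac)))

# GP surrogate over (log C, subset fraction) -- a generic stand-in for the
# paper's structured kernel, which encodes how loss shrinks with more data.
Z = np.array([(c, f) for c, f, _ in obs])
errs = np.array([e for _, _, e in obs])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(Z, errs)

# Extrapolate each candidate to the full dataset (fraction = 1) and pick the
# configuration with the lowest predicted validation error there.
cands = np.linspace(-3.0, 3.0, 61)
pred = gp.predict(np.column_stack([cands, np.ones_like(cands)]))
print("predicted best log10(C) on full data:", cands[np.argmin(pred)])

Extrapolating a plain Matern kernel beyond the observed fractions is crude; the point of the paper's generative model is precisely to make that extrapolation reliable, so that only promising configurations ever pay for a full-size training run.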