
Multiobjective Optimization for Model Selection in Kernel Methods in Regression



Abstract

Regression plays a major role in many scientific and engineering problems. The goal of regression is to learn the unknown underlying function from a set of sample vectors with known outcomes. In recent years, kernel methods in regression have facilitated the estimation of nonlinear functions. However, two major (interconnected) problems remain open. The first is the bias-versus-variance trade-off: if the model used to estimate the underlying function is too flexible (i.e., high model complexity), the variance will be very large; if the model is fixed (i.e., low complexity), the bias will be large. The second problem is to define an approach for selecting the appropriate parameters of the kernel function. To address these two problems, this paper derives a new smoothing kernel criterion, which uses the roughness of the estimated function as a measure of model complexity. Then, we use multiobjective optimization to derive a criterion for selecting the parameters of that kernel. The goal of this criterion is to find a trade-off between the bias and the variance of the learned function, that is, to increase the model fit while keeping the model complexity in check. We provide extensive experimental evaluations using a variety of problems in machine learning, pattern recognition, and computer vision. The results demonstrate that the proposed approach yields smaller estimation errors compared to state-of-the-art methods.
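As a rough illustration of the fit-versus-complexity trade-off described in the abstract, the sketch below selects the bandwidth of a Gaussian kernel in kernel ridge regression by treating training error and a second-difference roughness measure as two competing objectives. Everything here is an assumption for illustration: the roughness proxy, the ridge regularizer, the scalarized selection rule, and the helper names rbf_kernel and fit_predict are not the smoothing kernel criterion or the multiobjective procedure derived in the paper.

```python
# Minimal sketch (not the paper's method): pick a Gaussian-kernel bandwidth
# by trading off data fit against a roughness measure of the estimate.
import numpy as np

def rbf_kernel(X1, X2, sigma):
    # Gaussian (RBF) kernel matrix between the rows of X1 and X2.
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def fit_predict(X, y, sigma, lam=1e-3):
    # Kernel ridge regression: alpha = (K + lam*I)^{-1} y, f(x) = k(x, X) alpha.
    K = rbf_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: rbf_kernel(Xq, X, sigma) @ alpha

# Toy 1-D data: a noisy sinusoid.
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 1.0, 60))[:, None]
y = np.sin(2.0 * np.pi * X[:, 0]) + 0.2 * rng.standard_normal(60)

# Two competing objectives per candidate bandwidth:
#   (1) data fit: mean squared training error (a bias proxy),
#   (2) roughness: mean squared second difference of the estimate on a
#       dense grid (a simple model-complexity / variance proxy).
grid = np.linspace(0.0, 1.0, 400)[:, None]
candidates = np.logspace(-2, 0, 25)
objectives = []
for sigma in candidates:
    f = fit_predict(X, y, sigma)
    fit_err = np.mean((f(X) - y) ** 2)
    roughness = np.mean(np.diff(f(grid), n=2) ** 2)
    objectives.append((fit_err, roughness))
objectives = np.array(objectives)

# Normalize both objectives and pick the best scalarized trade-off.
span = np.ptp(objectives, axis=0) + 1e-12
norm = (objectives - objectives.min(axis=0)) / span
best_sigma = candidates[np.argmin(norm.sum(axis=1))]
print(f"selected bandwidth sigma = {best_sigma:.3f}")
```

Scalarizing the two normalized objectives is only one way to pick a compromise; a full multiobjective treatment would instead examine the Pareto front of (fit, roughness) pairs, which exposes the whole bias-variance curve rather than a single point.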
