Advances in Data Analysis and Classification

About the non-convex optimization problem induced by non-positive semidefinite kernel learning

Abstract

In recent years, kernel-based methods have proved very successful on many real-world learning problems. One of the main reasons for this success is their efficiency on large data sets, which stems from the fact that kernel methods such as support vector machines (SVMs) are based on a convex optimization problem. Solving a new learning problem can thus often be reduced to choosing an appropriate kernel function and kernel parameters. However, it can be shown that even the most powerful kernel methods can still fail on quite simple data sets when the feature space induced by the chosen kernel function is not sufficient. In these cases, an explicit feature space transformation or the detection of latent variables has proved more successful. Since such explicit feature construction is often not feasible for large data sets, the ultimate goal of efficient kernel learning would be the adaptive creation of new, appropriate kernel functions. It cannot be guaranteed, however, that such a kernel function still leads to a convex optimization problem for support vector machines. Therefore, we have to enhance the optimization core of the learning method itself before we can use it with arbitrary, i.e., non-positive semidefinite, kernel functions. This article motivates the use of appropriate feature spaces and discusses the consequences, which lead to non-convex optimization problems. We show that these new non-convex SVM solvers are at least as accurate as their quadratic programming counterparts on eight real-world benchmark data sets in terms of generalization performance. They consistently outperform traditional approaches in terms of the objective value of the original optimization problem. In addition, the proposed algorithm is more generic than existing traditional solutions, since it also works for non-positive semidefinite or indefinite kernel functions.
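As a minimal illustration of where the non-convexity comes from (a sketch on toy data, not the article's own method or experiments): the negative squared Euclidean distance kernel k(x, y) = -||x - y||^2 is a standard example of a non-positive semidefinite kernel. Its Gram matrix has a zero diagonal, so its eigenvalues sum to zero; unless all points coincide, at least one eigenvalue must be negative, and plugging such a matrix into the SVM dual breaks the convexity that quadratic programming solvers rely on.

```python
import numpy as np

# Sketch: k(x, y) = -||x - y||^2 is a well-known indefinite kernel.
# Its Gram matrix K has zero diagonal, hence trace(K) = 0, so the
# eigenvalues sum to zero and (for distinct points) some are negative.

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))          # 20 toy points in R^5

# Pairwise squared Euclidean distances via broadcasting: shape (20, 20)
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = -sq_dists                             # indefinite Gram matrix

eigvals = np.linalg.eigvalsh(K)           # K is symmetric
print(f"smallest eigenvalue: {eigvals.min():.3f}")   # < 0, so K is not PSD
print(f"largest  eigenvalue: {eigvals.max():.3f}")

# With an indefinite K, the quadratic term alpha^T Q alpha in the standard
# SVM dual, where Q_ij = y_i * y_j * K_ij, is no longer guaranteed to be
# positive semidefinite; the dual ceases to be a convex QP, which is the
# situation the article's non-convex solvers address.
```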
